The SPDX License List is a list of (common) open source
licenses that can be referred to by a “short identifier”.
It has several advantages compared to the common "license header texts"
usually found in source files.
Some of the advantages:
* It is precise; there is no ambiguity due to variations in license header
text
* It is language neutral
* It is easy to machine process
* It is concise
* It is simple and can be used without much cost in interpreted
environments like JavaScript, etc.
* An SPDX license identifier is immutable.
* It provides simple guidance for developers who want to make sure the
license for their code is respected
See http://spdx.org for further reading.
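For illustration, such a header is a single comment at the top of each
source file; the identifier below is just an example, the real one must
match the file's license:
```
/* SPDX-License-Identifier: LGPL-2.0+ */
```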
Signed-off-by: Marcus Folkesson <marcus.folkesson@gmail.com>
Closes: #1439
Approved by: cgwalters
This ends up a lot better IMO. This commit is *mostly* just
`s/glnx_close_fd/glnx_autofd`, but there are also a number of hunks like:
```
- if (self->sysroot_fd != -1)
- {
- (void) close (self->sysroot_fd);
- self->sysroot_fd = -1;
- }
+ glnx_close_fd (&self->sysroot_fd);
```
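For context, a minimal sketch of the `glnx_autofd` pattern the rest of the
commit converts to (illustrative fragment; `dfd` and the path are
placeholders, and the fd is closed automatically via a cleanup attribute
when it goes out of scope):
```
/* Illustrative: no manual close() or -1 reset needed anymore */
glnx_autofd int fd = openat (dfd, "some/path", O_RDONLY | O_CLOEXEC);
if (fd < 0)
  return glnx_throw_errno_prefix (error, "openat");
/* ... use fd; it's closed automatically on scope exit ... */
```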
Update submodule: libglnx
Closes: #1259
Approved by: jlebon
Buried in this large patch is a logical fix:
```
- if (!map)
- return glnx_throw_errno_prefix (error, "mmap");
+ if (map == (void*)-1)
+ return glnx_null_throw_errno_prefix (error, "mmap");
```
This would have helped me debug another patch I was working
on. But it turns out that correctly checking for errors from
`mmap()` triggers lots of other bugs, basically because we
sometimes handle zero-length variants (in detached metadata).
When we start actually returning errors due to this, things
break. (It wasn't a problem in practice before because most
things looked at the zero size, not the data.)
Anyway, there's a bigger-picture issue here: a while ago we made
a fix to use `mmap()` for reading metadata from disk only if it
was large enough (i.e. `>16k`). But that didn't help various other
paths in the pull code and elsewhere that were doing the `mmap()`
directly.
Fix this by having a proper low-level fs helper that does "read all data from
fd+offset into GBytes", which handles the size check. Then the `GVariant` bits
are just a clean layer on top of this. (At the small cost of an additional
allocation.)
Side note: I had to remind myself, but the reason we can't just use
`GMappedFile` here is it doesn't support passing an offset into `mmap()`.
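As a rough sketch of the shape of that helper (hypothetical name and
signature, includes elided; the real version also switches to `mmap()`
above the size threshold rather than always reading):
```
static GBytes *
example_fd_readall_bytes (int fd, goffset offset, gsize len, GError **error)
{
  g_autofree guint8 *buf = g_malloc (len);
  gsize bytes_read = 0;
  while (bytes_read < len)
    {
      ssize_t n = pread (fd, buf + bytes_read, len - bytes_read,
                         offset + bytes_read);
      if (n < 0)
        {
          if (errno == EINTR)
            continue;
          return glnx_null_throw_errno_prefix (error, "pread");
        }
      if (n == 0)
        break;  /* short read at EOF; return what we have */
      bytes_read += n;
    }
  return g_bytes_new_take (g_steal_pointer (&buf), bytes_read);
}
```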
Closes: #1251
Approved by: jlebon
We added a `.dir-locals.el` in commit: 9a77017d87
There's no need to have per-file settings; with those, people might think
to add settings for other editors, which is the wrong direction.
Closes: #1206
Approved by: jlebon
If a delta happens to have zero objects, we could end up doing
a divide-by-zero when inferring endianness. In practice,
I don't think a zero-object delta can actually be generated, but
let's make sure the code is defensive all the same.
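The defensive check is just an early return before the division, along
these lines (illustrative names, not the exact ostree code):
```
/* Illustrative: nothing to infer from an empty delta */
if (total_objects == 0)
  return FALSE;
guint64 avg_object_size = total_size / total_objects;
```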
Spotted by Coverity.
Coverity CID: 1452208
Closes: #1041
Approved by: pwithnall
I added `glnx_open_anonymous_tmpfile()`, but later noticed that its
usage was really meant to be combined with `mmap()`, and we had two
copies of that pattern in the delta code. Add a helper.
(Bigger picture...how is this different from glibc's "mmap() of /dev/zero"
approach for large chunks? One advantage is the storage can be "swapped" to
`/var/tmp`, but still deleted automatically, rather than requiring swap space)
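The combined pattern looks roughly like this (illustrative fragment
assuming Linux's `O_TMPFILE`; `length` is a placeholder and error paths
are trimmed; the real helper wraps this up behind one call):
```
/* Illustrative: an anonymous tmpfile whose backing storage can live in
 * /var/tmp (and thus be paged out there), sized and mapped as scratch space. */
glnx_autofd int fd = open ("/var/tmp", O_TMPFILE | O_RDWR | O_CLOEXEC, 0600);
if (fd < 0)
  return glnx_throw_errno_prefix (error, "open(O_TMPFILE)");
if (ftruncate (fd, length) < 0)
  return glnx_throw_errno_prefix (error, "ftruncate");
void *buf = mmap (NULL, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
if (buf == MAP_FAILED)
  return glnx_throw_errno_prefix (error, "mmap");
/* the mapping stays valid even after fd is auto-closed at scope exit */
```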
Closes: #973
Approved by: jlebon
Follow-up to a previous patch that addressed a double-close; I
realized we already had a helper for doing "open dfd iter, do nothing
if we get ENOENT". Raise it to libotutil, and port all consumers.
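In caller terms the pattern looks roughly like this (sketch with a
hypothetical helper name `example_dfd_iter_init_allow_noent`; the real
helper sits in libotutil on top of libglnx's dirfd iterator):
```
gboolean exists = FALSE;
g_auto(GLnxDirFdIterator) dfd_iter = { 0, };
if (!example_dfd_iter_init_allow_noent (dfd, "some/subdir", &dfd_iter,
                                        &exists, error))
  return FALSE;
if (!exists)
  return TRUE;  /* ENOENT: nothing to iterate, not an error */
while (TRUE)
  {
    struct dirent *dent;
    if (!glnx_dirfd_iterator_next_dent (&dfd_iter, &dent, cancellable, error))
      return FALSE;
    if (dent == NULL)
      break;
    /* ... process dent->d_name ... */
  }
```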
Closes: #863
Approved by: jlebon
It's just simpler, and I'm not sure people are going to care
much about the difference by default.
We already folded the fallback sizes into the download totals, so folding in
the count makes things consistent; previously you could see e.g.
`3/3 parts, 100MB/150MB` and be confused.
Closes: #678
Approved by: giuseppe
Doing `g_variant_print (superblock)` is unreadable and not very useful,
since we show the checksums as byte arrays.
However, do show the checksums for fallback objects. This makes it easier to see
which objects are fallbacks (and inspect why).
Closes: #678
Approved by: giuseppe
I was having this thought today about making more of the OS readonly,
and ultimately if we got to the point where all ostree operations are
through the repo and sysroot dfds, we could have rpm-ostree be the
only process holding those fds open, and have a read-only bind mount
on top.
Anyway, we're not there, and likely won't be soon, but this gets us
closer to being fully fd-relative.
Closes: #628
Approved by: jlebon
This is a follow-up to a conversation on the list: in practice, if we're
backing away from summary signing, then it makes sense to remove the
special-casing for checksums in deltas around summary signatures.
This is also related to the recent change to enable GPG checking for
commits in deltas - now we have a more coherent story between the
previous pull path and deltas.
I didn't do any performance checking, and while it's slightly annoying
that we're now doing sha256 on the delta content twice (once for the
part and once per object)...sha256 is pretty fast, I think most users
are I/O bound anyway, and it'd drop even further if we started using
OpenSSL.
Closes: #612
Approved by: jlebon
If a static delta is generated between 2 commits with the same content,
then the delta will contain 1 part with no checksums. While useless,
this is a valid delta that shouldn't raise an assertion. If the delta
part has no checksums, then there are no objects to recreate and the
processing can be skipped.
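The fix amounts to an early return along these lines (illustrative
variable name):
```
/* Illustrative: a part describing no objects is valid but has nothing
 * to recreate, so just skip it instead of asserting. */
if (n_checksums == 0)
  return TRUE;
```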
Closes: #420
Approved by: cgwalters
I often want to have "idempotent" systems that iterate to a known
state. If after generating a commit, the system is interrupted, I'd
like the next run to still generate a delta. But we don't want to
regenerate if one exists, hence this option.
Closes: #375
Approved by: jlebon
Import `gs_file_enumerator_iterate()` for the next six months or
so...after RHEL 7.3 is released I'm strongly considering hard-requiring
GLib 2.46 or so.
Likely at some point we should figure out how to share more "glib
backport" code with NetworkManager at least.
Closes: #341
Approved by: jlebon
We have a lot of "allow_noent" type wrapper functions since
a common pattern is to allow files to not exist, but still
throw cleanly on other issues.
This is another instance of that, and cleans up duplicated error
handling code.
Part of this is prep for moving away from `GFile` consumers.
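The underlying pattern these wrappers capture is roughly (illustrative
fragment; `out_fd` is a placeholder, and `glnx_steal_fd()` is assumed
from libglnx):
```
/* allow_noent: ENOENT means "absent", anything else is still an error */
glnx_autofd int fd = openat (dfd, path, O_RDONLY | O_CLOEXEC);
if (fd < 0)
  {
    if (errno == ENOENT)
      {
        *out_fd = -1;  /* caller treats -1 as "not found" */
        return TRUE;
      }
    return glnx_throw_errno_prefix (error, "openat(%s)", path);
  }
*out_fd = glnx_steal_fd (&fd);
return TRUE;
```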
Closes: #319
Approved by: jlebon
This is similar to changes Krzesimir has been doing recently - we
really don't need the ergonomics of floating refs since we have
autocleanups.
We should continue to change most of our code to sink refs.
Specifically here it was pretty broken that the `_map()` API was
sinking but the other two weren't, and this broke some refactoring I
was trying to do later.
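The convention being moved to is simply to sink before returning, so
callers can treat every returned variant uniformly (illustrative, not the
exact ostree hunks):
```
/* Return a non-floating GVariant; callers can then rely on
 * g_autoptr(GVariant)/g_variant_unref() regardless of constructor. */
GVariant *v = g_variant_new ("(st)", "example-key", (guint64) 42);
return g_variant_ref_sink (v);
```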
Closes: #317
Approved by: jlebon
When analyzing a delta here, I see that due to byteswapping we get
negative compression ratios of 540%, 66%, and 28%. Let's arbitrarily
pick 20% as a threshold for detecting byteswapping.
If the average object size is greater than 4GiB, let's assume we're
dealing with opposite endianness. I'm fairly confident no one is
going to be shipping peta- or exabyte-sized ostree deltas, period.
Past the gigabyte scale you really want BitTorrent or something.
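A sketch of that heuristic (illustrative names; the real detection lives
in the delta endianness code):
```
/* Illustrative: header fields read in the wrong endianness yield absurd
 * sizes, so an "average object size" over 4GiB means: byteswap and retry. */
guint64 avg_object_size = total_size / MAX (total_objects, 1);
if (avg_object_size > (G_GUINT64_CONSTANT (4) << 30))
  {
    total_size = GUINT64_SWAP_LE_BE (total_size);
    total_objects = GUINT64_SWAP_LE_BE (total_objects);
  }
```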
xdg-app passed this a filename directly, and in this case it should be
used as-is. This regressed to always looking for "superblock" in the same
directory as the passed-in filename.
https://bugzilla.gnome.org/show_bug.cgi?id=762617
Now we display stats on the individual parts, such as the blob size
and the number of each type of opcode. Most interesting to me are
things like how many bsdiff opcodes there are vs. new objects, etc.
We had code to deal with opening/checksumming/decompressing static
deltas in a few places. I'd like to teach `ostree static-delta show`
how to display more information, and this will allow it to just use
`_ostree_static_delta_part_open()` too.
Right now though, almost all of the details of deltas are private, so
we can't do the "honest thing" and have the command line just use the
shared library.
Eventually some of this should appear in the API, but for now add
command-line support which is useful for debugging.
This is very useful for the inline-parts case, as you can then include
detached signatures in a single file representing the commit.
It is not as important for the generic pull case, as the detached
metadata is only a single small file. Additionally, the detached
metadata is not content-referenced and may change after the static
delta file was created, so we need to pull the latest version anyway.
If you pass a directory, it will look for the "superblock" child; otherwise
it will use the file as the superblock. I need this in xdg-app to be able
to install any filename as a bundle.
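The dispatch is straightforward; roughly (illustrative GFile-based sketch):
```
/* Illustrative: a directory argument means "look for the superblock child",
 * anything else is treated as the superblock file itself. */
g_autoptr(GFile) superblock = NULL;
if (g_file_query_file_type (path, G_FILE_QUERY_INFO_NONE, NULL) == G_FILE_TYPE_DIRECTORY)
  superblock = g_file_get_child (path, "superblock");
else
  superblock = g_object_ref (path);
```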
In this mode the parts are stored in the metadata of the main delta
superblock file. This can be useful if you want a single-file delta
for easy transport, or over HTTP in the case where the delta is very small.