Using error prefixing in the delta processing code allows us to use the
new code style. Also strip trailing whitespace.
Use error prefixing in a few other random places. I didn't
hunt for all of them, just testing out the new API.
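For illustration, the pattern looks roughly like this (a sketch assuming libglnx's
`glnx_prefix_error()`; the surrounding function and helper names are made up):

```c
/* Hypothetical caller showing the error-prefixing style: on failure we
 * prepend context to the propagated GError rather than formatting a new one. */
static gboolean
apply_delta_part_example (int fd, GError **error)
{
  if (!process_one_part (fd, error))   /* hypothetical helper */
    return glnx_prefix_error (error, "Processing delta part");
  return TRUE;
}
```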
Use `glnx_fchmod()`. Also note I dropped one `fchmod (tmpf, 0600)`
which is no longer necessary.
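For reference, the replacement looks roughly like this (assuming `glnx_fchmod()`
follows the usual libglnx convention of returning a gboolean and filling in a
GError; the variable names are illustrative):

```c
/* Instead of a raw fchmod() plus manual errno handling */
if (!glnx_fchmod (tmpf_fd, 0644, error))
  return FALSE;
```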
Update submodule: libglnx
Closes: #1011
Approved by: jlebon
It's more natural for a few call sites. Prep for patches to go the other
way, which in turn are prep for adding a commit filter v2 that takes `struct
stat`.
`ot_gfile_type_for_mode()` was only used in this function, so inline it here.
Closes: #974
Approved by: jlebon
There's lots of mechanically replacing `OtTmpFile` with `GLnxTmpfile`;
the biggest changes are in the commit path. Symlink commits are now
very clearly separated from regular files. Symlinks are `OtCleanupUnlinkat`,
and regular files are `GLnxTmpfile`.
The commit codepath separates those as `_ostree_repo_commit_path_final()` and
`_ostree_repo_commit_tmpf_final()`. A nice aspect of all of this is that they
both *consume* the temporary on success. This avoids an extra spurious
`unlink()` call.
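A minimal sketch of the consume-on-success pattern (the function name here is
hypothetical, and it assumes `glnx_link_tmpfile_at()` consumes the tmpfile when
it succeeds):

```c
/* Hypothetical finalizer: link the O_TMPFILE into place.  On success the
 * GLnxTmpfile is consumed, so the caller never needs a cleanup unlink(). */
static gboolean
commit_tmpf_final_sketch (GLnxTmpfile *tmpf,
                          int          dest_dfd,
                          const char  *dest_name,
                          GError     **error)
{
  if (!glnx_link_tmpfile_at (tmpf, GLNX_LINK_TMPFILE_NOREPLACE,
                             dest_dfd, dest_name, error))
    return FALSE;
  return TRUE;
}
```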
One of the biggest bits of code motion is in `commit_loose_regfile_object()`,
which no longer needs to care about symlinks. For the most part though, it's
just removing conditionals.
Update submodule: libglnx
Closes: #958
Approved by: jlebon
The `OstreeRepoContentBareCommit` struct was basically an `OtTmpFile`, so let's
make it one. I moved the "convert to `GOutputStream`" logic into the callers,
since that bit can't fail; it makes the implementation much simpler since we can
just return the result of `ot_open_tmpfile_linkable_at()`.
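The caller-side wrapping is trivial since it can't fail; roughly (variable
names are illustrative):

```c
#include <gio/gunixoutputstream.h>

/* Wrap the tmpfile fd in a stream only where one is actually needed;
 * g_unix_output_stream_new() cannot fail, so no GError plumbing here. */
g_autoptr(GOutputStream) out =
  g_unix_output_stream_new (tmpf_fd, FALSE /* don't close the fd */);
```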
Prep for `GLnxTmpfile` porting.
Closes: #957
Approved by: jlebon
This is follow-on work from previous cleanups. Basically
`stat_bare_content_object()` was the `fstatat()` logic
and `ostree_repo_read_bare_fd()` was the `openat()` implementation;
they duplicated some bits to find the object in staging, recurse
into the parent repo, etc.
Further, I wanted an internal-only version of this API which didn't allocate
`GFileInfo`/`GInputStream` but used a plain `fd` and `struct stat` to avoid
mallocs.
I think the end version here looks a lot nicer, since for example we
deduplicate the various `open()` calls across the different cases.
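A hypothetical sketch of the internal-only shape: hand back a plain fd plus
`struct stat` with no GObject allocations (the function name and exact
signature are assumptions):

```c
#include <sys/stat.h>

/* Hypothetical internal lookup: open the loose object read-only and
 * stat it through the fd; no GFileInfo/GInputStream allocations. */
static gboolean
load_bare_object_fd_sketch (int          objects_dfd,
                            const char  *loose_path,
                            int         *out_fd,
                            struct stat *out_stbuf,
                            GError     **error)
{
  glnx_autofd int fd = -1;
  if (!glnx_openat_rdonly (objects_dfd, loose_path, FALSE, &fd, error))
    return FALSE;
  if (fstat (fd, out_stbuf) < 0)
    return glnx_throw_errno_prefix (error, "fstat");
  *out_fd = glnx_steal_fd (&fd);
  return TRUE;
}
```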
Closes: #952
Approved by: jlebon
Flatpak make check is failing when applying a static delta
to a bare-user-only repo due to an assert. The fix is to add
bare-user-only to the assert check.
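The change amounts to widening the assertion to accept the new mode; roughly
(the exact assertion in the code may differ):

```c
/* bare-user-only repos go through the same codepath as bare-user here */
g_assert (repo->mode == OSTREE_REPO_MODE_BARE_USER ||
          repo->mode == OSTREE_REPO_MODE_BARE_USER_ONLY);
```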
Closes: #940
Approved by: giuseppe
This is a follow-up to the conversation on the list - in practice, if we're
backing away from summary signing, then it makes sense to remove the
special casing for checksums in deltas around summary signatures.
This is also related to the recent change to enable GPG checking for
commits in deltas - now we have a more coherent story between the
previous pull path and deltas.
I didn't do any performance checking, and while it's slightly annoying
that we're now doing sha256 on the delta content twice (once for the
part and once per object)...sha256 is pretty fast, I think most users
are I/O bound anyway, and it'd drop even further if we started using
openssl.
Closes: #612
Approved by: jlebon
If a static delta is generated between 2 commits with the same content,
then the delta will contain 1 part with no checksums. While useless,
this is a valid delta that shouldn't raise an assertion. If the delta
part has no checksums, then there are no objects to recreate and the
processing can be skipped.
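The skip is just an early return; roughly (variable name illustrative):

```c
/* A delta part with zero object checksums has nothing to recreate */
if (n_checksums == 0)
  return TRUE;
```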
Closes: #420
Approved by: cgwalters
When reworking the ostree core [to use O_TMPFILE](https://github.com/ostreedev/ostree/pull/369),
I hit an issue in the way the untrusted delta codepath ends up trying
to re-open the file to checksum it. That's not possible with
`O_TMPFILE` since the fd (which we opened `O_WRONLY`) is the only
accessible reference to the content.
Fix this by changing the delta processing code to update a checksum as
we're doing writes, which is also faster, and ends up simplifying the
code as well.
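A minimal sketch of checksumming at write time with GLib's `GChecksum` (the
helper name and write loop are simplified assumptions):

```c
#include <errno.h>
#include <unistd.h>
#include <glib.h>

/* Hash the bytes as they pass through, so we never need to re-open the
 * O_TMPFILE fd to checksum the content afterwards. */
static gboolean
write_and_checksum_sketch (int fd, const guint8 *buf, gsize len,
                           GChecksum *checksum, GError **error)
{
  g_checksum_update (checksum, buf, len);
  while (len > 0)
    {
      ssize_t n = write (fd, buf, len);
      if (n < 0)
        {
          if (errno == EINTR)
            continue;
          return glnx_throw_errno_prefix (error, "write");
        }
      buf += n;
      len -= (gsize)n;
    }
  return TRUE;
}
```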
What would be an even larger simplification here is if we e.g. used a
separate thread calling `write_object()` or something like that; the
main issue I see there is somehow bridging the fact that that function
wants a `GInputStream*` while the delta code generates a stream of
writes.
Closes: #392
Approved by: jlebon
⚠️ There is a notable spiked pit trap here around
`posix_fallocate()` and `errno`. This has bitten other projects,
see e.g.
7bb87460e6
Otherwise the port was straightforward.
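For reference, the trap is that `posix_fallocate()` returns the error code
directly rather than setting `errno`, so errno-based error reporting silently
picks up stale values. A sketch of the safe pattern (error helper assumed):

```c
#include <fcntl.h>
#include <errno.h>

/* posix_fallocate() does NOT set errno; it returns the error number */
int r = posix_fallocate (fd, 0, size);
if (r != 0)
  {
    errno = r;
    return glnx_throw_errno_prefix (error, "fallocate");
  }
```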
Now we display stats on the individual parts, such as the blob size
and the number of each type of opcode. Most interesting to me are
things like how many bsdiff opcodes there are vs. new objects, etc.
We had code to deal with opening/checksumming/decompressing static
deltas in a few places. I'd like to teach `ostree static-delta show`
how to display more information, and this will allow it to just use
`_ostree_static_delta_part_open()` too.
Also renames OstreeRepoTrustedContentBareCommit to
OstreeRepoContentBareCommit so that it can be used by both.
This will be needed when we introduce checksum verification of objects
in static deltas.
There is already a check that the destination object does not
exist in all other cases when processing an incoming static delta.
However, the bspatch case would still try to run and fail. Add
an analogous check to that case as well.
https://bugzilla.gnome.org/show_bug.cgi?id=756260
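A sketch of the analogous check using the public `ostree_repo_has_object()`
API (the surrounding variable names are illustrative):

```c
/* Skip the bspatch work entirely if the destination object already exists,
 * matching what the other delta-processing cases already do. */
gboolean have_obj = FALSE;
if (!ostree_repo_has_object (repo, OSTREE_OBJECT_TYPE_FILE, checksum,
                             &have_obj, cancellable, error))
  return FALSE;
if (have_obj)
  return TRUE;   /* nothing to do */
```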
This does an rsync-style prepared delta basically. On my test data,
it shaves ~6MB of uncompressed data. Not a huge amount, but I expect
this to be more useful for things like binaries which embed data, etc.
There's still some silliness here, but there is now only one opcode,
open-splice-and-close, which writes a single chunk from the payload.
This is really all we need for metadata, and small content objects are
also fine with this.
We get some deduplication between content objects by creating a
dictionary for (uid,gid,mode) tuples and xattrs.
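A hypothetical sketch of that kind of interning, keyed by a serialized `(uuu)`
GVariant (the actual delta format and table layout may differ); the hash table
is assumed to be created with `g_variant_hash`/`g_variant_equal`:

```c
/* Return the index of the (uid,gid,mode) tuple, adding it on first use */
static guint
intern_mode_tuple_sketch (GHashTable *modes, GPtrArray *mode_list,
                          guint32 uid, guint32 gid, guint32 mode)
{
  g_autoptr(GVariant) key =
    g_variant_ref_sink (g_variant_new ("(uuu)", uid, gid, mode));
  gpointer existing;
  if (g_hash_table_lookup_extended (modes, key, NULL, &existing))
    return GPOINTER_TO_UINT (existing);

  guint idx = mode_list->len;
  g_ptr_array_add (mode_list, g_variant_ref (key));
  g_hash_table_insert (modes, g_variant_ref (key), GUINT_TO_POINTER (idx));
  return idx;
}
```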
This still keeps the operation/payload code in, so we could do
rollsums in a future update easily.
We have a chain of checksums from the root up until here. While doing
checksums of the objects individually would be a good redundancy check
for test cases and the like, when doing a pull there's no good reason
to burn cycles on SHA256.
This caused deadlocks and/or EMFILE due to the interaction between
threads and fds. What we really want here is a better pull-based
model for parsing content objects.
Another idea would be to change static deltas so that content objects
have a special opcode that includes their metadata first, and then do
rollsums etc. only over actual content.
This avoids having too many files open at the same time, which could
cause an EMFILE error.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
(cherry picked from commit bc092b06f0e34e93f7d6102957bf55fd7ffd1b9e)
This has a very basic level of functionality (deltas can be generated,
and applied offline). There is only some stubbed-out pull code to
fetch them via HTTP.
But, better to commit this now and improve it from a known starting
point, rather than have it languish in a branch.