This may not be the best idea for general usage, but the only current use
case for metalinks is fetching a summary file, and those are pretty small.
It's far more convenient to return the file content in a GBytes.
The state machine's "passthrough_previous" field never got set, so the
machine was put back into the wrong state after a passthrough phase.
A couple of other minor issues around error handling are fixed as well.
The global keyring directory (trusted.gpg.d) is deprecated. Only use it
when a specified remote does NOT have its own keyring, or when verifying
local repository objects.
Note that, because mixing in the global keyring directory is now an explicit
choice, OstreeGpgVerifier no longer needs to implement GInitableIface.
We need to check that it's of type 'ay'. Also, reuse the existing validation
function to check that it's 32 bytes, rather than potentially crashing with
an assertion.
Just noticed this during a code review.
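Roughly, the defensive check looks like this - a minimal sketch where the
function and variable names are illustrative, assuming libostree's existing
ostree_validate_structureof_csum_v() is the 32-byte validator being reused:

```c
#include <ostree.h>

/* Illustrative: validate an untrusted checksum variant instead of
 * asserting on its type or size. */
static gboolean
validate_csum_variant (GVariant *csum_v, GError **error)
{
  /* First make sure we were really handed a byte array ('ay')... */
  if (!g_variant_is_of_type (csum_v, G_VARIANT_TYPE ("ay")))
    {
      g_set_error (error, G_IO_ERROR, G_IO_ERROR_INVALID_DATA,
                   "Expected checksum of type 'ay', got '%s'",
                   g_variant_get_type_string (csum_v));
      return FALSE;
    }

  /* ...then reuse the existing validator, which sets a GError if the
   * array isn't exactly 32 bytes rather than tripping an assertion. */
  return ostree_validate_structureof_csum_v (csum_v, error);
}
```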
I haven't done a full dig through the history, but it seems quite
possible that right now we've been relying on inode enumeration
order when generating bootloader configuration.
Most of the time, newer inodes (i.e. later written files) will win.
But that's obviously not reliable.
Fix this by sorting the returned configuration internally.
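A minimal sketch of the idea, assuming the parsed entries end up in a
GPtrArray of file names and that sorting by name is an acceptable stable
order (the real comparison key may differ):

```c
#include <glib.h>
#include <string.h>

/* Illustrative: order bootloader config entries by name so the result no
 * longer depends on readdir()/inode enumeration order. */
static gint
compare_config_names (gconstpointer a, gconstpointer b)
{
  const char *name_a = *(const char * const *) a;
  const char *name_b = *(const char * const *) b;
  return strcmp (name_a, name_b);
}

static void
sort_boot_configs (GPtrArray *config_names)
{
  g_ptr_array_sort (config_names, compare_config_names);
}
```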
If a remote keyring does not already exist, create an empty pubring.gpg
file in the temporary directory prior to importing keys. This prevents
gpg2 from creating a pubring.kbx file in the new keybox format [1]. We
want to stay with the older keyring format, since its performance issues
are not relevant here.
[1] https://gnupg.org/faq/whats-new-in-2.1.html#keybox
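The workaround is just a matter of touching the file before gpg2 runs - a
minimal sketch, where the temporary GPG home path is illustrative:

```c
#include <glib.h>

/* Illustrative: pre-create an empty pubring.gpg so gpg2 imports into the
 * old keyring format instead of creating a pubring.kbx keybox. */
static gboolean
ensure_empty_pubring (const char *tmp_gpg_home, GError **error)
{
  g_autofree char *pubring_path =
    g_build_filename (tmp_gpg_home, "pubring.gpg", NULL);

  if (g_file_test (pubring_path, G_FILE_TEST_EXISTS))
    return TRUE;

  /* Writing zero bytes just creates the empty file. */
  return g_file_set_contents (pubring_path, "", 0, error);
}
```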
External daemons like rpm-ostree want push notifications any time a
change is made by an external entity. inotify provides notifications,
but the problem is that there's no easy way to monitor all of the refs.
In the past, there has been discussion of opt-in recursive timestamps:
https://lkml.org/lkml/2013/4/5/307
But in today's world, let's just bump the mtime on the repo itself, as
a central inotify point.
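The mechanism is nothing more than touching the repository directory after a
change lands - a minimal sketch, assuming an open directory fd for the repo
is at hand:

```c
#include <sys/stat.h>

/* Illustrative: bump the mtime of the repository directory itself so a
 * single inotify watch on that directory sees every change. */
static int
touch_repo_mtime (int repo_dfd)
{
  /* Passing NULL sets both atime and mtime to the current time. */
  return futimens (repo_dfd, NULL);
}
```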
Closes: https://github.com/GNOME/ostree/pull/111
Imports one or more GPG keys from a source stream or from the user's
personal keyring into a remote-specific keyring. The keys to import
can optionally be restricted by a list of key IDs.
The imported keys are used to conduct GPG verification when pulling
from the given remote.
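A minimal usage sketch, assuming the new call is
ostree_repo_remote_gpg_import() with roughly this shape (the remote name and
key ID below are examples):

```c
#include <ostree.h>

/* Illustrative: import keys for "myremote" from a stream, restricted to a
 * single key ID. */
static gboolean
import_remote_keys (OstreeRepo    *repo,
                    GInputStream  *key_stream,
                    GCancellable  *cancellable,
                    GError       **error)
{
  const char * const key_ids[] = { "0123456789ABCDEF", NULL };
  guint n_imported = 0;

  if (!ostree_repo_remote_gpg_import (repo, "myremote", key_stream,
                                      key_ids, &n_imported,
                                      cancellable, error))
    return FALSE;

  g_print ("Imported %u GPG key(s)\n", n_imported);
  return TRUE;
}
```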
The blocking locking API wasn't sufficient for use in the rpm-ostree
daemon; it really wants to know whether the lock is held, continue to
do other things in the meantime (like servicing DBus requests), and get
a notification when the lock becomes available.
We also add an async variant that can be called when the lock is not
immediately available.
Implement a higher level "loop until lock is available" method in the
`ostree admin` commandline.
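The daemon-side pattern looks roughly like this - a minimal sketch assuming
the new entry points are ostree_sysroot_try_lock() and
ostree_sysroot_lock_async()/_finish():

```c
#include <ostree.h>

static void
on_lock_acquired (GObject *source, GAsyncResult *result, gpointer user_data)
{
  g_autoptr(GError) error = NULL;

  if (!ostree_sysroot_lock_finish (OSTREE_SYSROOT (source), result, &error))
    g_warning ("Failed to acquire sysroot lock: %s", error->message);
  /* ...otherwise proceed with the deployment work... */
}

/* Illustrative: try the lock without blocking; if it's busy, keep running
 * the main loop (servicing DBus, etc.) and get called back when it frees. */
static gboolean
acquire_sysroot_lock (OstreeSysroot *sysroot, GError **error)
{
  gboolean acquired = FALSE;

  if (!ostree_sysroot_try_lock (sysroot, &acquired, error))
    return FALSE;

  if (!acquired)
    ostree_sysroot_lock_async (sysroot, NULL, on_lock_acquired, NULL);

  return TRUE;
}
```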
This originally was a way that we detected the case where a pull was
interrupted. Later, we added `.commitpartial` files which also cover
this case.
See also https://github.com/GNOME/ostree/pull/85
We still want to honor their existence (and unlink them) in case an
old version of ostree was in use, but I believe it's safe to stop
creating them now.
The only case where this would break is if you have a version of
ostree that predates commitpartial in your rollback history, but such
old versions are no longer in use by the operating systems I support,
at least.
Closes: https://github.com/GNOME/ostree/pull/100
An OSTree user noticed that `ostree fsck` would produce `missing
object` errors in the case of interrupted pulls.
It's possible to do e.g. `ostree pull --subpath=/usr/share/rpm ...`,
which gets you just that portion of the commit. The use case for this
was being able to see what changes would appear in an update before
actually downloading all of it.
(I think this would be better covered by static deltas, but those
aren't final yet, and `--subpath` predates them.)
Further, `.commitpartial` is the successor to the `transaction`
symlink: when a pull is interrupted, it gives us more precise knowledge
of where we need to resume scanning.
So it makes sense for `ostree fsck` to be aware of it.
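The check itself is tiny - a minimal sketch, assuming the marker lives at
state/$checksum.commitpartial inside the repo (the exact layout is an
assumption here):

```c
#include <sys/stat.h>
#include <glib.h>

/* Illustrative: test for the partial-commit marker relative to an open
 * repository directory fd. */
static gboolean
commit_is_partial (int repo_dfd, const char *checksum)
{
  g_autofree char *marker =
    g_strdup_printf ("state/%s.commitpartial", checksum);
  struct stat stbuf;

  return fstatat (repo_dfd, marker, &stbuf, 0) == 0;
}
```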
First, this is just a general continuation of the `GFile -> openat`
transition.
Second, it's preparatory work for fsck to gain awareness of partial
commits.
If a system administrator happens to type `ostree admin upgrade`
multiple times, currently that will lead to a potentially corrupted
system.
I originally attempted to do locking *internally* in `libostree`, but
that didn't work out because currently a number of the commands
perform multi-step operations that all need to be serialized. All of
the current code in `ostree admin deploy` is an example.
Therefore, allow callers to perform locking, as most of the higher
level logic is presently implemented there.
At some point, we can revisit having internal locking, but it will be
difficult. A more likely approach would be similar to Java's approach
with concurrency on iterators - a "fail fast" method.
Always request detached metadata for commit objects, even if we already
have the commit object. This ensures we fetch any post facto detached
metadata updates such as new GPG signatures.
https://bugzilla.gnome.org/748220
These fsyncs were added for what turned out to be a fairly bogus
reason; I was hitting read errors from extlinux after upgrades and out
of conservatism tried adding fsync calls, but the *actual* problem
was that extlinux didn't support 64-bit ext4. Now that, at least for
Project Atomic hosts, we're just targeting grub2, we can drop these
fsync calls and rely on `syncfs()`, which is both faster and catches
any errors.
For some sort of crazy reason, the `sync()` system call doesn't
actually return an error code, even though from what I can tell in the
kernel it wouldn't be terribly hard to add.
Regardless though, it is better for userspace apps to use `syncfs()`
to avoid flushing filesystems unrelated to what they want to sync. In
the case of OSTree, this does matter - for example you might have a
network mount point backing your database, and we don't want to block
upgrades on syncing it.
This change is safe because we're doing syncfs in *addition* to the
previous global `sync()` (a revision from an earlier patch).
Now because OSTree only touches the `/` mount point which covers the
repository, the deployment roots (including their copy of `/etc`), as
well as `/boot`, we should at some point later be able to drop the
`sync()` call. Note that on initial system installs we do relabel
`/var` but that shouldn't happen at ostree time - any new directories
are taken care of via `systemd-tmpfiles` on boot.
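For reference, the syncfs() usage amounts to something like this minimal
sketch (the sysroot path handling is illustrative):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Illustrative: sync only the filesystem that holds the sysroot, rather
 * than every mounted filesystem as sync() would. */
static int
sync_sysroot_fs (const char *sysroot_path)
{
  int fd = open (sysroot_path, O_RDONLY | O_DIRECTORY | O_CLOEXEC);
  int ret;

  if (fd < 0)
    return -1;

  /* Unlike sync(), syncfs() actually reports I/O errors for this fs. */
  ret = syncfs (fd);
  (void) close (fd);
  return ret;
}
```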
This way external programs like rpm-ostree can do fd-relative
operations on the deployment directories, like inspecting the RPM
database.
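A minimal sketch of the fd-relative access this enables, assuming the
accessors are ostree_sysroot_get_fd() and
ostree_sysroot_get_deployment_dirpath() (which returns a path relative to
the sysroot fd):

```c
#include <fcntl.h>
#include <ostree.h>

/* Illustrative: open a deployment's root directory relative to the sysroot
 * fd; an external program could then look at e.g. usr/share/rpm inside it. */
static int
open_deployment_dfd (OstreeSysroot *sysroot, OstreeDeployment *deployment)
{
  g_autofree char *dirpath =
    ostree_sysroot_get_deployment_dirpath (sysroot, deployment);

  return openat (ostree_sysroot_get_fd (sysroot), dirpath,
                 O_RDONLY | O_DIRECTORY | O_CLOEXEC);
}
```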
Closes: https://github.com/GNOME/ostree/pull/91
Rather than returning a new OstreeRepo instance in each call to
ostree_sysroot_get_repo(), cache one internally so the same instance
is returned each time.
Emitted during a pull operation upon GPG verification (if enabled).
Applications can connect to this signal to output the verification
results if desired.
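Roughly, an application would hook it up like this - a minimal sketch, where
the signal name ("gpg-verify-result") and the callback arguments reflect my
reading of the new API and may differ:

```c
#include <ostree.h>

/* Illustrative callback: summarize the verification result per commit. */
static void
on_gpg_verify_result (OstreeRepo            *repo,
                      const char            *checksum,
                      OstreeGpgVerifyResult *result,
                      gpointer               user_data)
{
  g_print ("Commit %s: %u of %u signature(s) valid\n", checksum,
           ostree_gpg_verify_result_count_valid (result),
           ostree_gpg_verify_result_count_all (result));
}

/* Connect before starting the pull. */
static void
watch_verification (OstreeRepo *repo)
{
  g_signal_connect (repo, "gpg-verify-result",
                    G_CALLBACK (on_gpg_verify_result), NULL);
}
```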
Do not write directly to the summary file; use a temporary file first.
This avoids creating an empty summary file if "ot_util_variant_save"
fails.
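The pattern is the usual write-to-temp-then-rename - a minimal sketch,
independent of the exact helper used in the tree:

```c
#include <errno.h>
#include <glib.h>
#include <glib/gstdio.h>

/* Illustrative: serialize to "<summary>.tmp" first and rename into place
 * only on success, so a failed write can't leave an empty summary behind. */
static gboolean
save_summary_atomically (const char *summary_path,
                         GVariant   *summary,
                         GError    **error)
{
  g_autofree char *tmp_path = g_strconcat (summary_path, ".tmp", NULL);

  if (!g_file_set_contents (tmp_path,
                            g_variant_get_data (summary),
                            g_variant_get_size (summary),
                            error))
    return FALSE;

  if (g_rename (tmp_path, summary_path) != 0)
    {
      g_set_error (error, G_IO_ERROR, g_io_error_from_errno (errno),
                   "rename(%s): %s", tmp_path, g_strerror (errno));
      return FALSE;
    }

  return TRUE;
}
```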
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
I was looking at https://bugzilla.gnome.org/show_bug.cgi?id=738954
which wants us to ensure we chown() the refs. As part of that,
I did a generic conversion to the `*at()` APIs (which naturally give
us more low-level control, so we can call fchown() etc.).
This patch also sneaks in a change to respect the repo's
`disable_fsync` flag - if fsync is disabled, then we never
`fdatasync()` (unlike the `g_file_replace_contents()` default). Also
unlike it, if fsync is enabled we *always* sync, even if the file
didn't previously exist.
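The resulting write path looks roughly like this minimal sketch (error
details and short-write handling elided; a real implementation would loop on
write()):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <glib.h>

/* Illustrative: write the ref to a temp name relative to dfd, fdatasync()
 * only when the repo has fsync enabled, then renameat() over the real ref.
 * Returns 0 on success, -1 with errno set on failure. */
static int
write_ref_at (int dfd, const char *refpath, const char *checksum,
              gboolean fsync_enabled)
{
  g_autofree char *tmppath = g_strconcat (refpath, ".tmp", NULL);
  int fd = openat (dfd, tmppath,
                   O_WRONLY | O_CREAT | O_TRUNC | O_CLOEXEC, 0644);
  gboolean ok;

  if (fd < 0)
    return -1;

  /* An fchown()/fchmod() on fd could happen here, which is the kind of
   * low-level control the *at() conversion buys us. */
  ok = write (fd, checksum, strlen (checksum)) >= 0 &&
       write (fd, "\n", 1) >= 0 &&
       (!fsync_enabled || fdatasync (fd) == 0);
  if (close (fd) != 0)
    ok = FALSE;

  if (!ok)
    {
      (void) unlinkat (dfd, tmppath, 0);
      return -1;
    }

  return renameat (dfd, tmppath, dfd, refpath);
}
```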
This will be used by rpm-ostree to unset the immutable bit temporarily
in order to do package layering. We could add an API to deploy a tree
without the immutable bit, but this is simpler.
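A minimal usage sketch, assuming the new call is
ostree_sysroot_deployment_set_mutable():

```c
#include <ostree.h>

/* Illustrative: temporarily clear the immutable bit on a deployment root,
 * modify it (e.g. layer packages), then restore it. */
static gboolean
with_mutable_deployment (OstreeSysroot     *sysroot,
                         OstreeDeployment  *deployment,
                         GCancellable      *cancellable,
                         GError           **error)
{
  if (!ostree_sysroot_deployment_set_mutable (sysroot, deployment, TRUE,
                                              cancellable, error))
    return FALSE;

  /* ...modify the deployment root here... */

  return ostree_sysroot_deployment_set_mutable (sysroot, deployment, FALSE,
                                                cancellable, error);
}
```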
rpm-ostree currently uses ostree_repo_checkout_tree(), which as a side
effect will use the uncompressed objects cache by default. This is
rather annoying if you're using rpm-ostree on a server-side
repository, because if you then rsync the repo, you'll be syncing out
the uncompressed objects unless you exclude them.
We added the ability to disable the uncompressed cache in the
repository config to fix this, but it's better to allow application
control over it. The uncompressed cache will also become opt-in in
some future version.
This new API further:
- Drops the `GFile` usage in favor of `openat` APIs
- Improves ergonomics by avoiding callers having to query the source
`GFileInfo` (and carry around a copy of `OSTREE_GIO_FAST_QUERYINFO`)
- Has a more extensible options structure
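A minimal usage sketch; the ostree_repo_checkout_tree_at() name and the
option fields shown (checkout mode, uncompressed-cache control) are
assumptions based on the description above and may not match the final API
exactly:

```c
#include <ostree.h>

/* Illustrative: check out a commit into "checkout" under an already-open
 * destination directory fd, without populating the uncompressed cache. */
static gboolean
checkout_commit_at (OstreeRepo   *repo,
                    int           destination_dfd,
                    const char   *commit_checksum,
                    GCancellable *cancellable,
                    GError      **error)
{
  OstreeRepoCheckoutOptions options = { 0, };

  options.mode = OSTREE_REPO_CHECKOUT_MODE_NONE;
  options.enable_uncompressed_cache = FALSE;

  return ostree_repo_checkout_tree_at (repo, &options,
                                       destination_dfd, "checkout",
                                       commit_checksum,
                                       cancellable, error);
}
```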
Per the comment, I rather crudely have the `ostree checkout` builtin
call both APIs to ensure some testing coverage.
However, I'd like to in the future have easier-to-set-up testing code
that calls `libtest.sh` to set up dummy data.
It's valid for the remote server to say 200 OK and give us the entire
file instead of a 206 Partial Content, and in that case we should blow
away the previous cached data, rather than blindly appending to it and
thus creating multiple copies of the data inside the file.
This problem primarily occurs when we do have the complete file, but we
were interrupted and try again, and the new process didn't record that
the download was already complete. We do a range request for bytes
past the end, and some web servers (e.g. Akamai) will return 200 OK
with the whole content again, rather than a 416 Requested Range Not
Satisfiable.
Thus we could also fix this with a saner caching strategy - since we know
the file is complete, rename it again to $checksum.done or something
before it's processed. (Or really, rework how we do caching more
intelligently in general.)
This fixes the issue where interrupted pulls failed with such
web servers, although repeated attempts would eventually succeed
because we'd unlink files that failed to pull.
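The fix is essentially a status-code check before writing - a minimal sketch
against libsoup, where the fd and range-tracking details are illustrative:

```c
#include <libsoup/soup.h>
#include <unistd.h>

/* Illustrative: if we asked for a byte range but got 200 OK (the full body)
 * instead of 206 Partial Content, discard the cached partial data rather
 * than appending a second copy to it. */
static int
prepare_output_fd (SoupMessage *msg, int out_fd, gboolean requested_range)
{
  if (requested_range && msg->status_code == SOUP_STATUS_OK)
    {
      if (ftruncate (out_fd, 0) != 0)
        return -1;
      if (lseek (out_fd, 0, SEEK_SET) < 0)
        return -1;
    }
  return 0;
}
```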
Related: https://bugzilla.redhat.com/show_bug.cgi?id=1207292
If the starting index is beyond the end of the list, it's a programming
error. Previously, the code was trying to raise a runtime error, but
actually causing a segfault.
This was detected by test code in test-mutable-tree.c, which is removed
in this commit because it should no longer be possible to crash here.
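In other words, the guard becomes a precondition check rather than a GError -
a minimal sketch with illustrative names:

```c
#include <glib.h>

/* Illustrative: an out-of-range start index is a caller bug, so fail the
 * precondition instead of segfaulting or reporting a runtime error. */
static gboolean
walk_from (GPtrArray *split_path, guint start)
{
  g_return_val_if_fail (start < split_path->len, FALSE);

  for (guint i = start; i < split_path->len; i++)
    g_print ("component: %s\n", (const char *) split_path->pdata[i]);

  return TRUE;
}
```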
https://bugzilla.gnome.org/747032
Uses OstreeGpgVerifyResult to catch duplicate signatures.
If the commit has already been signed with the given GPG key ID, fail
with a G_IO_ERROR_EXISTS error code.
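A minimal sketch of the duplicate check, assuming OstreeGpgVerifyResult
grows a lookup-by-key-ID call along the lines of
ostree_gpg_verify_result_lookup():

```c
#include <ostree.h>

/* Illustrative: refuse to sign again if the key ID already appears among
 * the commit's existing signatures. */
static gboolean
check_not_already_signed (OstreeGpgVerifyResult *result,
                          const char            *key_id,
                          GError               **error)
{
  guint signature_index;

  if (ostree_gpg_verify_result_lookup (result, key_id, &signature_index))
    {
      g_set_error (error, G_IO_ERROR, G_IO_ERROR_EXISTS,
                   "Commit is already signed with GPG key %s", key_id);
      return FALSE;
    }

  return TRUE;
}
```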
Wraps a referenced gpgme_verify_result_t so detailed verify results
can be examined independently of executing a verify operation.
_ostree_gpg_verifier_check_signature() now returns this object instead
of a single valid/invalid boolean, but the idea is for OstreeRepo to also
return this object for commit signature verification so it can be utilized
at the CLI layer (and possibly by other programs).
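For example, a consumer could walk the result like this - a minimal sketch
whose attribute-based accessor reflects my reading of the new object's API
and may differ in detail:

```c
#include <ostree.h>

/* Illustrative: report the validity and fingerprint of each signature in
 * the wrapped verify result. */
static void
report_signatures (OstreeGpgVerifyResult *result)
{
  guint n_sigs = ostree_gpg_verify_result_count_all (result);

  for (guint i = 0; i < n_sigs; i++)
    {
      OstreeGpgSignatureAttr attrs[] = {
        OSTREE_GPG_SIGNATURE_ATTR_VALID,
        OSTREE_GPG_SIGNATURE_ATTR_FINGERPRINT,
      };
      g_autoptr(GVariant) info =
        ostree_gpg_verify_result_get (result, i, attrs, G_N_ELEMENTS (attrs));
      gboolean valid;
      const char *fingerprint;

      g_variant_get (info, "(b&s)", &valid, &fingerprint);
      g_print ("Signature %u: %s (%s)\n", i,
               valid ? "valid" : "invalid", fingerprint);
    }
}
```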
Similar to c2b01ad. For some reason I was thinking the commit data
still needed to be written to disk prior to verifying, but it's just
another artifact of spawning gpgv2 (predates using GPGME).
Makes for a nice cleanup in fetch_metadata_to_verify_delta_superblock()
as well.
In anticipation of API enhancements for GPG signature verification, which
would otherwise require a non-functional stub version were GPGME excluded.
GPGME is a pretty lightweight dependency, and the motivation to exclude
it is not clear.