A fast way to generate new OSTree content using an existing
tree is to check it out (as hard links), add/replace files,
call `ostree_repo_scan_hardlinks()`, and then commit.
But `ostree_repo_scan_hardlinks()` scans the entire repo, which
can be slow if you have a lot of content.
All we really need is a mapping of (device, inode) -> checksum
for just the objects we checked out, which we can then use
when committing.
This patch adds API so that callers can create a mapping via
`ostree_repo_devino_cache_new()`, then pass it to
`ostree_repo_checkout_tree_at()` which will populate it, and then
`ostree_repo_write_directory_to_mtree()` can consume it.
I plan to use this in rpm-ostree for package layering work.
Notes:
- The old `ostree_repo_scan_hardlinks()` API still works.
- I tweaked the cache to be a set with the checksum colocated with
the key, to avoid a separate malloc block per entry.
https://github.com/GNOME/ostree/pull/167
The new public function works like ostree_sysroot_cleanup(), EXCEPT FOR
pruning the repository.
Under the hood, add _ostree_sysroot_piecemeal_cleanup() which takes
flags to better control what files are cleaned up. Both public cleanup
functions are now wrappers for _ostree_sysroot_piecemeal_cleanup() with
different flags.
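Roughly the shape of it (a sketch only: the flag names below are
hypothetical, and I'm assuming the new public entry point is
`ostree_sysroot_prepare_cleanup()`):

```c
#include <ostree.h>

/* Hypothetical flag names, for illustration of the wrapper pattern. */
typedef enum {
  CLEANUP_BOOTVERSIONS = (1 << 0),
  CLEANUP_DEPLOYMENTS  = (1 << 1),
  CLEANUP_PRUNE_REPO   = (1 << 2),
  CLEANUP_ALL          = 0xffff
} CleanupFlags;

/* Private worker that does the actual work, driven by flags. */
gboolean _ostree_sysroot_piecemeal_cleanup (OstreeSysroot *self, CleanupFlags flags,
                                            GCancellable *cancellable, GError **error);

gboolean
ostree_sysroot_cleanup (OstreeSysroot *self, GCancellable *cancellable, GError **error)
{
  return _ostree_sysroot_piecemeal_cleanup (self, CLEANUP_ALL, cancellable, error);
}

/* The new public function: everything except pruning the repository. */
gboolean
ostree_sysroot_prepare_cleanup (OstreeSysroot *self, GCancellable *cancellable, GError **error)
{
  return _ostree_sysroot_piecemeal_cleanup (self, CLEANUP_ALL & ~CLEANUP_PRUNE_REPO,
                                            cancellable, error);
}
```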
The new API makes it possible to query a remote repository's summary
file and retrieve the list of available refs.
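A sketch from the caller's side (assuming the new API is
`ostree_repo_remote_list_refs()` and a configured remote named "origin"):

```c
#include <ostree.h>

static gboolean
print_remote_refs (OstreeRepo *repo, GCancellable *cancellable, GError **error)
{
  g_autoptr(GHashTable) refs = NULL;  /* refname -> checksum */
  if (!ostree_repo_remote_list_refs (repo, "origin", &refs, cancellable, error))
    return FALSE;

  GHashTableIter iter;
  gpointer refname, checksum;
  g_hash_table_iter_init (&iter, refs);
  while (g_hash_table_iter_next (&iter, &refname, &checksum))
    g_print ("%s => %s\n", (const char *) refname, (const char *) checksum);
  return TRUE;
}
```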
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
It allows specifying whether GPG verification of the summary file is
enabled for a specific remote repository.
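For example (a sketch; the remote name, URL, and going through
`ostree_repo_remote_add()` options are just one way to set this), the
`gpg-verify-summary` option can be set when adding the remote:

```c
#include <ostree.h>

static gboolean
add_remote_with_summary_verification (OstreeRepo *repo, GCancellable *cancellable,
                                      GError **error)
{
  g_autoptr(GVariantBuilder) builder = g_variant_builder_new (G_VARIANT_TYPE ("a{sv}"));
  g_variant_builder_add (builder, "{s@v}", "gpg-verify-summary",
                         g_variant_new_variant (g_variant_new_boolean (TRUE)));
  g_autoptr(GVariant) options = g_variant_ref_sink (g_variant_builder_end (builder));

  /* Placeholder remote name and URL. */
  return ostree_repo_remote_add (repo, "origin", "https://example.com/repo",
                                 options, cancellable, error);
}
```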
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Imports one or more GPG keys from a source stream or from the user's
personal keyring into a remote-specific keyring. The keys to import
can optionally be restricted by a list of key IDs.
The imported keys are used to conduct GPG verification when pulling
from the given remote.
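A sketch of importing a single key by ID via `ostree_repo_remote_gpg_import()`
(the remote name, key path, and key ID are placeholders):

```c
#include <ostree.h>

static gboolean
import_remote_key (OstreeRepo *repo, GCancellable *cancellable, GError **error)
{
  g_autoptr(GFile) keyfile = g_file_new_for_path ("/etc/pki/example.gpg");
  g_autoptr(GInputStream) source =
    (GInputStream *) g_file_read (keyfile, cancellable, error);
  if (source == NULL)
    return FALSE;

  /* Restrict the import to one key ID; pass NULL to import everything. */
  const char *const key_ids[] = { "0123456789ABCDEF", NULL };
  guint n_imported = 0;
  if (!ostree_repo_remote_gpg_import (repo, "origin", source, key_ids,
                                      &n_imported, cancellable, error))
    return FALSE;

  g_print ("Imported %u key(s)\n", n_imported);
  return TRUE;
}
```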
The blocking locking API wasn't sufficient for use in the rpm-ostree
daemon; the daemon really wants to know whether the lock is held, keep
doing other things in the meantime (like servicing DBus requests), and
get notified when the lock becomes available.
We also add an async variant that can be called if the lock is not
available.
Implement a higher-level "loop until the lock is available" method in
the `ostree admin` command line.
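A sketch of the non-blocking flow a daemon might use (the
`on_lock_acquired` callback and surrounding structure are illustrative):

```c
#include <ostree.h>

static void
on_lock_acquired (GObject *src, GAsyncResult *result, gpointer user_data)
{
  OstreeSysroot *sysroot = OSTREE_SYSROOT (src);
  g_autoptr(GError) local_error = NULL;

  if (!ostree_sysroot_lock_finish (sysroot, result, &local_error))
    g_warning ("Failed to acquire sysroot lock: %s", local_error->message);
  else
    {
      /* ...perform the deployment operation, then ostree_sysroot_unlock()... */
    }
}

static gboolean
acquire_lock_nonblocking (OstreeSysroot *sysroot, GError **error)
{
  gboolean acquired = FALSE;
  if (!ostree_sysroot_try_lock (sysroot, &acquired, error))
    return FALSE;

  if (!acquired)
    /* Lock is held by another process; get a callback when it's free
     * while we continue to service e.g. DBus requests. */
    ostree_sysroot_lock_async (sysroot, NULL, on_lock_acquired, NULL);

  return TRUE;
}
```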
An OSTree user noticed that `ostree fsck` would produce `missing
object` errors in the case of interrupted pulls.
It's possible to do e.g. `ostree pull --subpath=/usr/share/rpm ...`,
which gets you just that portion of the commit. The use case for this
was being able to see what changes would appear in an update before
actually downloading all of it.
(I think this would be better covered by static deltas, but those
aren't final yet, and `--subpath` predates them.)
Further, `.commitpartial` is used as a successor to the `transaction`
symlink: when a pull is interrupted, it records more precisely which
commits we need to resume scanning.
So it makes sense for `ostree fsck` to be aware of it.
If a system administrator happens to type `ostree admin upgrade`
multiple times, currently that will lead to a potentially corrupted
system.
I originally attempted to do locking *internally* in `libostree`, but
that didn't work out because currently a number of the commands
perform multi-step operations that all need to be serialized. All of
the current code in `ostree admin deploy` is an example.
Therefore, allow callers to perform locking, as most of the higher
level logic is presently implemented there.
At some point we can revisit internal locking, but it will be
difficult. A more likely approach would be similar to Java's handling
of concurrent modification of iterators: a "fail fast" model.
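A sketch of the caller-side serialization (the `do_upgrade()` helper is
a placeholder for a multi-step operation like the one in
`ostree admin deploy`):

```c
#include <ostree.h>

static gboolean
locked_upgrade (OstreeSysroot *sysroot, GCancellable *cancellable, GError **error)
{
  /* Serialize against other processes performing multi-step operations. */
  if (!ostree_sysroot_lock (sysroot, error))
    return FALSE;

  gboolean ret = do_upgrade (sysroot, cancellable, error);

  ostree_sysroot_unlock (sysroot);
  return ret;
}
```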
This way external programs like rpm-ostree can do fd-relative
operations on the deployment directories, like inspecting the RPM
database.
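One way to do that from the public API (a sketch; the use of
`ostree_sysroot_get_fd()` and `ostree_sysroot_get_deployment_dirpath()`
here is my assumption, not necessarily the exact API this change adds):

```c
#include <errno.h>
#include <fcntl.h>
#include <ostree.h>

static int
open_deployment_dfd (OstreeSysroot *sysroot, OstreeDeployment *deployment, GError **error)
{
  int sysroot_fd = ostree_sysroot_get_fd (sysroot);
  g_autofree char *deploy_path =
    ostree_sysroot_get_deployment_dirpath (sysroot, deployment);

  /* fd-relative open of the deployment root; e.g. the RPM database can
   * then be read via paths relative to the returned fd. */
  int dfd = openat (sysroot_fd, deploy_path, O_RDONLY | O_DIRECTORY | O_CLOEXEC);
  if (dfd < 0)
    g_set_error (error, G_IO_ERROR, g_io_error_from_errno (errno),
                 "openat(%s): %s", deploy_path, g_strerror (errno));
  return dfd;
}
```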
Closes: https://github.com/GNOME/ostree/pull/91
rpm-ostree currently uses ostree_repo_checkout_tree(), which as a side
effect will use the uncompressed objects cache by default. This is
rather annoying if you're using rpm-ostree on a server-side
repository, because if you then rsync the repo, you'll be syncing out
the uncompressed objects unless you exclude them.
We added the ability to disable the uncompressed cache in the
repository config to fix this, but it's better to give applications
control over it. The uncompressed cache will also become opt-in in
some future version.
This new API further:
- Drops the `GFile` usage in favor of `openat` APIs
- Improves ergonomics by not requiring callers to query the source
  `GFileInfo` (and carry around a copy of `OSTREE_GIO_FAST_QUERYINFO`)
- Has a more extensible options structure (see the sketch below)
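A sketch of calling the new entry point (the destination, ref, and
option values are illustrative):

```c
#include <ostree.h>

static gboolean
checkout_without_uncompressed_cache (OstreeRepo *repo, int dest_dfd,
                                     GCancellable *cancellable, GError **error)
{
  OstreeRepoCheckoutOptions options = { 0, };
  options.mode = OSTREE_REPO_CHECKOUT_MODE_USER;
  /* Application-level control: skip the uncompressed-object cache, e.g.
   * for a server-side repository that gets rsynced. */
  options.enable_uncompressed_cache = FALSE;

  return ostree_repo_checkout_tree_at (repo, &options, dest_dfd, "checkout",
                                       "exampleos/x86_64/standard",
                                       cancellable, error);
}
```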
Per the comment, I rather crudely have the `ostree checkout` builtin
call both APIs to ensure some testing coverage.
However, in the future I'd like to have easier-to-set-up testing code
that calls `libtest.sh` to set up dummy data.
Wraps a referenced gpgme_verify_result_t so that detailed verification
results can be examined independently of executing a verify operation.
_ostree_gpg_verifier_check_signature() now returns this object instead
of a single valid/invalid boolean, but the idea is for OstreeRepo to also
return this object for commit signature verification so it can be utilized
at the CLI layer (and possibly by other programs).
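From the consumer side, the result object could then be used roughly
like this (a sketch assuming an `ostree_repo_verify_commit_ext()`-style
entry point that hands back the result object):

```c
#include <ostree.h>

static gboolean
show_signatures (OstreeRepo *repo, const char *checksum,
                 GCancellable *cancellable, GError **error)
{
  g_autoptr(OstreeGpgVerifyResult) result =
    ostree_repo_verify_commit_ext (repo, checksum, NULL, NULL, cancellable, error);
  if (result == NULL)
    return FALSE;

  guint n_sigs = ostree_gpg_verify_result_count_all (result);
  g_print ("%u signature(s), %u valid\n",
           n_sigs, ostree_gpg_verify_result_count_valid (result));

  /* Human-readable description of each signature, for the CLI layer. */
  g_autoptr(GString) buf = g_string_new (NULL);
  for (guint i = 0; i < n_sigs; i++)
    ostree_gpg_verify_result_describe (result, i, buf, "  ",
                                       OSTREE_GPG_SIGNATURE_FORMAT_DEFAULT);
  g_print ("%s", buf->str);
  return TRUE;
}
```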
I plan to rename all of these APIs to use the term 'loose', so that it
makes more sense after pack files are introduced. External users
should not use them; instead use _load_variant() or _read_commit().
Now that we have a real GObject for the sysroot, we have a convenient
place to keep track of 4 pieces of state:
* The current deployment list
* The current bootversion
* The current subbootversion
* The current booted deployment (if any)
Avoid requiring callers to pass all of this around and load it
piecemeal; instead, the new entry point is ostree_sysroot_load().
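A sketch of the new pattern: load once, then read the state from
accessors rather than passing it around.

```c
#include <ostree.h>

static gboolean
show_sysroot_state (GCancellable *cancellable, GError **error)
{
  g_autoptr(OstreeSysroot) sysroot = ostree_sysroot_new_default ();
  if (!ostree_sysroot_load (sysroot, cancellable, error))
    return FALSE;

  g_autoptr(GPtrArray) deployments = ostree_sysroot_get_deployments (sysroot);
  int bootversion = ostree_sysroot_get_bootversion (sysroot);
  int subbootversion = ostree_sysroot_get_subbootversion (sysroot);
  OstreeDeployment *booted = ostree_sysroot_get_booted_deployment (sysroot); /* may be NULL */

  g_print ("%u deployment(s), bootversion %d.%d, booted: %s\n",
           deployments->len, bootversion, subbootversion,
           booted ? ostree_deployment_get_csum (booted) : "(none)");
  return TRUE;
}
```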
Rather than having separate write_ref calls, make clients start a
transaction, add some refs, and then commit it. While this doesn't
make it 100% atomic, it makes it easier for us to use an atomic
model, and it means we don't do as much I/O updating the summary
file and such.
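A sketch of the transactional flow (the transaction function names
reflect the current API shape; ref names and checksums are placeholders):

```c
#include <ostree.h>

static gboolean
update_refs (OstreeRepo *repo, const char *standard_checksum,
             const char *devel_checksum,
             GCancellable *cancellable, GError **error)
{
  if (!ostree_repo_prepare_transaction (repo, NULL, cancellable, error))
    return FALSE;

  ostree_repo_transaction_set_ref (repo, NULL, "exampleos/x86_64/standard",
                                   standard_checksum);
  ostree_repo_transaction_set_ref (repo, NULL, "exampleos/x86_64/devel",
                                   devel_checksum);

  /* Both ref updates (and any objects written) land together. */
  return ostree_repo_commit_transaction (repo, NULL, cancellable, error);
}
```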
https://bugzilla.gnome.org/show_bug.cgi?id=707644
An earlier version of this API acted like git, in that some objects
would be staged in a temporary directory and then committed in one go
by moving files around. The API doesn't match most users' expectations
though: while a stage is nice as a high-level API, it isn't really
suited for low-level APIs.
While the stage was removed, the APIs were never renamed. Rename
them now so that they match expectations.
https://bugzilla.gnome.org/show_bug.cgi?id=707644
It's now isolated almost entirely to ostree-core.c, except that
ostree-repo.c needs to know how to create archive-z2 file headers. So
give it a private API for that.
The way we recurse into subdirectories in parallel makes it far too
easy to hit up against the arbitrary Linux fd limit of 1024.
Since the fix here is about dropping parallelism, let's just go all
the way for now and make a plain old synchronous API =(
This does simplify both internal callers, which wanted a sync API
anyway.
https://bugzilla.gnome.org/show_bug.cgi?id=706380