We keep forgetting to update `apidoc/ostree-sections.txt`, so let's
start enforcing it. Of course it turns out we had some bugs here
like symbols marked as public but never implemented, etc. Those
are fixed in the prior commits.
Closes: #263
Approved by: giuseppe
This was removed in 14d682305b - long
before we had a stable library.
I again claim removing unimplemented symbols from the header is not an
API/ABI break.
Closes: #263
Approved by: giuseppe
If a symbol falls in a git merkle tree forest, but no one hears it,
did it ever exist?
I'm claiming this isn't an API/ABI break because nothing could
have actually used this. `git log -S checksum_update_meta` leads
me to 38ef75e6e0 which was before
we really had a stable shared library.
Closes: #263
Approved by: giuseppe
This allows you to replace the default
$sysroot/$sysconfdir/ostree/repos.d string value, and to use a similar
feature for repos that are not the system repo.
In particular, this allows us to support /etc/xdg-app/remotes.d for
xdg-app.
Closes: #247
Approved by: cgwalters
In write_directory_content_to_mtree_internal, dfd_iter can be NULL,
for instance if committing from --tree=ref=FOO. Don't blindly de-ref
it to avoid crashing.
Closes: #256
Approved by: cgwalters
ostree pull-local crashed for me in thread_closure_unref () doing:
g_mutex_clear (&thread_closure->output_stream_set_lock);
Seems like we never initialize this mutex.
Closes: #254
Approved by: cgwalters
I think it's cleaner if we use `remote_repo_local` to know
that we have a local repo. In reality, it might be nicest
if we didn't even create an `OstreeFetcher` for this case,
but untangling the code is tricky.
Closes: #239
Approved by: alexlarsson
Otherwise we get undefined behaviour if the client didn't explicitly set
any flags.
Also, add documentation for all the other options supported by
ostree_repo_pull_with_options().
Closes: #252
Approved by: cgwalters
These are useful for ostree users (like xdg-app) that have custom
options for remotes. In particular they are useful when we later make them
all respect self->parent_repo.
Closes: #236
Approved by: cgwalters
Force the otherwise disabled gpg verifications on.
Note: You need to pass --remote=foo so we know what gpg keys to verify
against.
Closes: #237
Approved by: cgwalters
Not only does this not make sense from a performance perspective, but
it also doesn't work because we can't use a url as a path element.
Closes: #237
Approved by: cgwalters
ostree-grub-generator can be used to customize
the generated grub.cfg file. The compile-time
choice between ostree-grub-generator and grub2-mkconfig
can be overridden with the OSTREE_GRUB2_EXEC
envvar - useful for automated tests and OS installers.
Why this alternative approach:
1) The current approach is less flexible than using a
custom 'ostree-grub-generator' script. Each system can
adjust this script for its needs, instead of using the
hardcoded values from ostree-bootloader-grub2.c.
2) Generating grub.cfg via /etc/grub.d/ configuration
files has too much overhead on embedded systems. It is still
possible to do so, even with this patch applied.
There is no need to install the grub2 package on a target device.
3) The grub2-mkconfig code path has other issues:
https://bugzilla.gnome.org/show_bug.cgi?id=761180
Task: https://bugzilla.gnome.org/show_bug.cgi?id=762220
Closes: #228
Approved by: cgwalters
This is a follow up to #227 to allow ostree to open the editor also for
orphan commits when no subject or body is given on the cmdline.
Closes: #229
Approved by: cgwalters
The API supports this, and it's not hard for us to do in the command
line as well. One possible use case is separating "content
generation" in a separate server.
Related: https://github.com/ostreedev/ostree/pull/223
Closes: #227
Approved by: jlebon
When I'm doing local development builds, it's quite common for me not
to want to accumulate history. There are also use cases for this on
build servers as well.
In particular, using this, one could write a build system that didn't
necessarily need to have access to (a copy of) the OSTree repository.
Instead, the build system would determine the last commit ID on the
branch, and pass that to a worker node, then sync the generated
content back.
The API supported generating custom commits that don't necessarily
reference the previous commit on the same branch, let's just expose
this in the command line for convenience.
I plan to also support this in rpm-ostree.
Closes: #223
Approved by: jlebon
Now that OSTree is used as G_LOG_DOMAIN, set the main handler to match
so the appropriate messages are filtered. It would probably be more
appropriate to spell out "OSTree" in the code, but since G_LOG_DOMAIN is
being defined globally in the project, might as well reuse it here.
https://bugzilla.gnome.org/show_bug.cgi?id=764237
Closes: #225
Approved by: cgwalters
If you have a repo where a needed object has been inadvertently removed,
all you'll get is a "No such metadata object" error with no clue about
where it was referenced from.
Add some debug messages to provide clues about which objects are being
traversed and found.
https://bugzilla.gnome.org/show_bug.cgi?id=764006
Closes: #224
Approved by: cgwalters
When prune fails, it can be really difficult to figure out why. This at
least lets you know which objects are being considered.
https://bugzilla.gnome.org/show_bug.cgi?id=764006
Closes: #224
Approved by: cgwalters
This can be used as a fingerprint to determine whether two
OstreeSePolicy objects are equivalent.
Also add documentation for ostree_sepolicy_get_name().
Closes: #219
Approved by: cgwalters
The dirtree object is required for traversing, so don't use the
load_variant_if_exists() function. This will return a
G_IO_ERROR_NOT_FOUND to the caller rather than trying to ref a NULL
variant in ostree_repo_commit_traverse_iter_init_dirtree() if the object
is missing.
https://bugzilla.gnome.org/show_bug.cgi?id=764091
If a commit only pull has been done, then the commit object exists in
the object store in addition to the commitpartial file. Traversing this
partial commit will likely fail, but that's expected. If traverse
returns a G_IO_ERROR_NOT_FOUND in this case, continue with pruning.
https://bugzilla.gnome.org/show_bug.cgi?id=764091
I'm trying to improve the developer experience on OSTree-managed
systems, and I had an epiphany the other day - there's no reason we
have to be absolutely against mutating the current rootfs live. The
key should be making it easy to rollback/reset to a known good state.
I see this command as useful for two related but distinct workflows:
- `ostree admin unlock` will assume you're doing "development". The
semantics here are that we mount an overlayfs on `/usr`, but the
overlay data is in `/var/tmp`, and is thus discarded on reboot.
- `ostree admin unlock --hotfix` first clones your current deployment,
then creates an overlayfs over `/usr` persistent
to this deployment. Persistent in that now the initramfs switchroot
tool knows how to mount it as well. In this model, if you want
to discard the hotfix, at the moment you roll back/reboot into
the clone.
Note originally, I tried using `rofiles-fuse` over `/usr` for this,
but then everything immediately explodes because the default (at least
CentOS 7) SELinux policy denies tons of things (including `sshd_t`
access to `fusefs_t`). Sigh.
So the switch to `overlayfs` came after experimentation. It still
seems to have some issues...specifically `unix_chkpwd` is broken,
possibly because it's setuid? Basically I can't ssh in anymore.
But I *can* `rpm -Uvh strace.rpm` which is handy.
NOTE: I haven't tested the hotfix path fully yet, specifically
the initramfs bits.
In some cases (such as `ostree-sysroot-cleanup.c`), the surrounding
code would be substantially cleaner if it was also ported to
fd-relative, but I'm going to do that in a separate patch.
That way these patches are easier to review for mechanical
correctness. I used an Emacs keyboard macro as the poor man's
[Coccinelle](http://coccinelle.lip6.fr/).
⚠️ There is a notable spiked pit trap here around
`posix_fallocate()` and `errno`. This has bitten other projects;
see e.g.
7bb87460e6
Otherwise the port was straightforward.
Since we hard-depend on GLib 2.40, we can start using GSubprocess.
This is part of dropping our dependency on libgsystem, which is
deprecated in favor of libglnx (as well as migrating things to GLib).
I'd like to encourage people to make OSTree-managed systems more
strictly read-only in multiple places. Ideally everywhere is
read-only normally besides `/var/`, `/tmp/`, and `/run`.
`/boot` is a good example of something to make readonly. Particularly
now that there's work on the `admin unlock` verb, we need to protect
the system better against things like `rpm -Uvh kernel.rpm` because
the RPM-packaged kernel won't understand how to do OSTree right.
In order to make this work of course, we *do* need to remount `/boot`
as writable when we're doing an upgrade that changes the kernel
configuration. So the strategy is to detect whether it's read-only,
and if so, temporarily mount read-write, then remount read-only when
the upgrade is done.
We can generalize this in the future to also do `/etc` (and possibly
`/sysroot/ostree/` although that gets tricky).
One detail: In order to detect "is this path a mountpoint" is
nontrivial - I looked at copying the systemd code, but the right place
is to use `libmount` anyways.
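For reference, the shape of the mountpoint check and the remount dance is
roughly the following sketch (using libmount and mount(2); the real patch may
differ in details, and error handling is trimmed):
```
/* Sketch only: use libmount to see whether /boot is its own mountpoint, and
 * remount(2) to temporarily flip it read-write around a kernel update. */
#include <libmount.h>
#include <sys/mount.h>
#include <stdbool.h>

static bool
path_is_mountpoint (const char *path)
{
  struct libmnt_table *tb = mnt_new_table_from_file ("/proc/self/mountinfo");
  struct libmnt_fs *fs = tb ? mnt_table_find_target (tb, path, MNT_ITER_BACKWARD) : NULL;
  bool ret = (fs != NULL);
  if (tb)
    mnt_free_table (tb);
  return ret;
}

static int
remount_boot (bool readonly)
{
  /* MS_REMOUNT without MS_RDONLY flips the mount read-write; adding
   * MS_RDONLY afterwards restores the original read-only state. */
  unsigned long flags = MS_REMOUNT | (readonly ? MS_RDONLY : 0);
  return mount ("none", "/boot", NULL, flags, NULL);
}
```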
It used to be allowed to run something like "ostree remote refs" on
a read-only (e.g. system) repo. However, the summary cache caused that to
break. This commit just makes it not save the cache if we get some kind
of permission error when writing it. It'll still work, even without the
cache.
https://bugzilla.gnome.org/show_bug.cgi?id=763855
It allows an optimization to skip the download of the summary file
if its .sig file is unchanged.
Downloading the .sig file is much cheaper than downloading the summary
file from repositories with many branches.
https://bugzilla.gnome.org/show_bug.cgi?id=762973
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
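In other words, the expensive summary fetch can be skipped after a cheap byte
comparison of the freshly fetched signature against a cached copy. A minimal
sketch of that check (the function name is hypothetical, not the code added
here):
```
/* Sketch: if the freshly fetched summary.sig bytes match the cached copy,
 * the cached summary is still current and the big download can be skipped. */
#include <gio/gio.h>

static gboolean
summary_needs_refetch (GBytes *cached_sig, GBytes *fetched_sig)
{
  if (cached_sig != NULL && fetched_sig != NULL &&
      g_bytes_equal (cached_sig, fetched_sig))
    return FALSE;   /* unchanged signature => unchanged summary */
  return TRUE;
}
```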
This will allow daemons like rpm-ostree to detect if there are any new
deployments efficiently, in combination with using inotify. If there
are any changes, rpm-ostree wants to publish them on DBus.
While we're here, add some changes to start doing unit C testing of
the sysroot API.
And change the command line to use it. rpm-ostree had a copy
of this code, and thus there's a clear reason to have an API.
While we're moving this into API, ensure the mtime on deploy is bumped
after an osname is created, so that daemons like rpm-ostree can notice
changes. (In reality, creating the directory should do this, but
let's be doubly sure.)
This allows other processes (e.g. rpm-ostreed) to monitor for external
changes (e.g. if someone does `ostree admin undeploy`) in a relatively
sane fashion.
Specifically, I'm trying to fix:
https://github.com/projectatomic/rpm-ostree/issues/220
It accepts a `flags` argument to control its behavior. Unlike
`ostree_repo_list_refs`, the `refspec_prefix` is not removed from
the results.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
I plan to use this in rpm-ostree at least for two reasons:
- To find the mtime on the repo
- To use the tmp/ directory to stage content (but we should eventually
add a better API)
As rpm-ostree evolves, it keeps driving API additions to libostree.
This creates a relatively tight coupling.
However, if delivering via e.g. RPM, unless one manually remembers to
increment the `Requires:` in the spec file, it's possible for the two
to become desynchronized.
RPM handles versioned symbols and will ensure a dependency if the
application starts using a newer version.
To implement this, switch to `-fvisibility=hidden`, along with an
annotation in the header, and finally add a `.sym` file.
This matches what other projects like systemd and libvirt do.
Although rather than attempting to retroactively version symbols, glom
them all onto the current one.
When analyzing a delta here, I see that byteswapping produces negative
compression ratios of 540%, 66%, and 28%. Let's arbitrarily pick 20%
as a threshold for detecting byteswapping.
If the average object size is greater than 4GiB, let's assume we're
dealing with opposite endianness. I'm fairly confident no one is
going to be shipping peta- or exa- byte size ostree deltas, period.
Past the gigabyte scale you really want bittorrent or something.
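Concretely, the heuristic boils down to something like this sketch (the real
code reads the totals out of the delta superblock, and swaps with
GUINT64_SWAP_LE_BE() when the heuristic triggers):
```
/* Sketch of the heuristic: if interpreting the superblock totals in the
 * current byte order yields an implausible average object size (> 4 GiB),
 * assume the delta was generated on a host of the opposite endianness. */
#include <glib.h>

static gboolean
delta_looks_byteswapped (guint64 total_size, guint64 total_objects)
{
  const guint64 four_gib = 4 * G_GUINT64_CONSTANT (1024) * 1024 * 1024;

  if (total_objects == 0)
    return FALSE;
  /* No one ships petabyte-scale deltas, so this average is nonsense. */
  return (total_size / total_objects) > four_gib;
}
```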
xdg-app passed this a filename directly, and in this case it should be
used as is. This regressed to always looking for "superblock" in the same
directory as the passed-in filename.
https://bugzilla.gnome.org/show_bug.cgi?id=762617
Some Docker layers are just metadata in the `layer.json`. If one is
mapping Docker layers to OSTree commits, one needs to create a dummy
root directory, because OSTree doesn't support metadata-only commits.
Let's just push that logic down here because it's easier than special
casing it in higher levels.
One of the design goals with deltas was not just wire efficiency,
but also having all the data up front about how much data would
be transferred before starting.
Let's expose that better by adding a `dry-run` option to the pull API.
This requires static deltas to be useful. Basically we simply call
the progress callback once with the data from the superblock.
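A usage sketch from the client side; I'm assuming the option keys `dry-run`
and `refs` here, so double-check them against the documented a{sv} options of
ostree_repo_pull_with_options():
```
/* Sketch: request a dry-run pull so only the delta superblock is fetched and
 * the progress callback reports the would-be download size. */
#include <ostree.h>

static gboolean
query_update_size (OstreeRepo *repo, const char *remote, const char *ref,
                   OstreeAsyncProgress *progress, GCancellable *cancellable,
                   GError **error)
{
  g_autoptr(GVariantBuilder) builder =
    g_variant_builder_new (G_VARIANT_TYPE ("a{sv}"));
  const char *refs[] = { ref, NULL };
  g_autoptr(GVariant) options = NULL;

  g_variant_builder_add (builder, "{sv}", "dry-run", g_variant_new_boolean (TRUE));
  g_variant_builder_add (builder, "{sv}", "refs", g_variant_new_strv (refs, -1));
  options = g_variant_ref_sink (g_variant_builder_end (builder));

  /* Nothing is written to the repo; the progress callback fires once with
   * the totals taken from the superblock. */
  return ostree_repo_pull_with_options (repo, remote, options,
                                        progress, cancellable, error);
}
```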
For a production release repository, most OS vendors would want
to just always use static deltas. Add the ability for the pulls to
require it.
(I think I'll also add a summary key for this in addition,
so the repo manager can force it too.)
If ostree is run in a test setup where it operates as root in a tmp
directory, it might cause issues to flag the deployments as immutable.
The test harness might simply be doing an `rm -rf` (effectively the case
for gnome-desktop-testing-runner), which will then fail.
We add a new debug option to the ostree_sysroot object using GLib's
GDebugKey functionality to allow our tests to communicate to ostree that
we don't want immutable deployments.
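The GDebugKey pattern looks roughly like this; the environment variable name
and the key string below are illustrative assumptions, not necessarily what
this patch uses:
```
/* Sketch of the GDebugKey pattern for opting out of immutable deployments. */
#include <glib.h>

enum {
  SYSROOT_DEBUG_MUTABLE_DEPLOYMENTS = 1 << 0,
};

static guint
parse_sysroot_debug_flags (void)
{
  static const GDebugKey keys[] = {
    { "mutable-deployments", SYSROOT_DEBUG_MUTABLE_DEPLOYMENTS },  /* assumed key name */
  };
  /* OSTREE_SYSROOT_DEBUG is an assumed variable name for this sketch. */
  return g_parse_debug_string (g_getenv ("OSTREE_SYSROOT_DEBUG"),
                               keys, G_N_ELEMENTS (keys));
}
```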
This is a more flexible version of the previous
ostree_repo_write_archive_to_mtree() which took a file reference.
This has an extensible options structure, and in particular
now supports `ignore_unsupported_content`.
I plan to use this for importing Docker images which contain device
nodes. (There's no reason for container images to have those, so
we'll just ignore them).
Also here, just like the export variant, the caller is responsible for
setting up libarchive.
I was going to add new API for importing, and it was really confusing
that what I think of now as import and export both had "write" in the
name. It's just clearer to talk about the direction.
At the same time, include `Export` in the options structure.
This isn't an ABI break as the API isn't in a release.
I was getting a compilation error with the GCC hardening flags which
look for a missing mode with `O_CREAT`. The right fix here is to drop
`O_CREAT`, as truncate() should throw `ENOENT` if the file doesn't
exist.
I don't know why we didn't do this a long time ago. This extends the
pull API to allow grabbing a specific commit, and will set the branch
to it. There's some support for this in the deploy engine, but there
are a lot of reasons to support it for raw pulls (such as subset
mirroring cases).
In fact I'm thinking we should also have the override-version logic
here too.
NOTE: One thing I debated here is inventing a new syntax on the
command line. Git doesn't seem to have this functionality (probably
because it'd be rarely used). The '@' character at least doesn't
conflict with anything.
Anyways, I wanted this for some other test cases. Without this,
writing tests that go between different commits is more awkward as one
must generate the content in one repo, then pull downstream, then
generate more content, then pull again. But now I can just keep track
of commit IDs and do exactly what I want without synchronizing the
tests.
At the moment I'm looking at using rpm-ostree to manage RPM inputs
which can then be converted into Docker images. It's most convenient
if we can stream directly out of libostree rather than doing a
checkout + tar combination.
There are also backup/debugging etc. reasons to implement `export` as
well.
While it's not strictly tied to OSTree, let's move
https://github.com/cgwalters/rofiles-fuse in here because:
- It's *very* useful in concert with OSTree
- It's tiny
- We can reuse OSTree's test, documentation, etc. infrastructure
One thing to consider also is that at some point we could experiment
with writing a FUSE filesystem for OSTree. This could internalize a
better equivalent of `--link-checkout-speedup`, but on the other hand,
the cost of walking filesystem trees for these types of operations is
really quite small.
But if we did decide to do more FUSE things in OSTree, this is a step
towards that too.
Now we display stats on the individual parts, such as the blob size
and the number of each type of opcode. Most interesting to me is
things like how many bsdiff opcodes there are vs new objects, etc.
We had code to deal with opening/checksumming/decompressing static
deltas in a few places. I'd like to teach `ostree static-delta show`
how to display more information, and this will allow it to just use
`_ostree_static_delta_part_open()` too.
Right now though, almost all of the details of deltas are private, so
we can't do the "honest thing" and have the command line just use the
shared library.
Eventually some of this should appear in the API, but for now add
command line which is useful for debugging.
I'd like to incrementally convert all of `ostree-repo*.c` to
fd-relative usage, so that we can sanely introduce
`ostree_repo_new_at()` which doesn't involve GFile.
This one is medium risk, but passes the test suite.
The original intention here was that we'd keep around a copy of the
file so that grub2 could eventually learn how to do atomic updates by
checking for a "fully written" marker in the *new* file, and if it
didn't exist, falling back to grub2.cfg.old.
I haven't yet proposed that upstream, but we might as well stop
deleting the file since it's useful as a backup at least.
Reported-by: Gatis Paeglis
This is a better followup to dc9239dd7b
since I wanted to do fsync-less checkouts in rpm-ostree too, and
replicating the "turn off fsync temporarily" was in retrospect just a
hack.
We can simply add a boolean to the checkout options.
https://github.com/GNOME/ostree/pull/172
Originally, a lot of the `fsync()` calls here were added for the
wrong reason - I was chasing a bug that ended up being the extlinux
bootloader not parsing 64 bit ext4 filesystems. But since it looked
like corruption, I tried adding a lot more `fsync()` calls.
All we should have to do is use `syncfs()`. If that doesn't work,
it's a kernel bug.
I'm making this change because skipping the individual fsyncs can be a
major performance win - it's easier for the FS to optimize, we do more
in parallel, etc.
https://bugzilla.gnome.org/show_bug.cgi?id=757117
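For reference, the replacement amounts to a single syncfs() on an fd for the
repo's filesystem; roughly this sketch:
```
/* Sketch: one syncfs() on the filesystem containing the repo/deployment
 * replaces many per-file fsync() calls. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

static int
sync_repo_filesystem (const char *repo_path)
{
  int fd = open (repo_path, O_RDONLY | O_DIRECTORY | O_CLOEXEC);
  if (fd < 0)
    return -1;
  int r = syncfs (fd);   /* flushes only this filesystem, unlike sync() */
  (void) close (fd);
  return r;
}
```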
A fast way to generate new OSTree content using an existing
tree is to checkout (as hard links), add/replace files, then
call `ostree_repo_scan_hardlinks()`, then commit.
But `ostree_repo_scan_hardlinks()` scans the entire repo, which
can be slow if you have a lot of content.
All we really need is a mapping of (device,inode) -> checksum
just for the objects we checked out, then use that mapping
for commits.
This patch adds API so that callers can create a mapping via
`ostree_repo_devino_cache_new()`, then pass it to
`ostree_repo_checkout_tree_at()` which will populate it, and then
`ostree_repo_write_directory_to_mtree()` can consume it.
I plan to use this in rpm-ostree for package layering work.
Notes:
- The old `ostree_repo_scan_hardlinks()` API still works.
- I tweaked the cache to be a set with the checksum colocated with
the key, to avoid a separate malloc block per entry.
https://github.com/GNOME/ostree/pull/167
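A rough usage sketch of the flow. Only the functions named above come from
this patch; the `devino_to_csum_cache` options field, the commit-modifier
setter mentioned in the comments, and the unref name are my assumptions about
the plumbing:
```
/* Rough flow of the devino-cache fast path; details are assumptions. */
#include <ostree.h>

static gboolean
fast_rebuild (OstreeRepo *repo, int workdir_dfd, const char *base_commit,
              GCancellable *cancellable, GError **error)
{
  OstreeRepoDevInoCache *cache = ostree_repo_devino_cache_new ();
  OstreeRepoCheckoutOptions opts = { 0, };
  gboolean ret = FALSE;

  opts.devino_to_csum_cache = cache;   /* assumed field name */

  /* Hard-link checkout; records (device, inode) -> checksum as it goes. */
  if (!ostree_repo_checkout_tree_at (repo, &opts, workdir_dfd, "co",
                                     base_commit, cancellable, error))
    goto out;

  /* ...add/replace files under "co"... then commit: the mtree writer can
   * reuse checksums from the cache instead of re-checksumming unchanged
   * hardlinks, e.g. via ostree_repo_commit_modifier_set_devino_cache()
   * (assumed setter name). */

  ret = TRUE;
 out:
  ostree_repo_devino_cache_unref (cache);   /* assumed unref name */
  return ret;
}
```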
The logic for checking which bootversion to use tries to access
sysroot->bootversion if the user didn't specify an explicit bootversion
on the command-line nor through the env var. However, at that point, the
sysroot object is not yet initialized, so it will always return 0, even
when it's 1.
This would cause e.g. `grub2-mkconfig` to have no output for the BLS
entries whenever the entries were under `/boot/loader.1`.
Related: RHBZ1293986
The tmp directory is lazily created for each fetcher instance, since
it may require superuser permissions and some instances only need
_ostree_fetcher_request_uri_to_membuf() which keeps everything in
memory buffers.
This is a continuation of earlier work to drop the individual fsync on
files/directories in favor of relying on `syncfs()` for speed.
As part of that cleanup, I'm porting it to be fd-relative.
I feel relatively confident about this change given that this area of
the code has notable test suite coverage, although that code runs as
non-root.
I'm porting the deployment code to be fd-relative, but part of the
logic was using `GFile` to talk to `OstreeRepoFile` to determine the
"bootcsum" (boot config checksum) before checking out the file tree.
We can avoid having both code paths by checking out the tree first,
then looking at it on the filesystem.
Extract existing code from ostree_repo_prune and add an argument COMMIT
that controls which commit to prune. If it is not set, the old behavior is kept.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Downloads and prints a remote summary file and any signatures in an
easy-to-read format, or alternatively with the --raw option, prints
the summary GVariant data directly.
https://bugzilla.gnome.org/show_bug.cgi?id=759250
Given the previous commit, which isolates SoupSession in a separate
thread, it should be safe to start pushing a temporary main context
for synchronous requests again.
This partially reverts 84fe2ff, which partially reverted 9f3d586.
Related to https://bugzilla.gnome.org/show_bug.cgi?id=753336
Move the SoupSession to a separate thread with its own isolated main
context and main loop. All interaction with the SoupSession occurs
by way of idle sources attached to the session's main context, which
execute on the session's thread.
This should solve the problem of running an asynchronous fetch request
synchronously by pushing a new thread-default main context and iterating
a main loop until the request completes. Prior to this, the new thread-
default main context would interfere with the SoupSession's own async
processing.
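A heavily simplified skeleton of the pattern (not the actual OstreeFetcher
code), just to show where the session's context and thread live and how work
gets funneled to them:
```
/* Skeleton: the SoupSession lives on a private GMainContext/GThread, and all
 * work is funneled to it via g_main_context_invoke(). */
#include <glib.h>

typedef struct {
  GMainContext *session_context;
  GMainLoop    *session_loop;
  GThread      *session_thread;
} FetcherThread;

static gpointer
session_thread_main (gpointer data)
{
  FetcherThread *ft = data;
  g_main_context_push_thread_default (ft->session_context);
  g_main_loop_run (ft->session_loop);   /* runs until quit at teardown */
  g_main_context_pop_thread_default (ft->session_context);
  return NULL;
}

static void
fetcher_thread_start (FetcherThread *ft)
{
  ft->session_context = g_main_context_new ();
  ft->session_loop = g_main_loop_new (ft->session_context, FALSE);
  ft->session_thread = g_thread_new ("fetcher-session", session_thread_main, ft);
}

static void
fetcher_thread_call (FetcherThread *ft, GSourceFunc func, gpointer user_data)
{
  /* Queue work onto the session thread; `func` runs with the session's
   * context as the thread-default main context. */
  g_main_context_invoke (ft->session_context, func, user_data);
}
```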
A lot of effort here just to avoid touching SoupSession directly in
ostree_fetcher_new(). The reason will become apparent in subsequent
commits.
Note this introduces generated enum/flags GTypes using glib-mkenums.
I could have just made the property type as plain integer, but doing
properties right will henceforth be easier now that the automake-fu
is established.
This way two pulls will not use the same tmpdir and accidentally
overwrite each other. However, consecutive OstreeFetchers will reuse
the tmpdirs, so that we can properly resume downloading large objects.
https://bugzilla.gnome.org/show_bug.cgi?id=757611
Concurrent pulls break since we're sharing the staging directory for
all transactions in the repo. This makes us use a per-transaction directory.
However, in order for resumes to work we first look for existing
staging directories and try to acquire an exclusive lock for them. If
we can't find any staging directory or they are all already locked,
then we create a new one.
https://bugzilla.gnome.org/show_bug.cgi?id=757611
This creates a subdirectory of the tmp dir with a selected prefix,
and takes a lockfile to ensure that nobody else is using the same directory.
However, if a directory with the same prefix already exists and is
not locked that is used instead.
The latter is useful if you want to support some kind of resumed operation
on the tmpdir.
Also, touch reused dirs.
https://bugzilla.gnome.org/show_bug.cgi?id=757611
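As an illustration of the reuse-or-create approach (using flock() directly on
the directory; the real helper differs in details such as the separate
lockfile and error reporting):
```
/* Illustration: reuse an existing unlocked "<prefix>XXXXXX" tmpdir if one is
 * found, otherwise create and lock a fresh one. */
#define _GNU_SOURCE
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

/* Returns a directory fd holding an exclusive lock; fills out_path. */
static int
acquire_prefixed_tmpdir (const char *tmpdir, const char *prefix,
                         char *out_path, size_t len)
{
  DIR *d = opendir (tmpdir);
  struct dirent *ent;

  while (d != NULL && (ent = readdir (d)) != NULL)
    {
      if (strncmp (ent->d_name, prefix, strlen (prefix)) != 0)
        continue;
      snprintf (out_path, len, "%s/%s", tmpdir, ent->d_name);
      int fd = open (out_path, O_RDONLY | O_DIRECTORY | O_CLOEXEC);
      if (fd >= 0 && flock (fd, LOCK_EX | LOCK_NB) == 0)
        { closedir (d); return fd; }   /* reuse: resume work in it */
      if (fd >= 0)
        close (fd);                    /* locked by someone else */
    }
  if (d != NULL)
    closedir (d);

  snprintf (out_path, len, "%s/%sXXXXXX", tmpdir, prefix);
  if (mkdtemp (out_path) == NULL)
    return -1;
  int fd = open (out_path, O_RDONLY | O_DIRECTORY | O_CLOEXEC);
  if (fd >= 0)
    flock (fd, LOCK_EX | LOCK_NB);     /* we just created it, so this succeeds */
  return fd;
}
```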
Bison is a well known external dependency, so just require it.
Including the generated content in git means it may or may not
be regenerated based randomly on timestamps, etc.
Also use `$(AM_V_GEN)` so we get prettier output.
Previously we were just ignoring this, which hid a bug in
an earlier commit that generated them.
Also change the `commit` program to use both APIs - this
involves extra code, but not too much.
This way, reverting the fix with this on top caused the test suite to
fail. Adding an active test for this would need a custom test program
using the C API, or adding a cmdline flag to the client, neither of
which quite seemed worth it.
Use the parse-datetime module from gnulib, and adapt it to not require
other modules as portability is not really an issue for us.
DATE can be specified in different formats, such as: "-1 week", "last
monday", "1 week ago".
Include the generated .c file in the repository so as not to add another
dependency on Bison.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
ostree_repo_write_commit_with_time() converts the timestamp to
big-endian byte order.
ostree_repo_write_commit() was also doing this when calling
ostree_repo_write_commit_with_time(), resulting in a corrupted
commit object (timestamp bytes were backwards).
Recent regression in 14ffd7022a
Just to make copy-and-paste a little easier, as I often use this command
immediately before rebasing.
e.g.
# ostree remote refs fedora-atomic
fedora-atomic:fedora-atomic/f23/x86_64/docker-host
fedora-atomic:fedora-atomic/f23/x86_64/testing/docker-host
^^^^^^^^^^^^^^ (this part is new)
# rpm-ostree rebase fedora-atomic:fedora-atomic/f23/x86_64/testing/docker-host
Do not delete a .commitmeta file after removing the last metadata entry.
This way a client will pull the empty .commitmeta file and overwrite old
metadata as expected.
https://bugzilla.gnome.org/750459
ostree_checksum_bytes_peek() can return NULL if the checksum has an
incorrect length (most likely from disk corruption) but most callers
are not prepared to handle this and would likely crash.
Use ostree_checksum_bytes_peek_validate() instead, which sets a
GError on an invalid checksum.
It may have a different meaning, and the usage screen is not helpful.
Print the usage screen only when the command is not found.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This is very useful for the inline-parts case, as you can then include
detached signatures in a single file representing the commit.
It is not as important for the generic pull case, as the detached
metadata is only a single small file. Additionally the detached
metadata is not content referenced and may change after the static
delta file was created, so we need to pull the latest version anyway.
If you pass a diriectory it will look for the "superblock" child, otherwise
it will use the file as the superblock. I need this in xdg-app to be able
to install any filename as a bundle.
Also renames OstreeRepoTrustedContentBareCommit to
OstreeRepoContentBareCommit so that it can be used by both.
This will be needed when we introduce checksum verification of objects
in static deltas.
In this mode the parts are stored in the metadata of the main delta
superblock file. This can be useful if you want a single-file delta
for easy transport, or for http in the case the delta is very small.
When a commit is deleted and the repo is configured to use tombstone
commits, create one. Delete the tombstone file only if the commit is
pulled again.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Add a new object type: OSTREE_OBJECT_TYPE_TOMBSTONE_COMMIT that is
used when a commit was intentionally removed.
If the remote repository doesn't use tombstone commits, do not fail on
a missing commit (change 0b795785dd).
When the remote repository uses tombstones, if a commit cannot be
found, check if the tombstone file is present and fail if it is not
present.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
There might be a race here in that we create new symlink files *after*
calling `syncfs`, and they are not guaranteed to end up on disk.
Rework the code so that we create symlinks before, and then only
rename them after (and `fsync()` the directory for good measure).
Additional-fixes-by: Giuseppe Scrivano <gscrivan@redhat.com>
Tested-by: Giuseppe Scrivano <gscrivan@redhat.com>
This still needs verification that we're fixing a real bug; but I'm
fairly confident this won't make the fsync situation worse.
https://bugzilla.gnome.org/show_bug.cgi?id=755595
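The reordering is essentially the classic create-then-rename pattern; a
sketch with illustrative names:
```
/* Sketch: create the new symlink *before* syncfs(), then atomically rename
 * it into place afterwards and fsync the parent directory. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int
swap_boot_symlink (int boot_dfd, const char *target, const char *linkname)
{
  char tmpname[64];
  snprintf (tmpname, sizeof tmpname, "%s.tmp", linkname);

  (void) unlinkat (boot_dfd, tmpname, 0);
  if (symlinkat (target, boot_dfd, tmpname) < 0)
    return -1;

  /* ...syncfs() on the filesystem happens here, so the symlink's data is
   * durable before it becomes the "live" name... */

  if (renameat (boot_dfd, tmpname, boot_dfd, linkname) < 0)
    return -1;
  return fsync (boot_dfd);   /* persist the rename itself */
}
```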
Adds an entry to the origin file to force the OstreeSysrootUpgrader to
pull and deploy the given checksum.
[origin]
override-commit=CHECKSUM
If the option is not given, any such entry is explicitly removed from
the origin file to ensure we upgrade to the latest available commit.
Upgrader now looks for an "override-commit" key in the origin file
with a commit checksum, which causes the upgrader to pull and deploy
the specified commit rather than the latest available commit on the
origin refspec.
The current code checks if /boot/uEnv.txt is a symlink to
decide if the sysroot requires u-boot support. Why this is bad:
There are 2 ways to provide a custom env to u-boot from user space:
1) A compiled binary that is sourced from u-boot.
2) A text file (usually /uEnv.txt) that is imported into env from u-boot.
The current OSTree u-boot integration code was designed with the 1st
case in mind.
Many bootscripts provided by embedded device vendors expect
to find uEnv.txt in the top-level directory; it is often hardcoded
when building u-boot and is difficult to change later on. In other
cases it is stored in read-only memory, so changing it would require
re-flashing the boot loader with a new env. So the issue here is that
OSTree's and the vendor's uEnv.txt both want to exist at the same path,
and OSTree would throw away any changes added to /uEnv.txt by the user
on the next upgrade/deploy.
This patch "hides" away the OSTree's env file loader/uEnv.txt from users
who are used to edditing uEnv.txt at the top level directory. Now to add
OSTree support on such boards you can simply add a custom logic in uEnv.txt
that loads ostree env from /loader/uEnv.txt
This change is backward compatible with the previous ostree releases and
solves the issue described in:
https://bugzilla.gnome.org/show_bug.cgi?id=755787
Using `commit_subject` instead of `arg` is clearer, as `arg` can refer to
a directory, archive, or ref.
This is just an aesthetic change in the source code, having no impact
anywhere else.
There is already a check that the destination object does not
exist in all other cases when processing an incoming static delta.
However, the bspatch case would still try to run and fail. Add
an analogous check to that case as well.
https://bugzilla.gnome.org/show_bug.cgi?id=756260
liblzma can return LZMA_BUF_ERROR, which indicates that either
the input or output buffer has size zero. This case should cause
the correct error to be passed back from g_converter_convert
so the caller can expand the relevant buffer. Since this error is ambiguous
as to which buffer is too small, an explicit check on the
output buffer size is added as well.
https://bugzilla.gnome.org/show_bug.cgi?id=756260
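Inside the GConverter convert() vfunc, that disambiguation looks roughly like
this sketch:
```
/* Sketch: LZMA_BUF_ERROR alone doesn't say which buffer is too small, so
 * check the output buffer size explicitly before picking the GIO error. */
#include <gio/gio.h>

static GConverterResult
map_lzma_buf_error (gsize outbuf_size, GError **error)
{
  if (outbuf_size == 0)
    {
      g_set_error_literal (error, G_IO_ERROR, G_IO_ERROR_NO_SPACE,
                           "Output buffer too small");
      return G_CONVERTER_ERROR;
    }
  g_set_error_literal (error, G_IO_ERROR, G_IO_ERROR_PARTIAL_INPUT,
                       "Need more input");
  return G_CONVERTER_ERROR;
}
```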
I was working on a different test, and ended up being very confused at
the behavior where removing the last deployment didn't remove the last
`ostree/X/X/X` ref pointing to its commit.
There's no reason to special case the last undeployment AFAIK, and the
existing code handles this.
Track outstanding HTTP requests in a table for easier debugging.
Also fixes a bug discussed in https://bugzilla.gnome.org/755224
where the outstanding request counter was not decremented in the
event of an error, which could result in the fetcher hitting its
max request limit and locking up.
The bug is fixed by removing the request struct from the table in
pending_uri_free(), which is always called regardless of error,
so the outstanding request count is always accurate.
Have OstreeFetcherPendingURI be the GTask's task_data and pass the GTask
around in queues and callback closures. The reference counting before
was a little confusing and this helps clarify it, at least to me.
OstreeFetcherPendingURI no longer needs its own reference count.
Had a rare situation where I had no libsoup development files, so I
took the opportunity to fix the build errors. Ugly, but works now.
Would be nice if libsoup could be a hard dependency since we rarely
ever test a configuration without it.
To support deploying older commits:
ostree pull <remote> <checksum>
ostree admin deploy <checksum>
Prior to this, the deploy command garbage collected <checksum> since
there's no ref pointing to it, and then ostree_sysroot_deploy_tree()
fails because it can't find the <checksum> commit.
https://bugzilla.gnome.org/732526
New public function works like ostree_sysroot_cleanup() EXCEPT FOR
pruning the repository.
Under the hood, add _ostree_sysroot_piecemeal_cleanup() which takes
flags to better control what files are cleaned up. Both public cleanup
functions are now wrappers for _ostree_sysroot_piecemeal_cleanup() with
different flags.
xdg-app was hanging for me with v2015.8, but worked with v2015.7.
I narrowed things down to the GMainLoop/context commit, in which
we started pushing a temporary main context for synchronous
requests internally.
That's never really going to work with libsoup - there needs
to be a single main context which works on the socket. Furthermore,
clients couldn't get progress messages that way.
For *other* internal uses where we added APIs that talk to the remote
repo, we cleanly push a temporary main context.
(Note that I kind of snuck in a change here around the GError handling
in pulls that isn't strictly related but came up in testing)
I noticed xdg-app was looping trying to fetch 1427 refs. We
don't want to do that unless asked to.
(And also, we need to make static delta requests async)
There's no reason to keep them hidden. I have a hard policy that
OSTree should *not* be used to carry secrets. Things like host ssh
private keys should be set up out of band by an OS-external
configuration mechanism such as kickstart, cloud-init, etc.
We also assume that hiding binaries is not very useful as most
attackers would be able to find them on the Internet or (for
subscribed content) acting as a customer.
This fixes a bug with mirroring, since we changed to taking the
unmodified upstream objects rather than uncompressing and recompressing them.
https://bugzilla.gnome.org/show_bug.cgi?id=748959
Now that the computed similar objects are all regular files,
get_unpacked_unlinked_content should never be called on any other
object type. Assert that this is true instead of silently succeeding.
_ostree_delta_compute_similar_objects should not output symlinks.
Previously, a symlink in the "from" commit could be matched to a
real file in the "to" commit, since nothing was filtering symlinks
on the "from" side. This led to failures running the bzdiff
algorithm.
On systems with slow disks, the recursive scanning of directories can
be expensive -- it takes upwards of 2 minutes on our systems. This can
block the main loop for such a long time that it allows the download to
time out...
As such, move all the scanning of objects to a queue, processed from
an idle source, to make sure that we don't block the main loop when scanning.
https://bugzilla.gnome.org/show_bug.cgi?id=753336
First of all, what we were doing with having GMainLoop in the internal
APIs is wrong. Synchronous APIs should always create their own main
context and not iterate the caller's. Doing the latter creates
potential for evil reentrancy issues. Sync API should block, async
API is for not blocking.
Now that's out of the way, fix the pull code to do the clean
```
while (termination_condition (state))
g_main_context_iteration (mainctx, TRUE);
```
model for looping. This is a lot easier to understand and ultimately
more reliable than having other code call `g_main_loop_quit()`, as the
loop condition is in exactly one place.
We can also remove the idle source which only fired once.
Note we have to add a hack here to discard the synchronous session and
create a new one which we only use async.
https://bugzilla.gnome.org/show_bug.cgi?id=753336
ostree_repo_prepare_transaction() should always be matched with a call
to either ostree_repo_commit_transaction() or
ostree_repo_abort_transaction().
Since ostree_repo_pull_with_options() does not call
ostree_repo_abort_transaction() on errors, the OstreeRepo instance will
hit an assertion when it's re-used later for another attempt, such as
when the update is driven by an external component through libostree and
network temporarily goes down.
This commit simply always calls ostree_repo_abort_transaction() in the
exit path of ostree_repo_pull_with_options(), since the function is safe
to call even when we're not in a transaction, and that matches e.g. what
ostree-sysroot-cleanup.c does.
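The resulting shape of the pull exit path is roughly this (a sketch, with
signatures abbreviated from memory):
```
/* Sketch of the "always abort on the way out" shape; safe because
 * ostree_repo_abort_transaction() is a no-op when no transaction is active. */
#include <ostree.h>

static gboolean
pull_something (OstreeRepo *repo, GCancellable *cancellable, GError **error)
{
  gboolean ret = FALSE;

  if (!ostree_repo_prepare_transaction (repo, NULL, cancellable, error))
    goto out;

  /* ...fetch and write objects; any failure jumps to out... */

  if (!ostree_repo_commit_transaction (repo, NULL, cancellable, error))
    goto out;

  ret = TRUE;
 out:
  /* Unconditional: leaves the repo reusable for a later attempt. */
  (void) ostree_repo_abort_transaction (repo, cancellable, NULL);
  return ret;
}
```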
The new API permits querying a remote repository's summary file and
retrieving the list of available refs.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Returns a GFile for the default system root, which is usually the root
directory unless overridden by the OSTREE_SYSROOT environment variable
(which is mainly intended for testing).
libsoup will cache sessions, so it might be the case that we get a
reused session when pulling from the same repo multiple times in one
process.
In this case we were leaking signal connections, which caused
callbacks into freed memory with bad consequences.
Fix it by tying the signal connection to the object lifetime.
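The fix, in miniature, is to use g_signal_connect_object() so the handlers
are torn down with the fetcher; the signal name below is just for
illustration:
```
/* g_signal_connect_object() automatically disconnects when `fetcher` is
 * finalized, so a cached/reused SoupSession can never call back into freed
 * memory. */
#include <libsoup/soup.h>

static void
attach_session_signals (SoupSession *session, GObject *fetcher, GCallback cb)
{
  /* Instead of g_signal_connect (session, ...), tie the handler's lifetime
   * to the fetcher object: */
  g_signal_connect_object (session, "request-queued", cb, fetcher, 0);
}
```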
I did a quick audit pass through the pull code. What I focused on the
most is the case where `gpg-verify-summary=true`, and in particular
where `gpg-verify=false` too. This should be a valid and secure
configuration.
The primary change here is to error out very quickly if either
`summary` or `summary.sig` are 404. Previously, we'd only error out
if we were processing deltas.
Expand the existing test case to cover this, plus invalid summary and
invalid sig. (The test case was failing with current git master too).
It allows specifying whether GPG verification for the summary file is
enabled for a specific repository.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Works like "ostree refs" but fetches refs from a remote repo.
This depends on the remote repo having a summary file, but any repo
being served over HTTP *ought* to have one.
This may not be the best idea for general usage, but the only use case
for metalinks currently is fetching a summary file and those are pretty
small. Far more convenient to return the file content in a GBytes.
The state machine's "passthrough_previous" field never got set, so the
machine gets put back into the wrong state after a passthrough phase.
A couple of other minor issues around error handling are also fixed.
The global keyring directory (trusted.gpg.d) is deprecated. Only use it
when a specified remote does NOT have its own keyring, or when verifying
local repository objects.
Note, because mixing in the global keyring directory is now an explicit
choice, OstreeGpgVerifier no longer needs to implement GInitableIface.
We need to check that it's 'ay'. Also reuse the existing validation
function to check that it's 32 bytes, rather than potentially crashing with
an assertion.
Just noticed this during a code review.
If there are multiple signatures to verify, we would attempt to
display them multiple times, but we can only call
`gs_console_end_status_line()` if the console has been enabled.
Ensure we turn back on the console after printing our status. This
will result in extra newlines, but fixing that cleanly would require a
saner GSConsole API.
I haven't done a full dig through the history, but it seems quite
possible right now we've been relying on inode enumeration
order for generating bootloader configuration.
Most of the time, newer inodes (i.e. later written files) will win.
But that's obviously not reliable.
Fix this by sorting the returned configuration internally.
When I was introducing the `_UNLOCKED` flag, I only audited
subcommands of `ostree admin`, but I missed that `ostree admin
instutil` also used the option parsing. Those are only used by
Anaconda today so we can ignore them for locking purposes.
Also, the usage help generation was grabbing the lock unnecessarily.
If a remote keyring does not already exist, create an empty pubring.gpg
file in the temporary directory prior to importing keys. This prevents
gpg2 from creating a pubring.kbx file in the new keybox format [1]. We
want to stay with the older keyring format since its performance issues
are not relevant here.
[1] https://gnupg.org/faq/whats-new-in-2.1.html#keybox
External daemons like rpm-ostree want push notifications any time a
change is made by an external entity. inotify provides notification,
but a problem is there's no easy way to monitor all of the refs.
In the past, there has been discussion of opt-in recursive timestamps:
https://lkml.org/lkml/2013/4/5/307
But in today's world, let's just bump the mtime on the repo itself, as
a central inotify point.
Closes: https://github.com/GNOME/ostree/pull/111
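Bumping the mtime is a one-liner with futimens(); a sketch:
```
/* Sketch: bump the repo directory's mtime so an inotify watch on that single
 * path sees every change made by external tools. */
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

static int
bump_repo_mtime (const char *repo_path)
{
  int fd = open (repo_path, O_RDONLY | O_DIRECTORY | O_CLOEXEC);
  if (fd < 0)
    return -1;
  int r = futimens (fd, NULL);   /* NULL == set atime and mtime to "now" */
  (void) close (fd);
  return r;
}
```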
The previous commit introduced locking for `ostree admin deploy`, but
we do expect people to possibly accidentally do e.g.
`ostree admin upgrade` concurrently.
Using consistent locking in the admin commands will help rpm-ostree.
Closes: https://github.com/GNOME/ostree/pull/110
Imports one or more GPG keys from a source stream or from the user's
personal keyring into a remote-specific keyring. The keys to import
can optionally be restricted by a list of key IDs.
The imported keys are used to conduct GPG verification when pulling
from the given remote.
The blocking locking API wasn't sufficient for use in the rpm-ostree
daemon; it really wants to know whether the lock is held and, if so,
continue to do other things (like servicing DBus requests), and get a
notification when the lock is available.
We also add an async variant that can be called if the lock is not
available.
Implement a higher level "loop until lock is available" method in the
`ostree admin` commandline.
I use the trivial httpd server locally. Each time I restart the
server, I end up manually modifying the config file for other repos so
that they point to the correct port. This way I can just re-use the same
port.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This originally was a way that we detected the case where a pull was
interrupted. Later, we added `.commitpartial` files which also cover
this case.
See also https://github.com/GNOME/ostree/pull/85
We still want to honor their existence (and unlink them) in case an
old version of ostree was in use, but I believe it's safe to stop
creating them now.
The only case where this would break is if you have a version of
ostree that predates commitpartial in your rollback history, but such
old versions are no longer in use by operating systems I support at
least.
Closes: https://github.com/GNOME/ostree/pull/100
An OSTree user noticed that `ostree fsck` would produce `missing
object` errors in the case of interrupted pulls.
It's possible to do e.g. `ostree pull --subpath=/usr/share/rpm ...`,
which gets you just that portion of the commit. The use case for this
was being able to see what changes would appear in an update before
actually downloading all of it.
(I think this would be better covered by static deltas, but those
aren't final yet, and `--subpath` predates it)
Further, `.commitpartial` is used as a successor to the `transaction`
symlink for more precise knowledge in the case where a pull was
interrupted that we needed to resume scanning.
So it makes sense for `ostree fsck` to be aware of it.
First, this is just a general continuation of the `GFile -> openat`
transition.
Second, it's preparatory work for fsck to gain awareness of partial
commits.
If a system administrator happens to type `ostree admin upgrade`
multiple times, currently that will lead to a potentially corrupted
system.
I originally attempted to do locking *internally* in `libostree`, but
that didn't work out because currently a number of the commands
perform multi-step operations that all need to be serialized. All of
the current code in `ostree admin deploy` is an example.
Therefore, allow callers to perform locking, as most of the higher
level logic is presently implemented there.
At some point, we can revisit having internal locking, but it will be
difficult. A more likely approach would be similar to Java's approach
with concurrency on iterators - a "fail fast" method.
To make room for "remote gpg-import", which will be non-trivial.
ot-builtin-remote.c was already a little too crowded anyway.
Also while we're at it, port this bit of code away from libgsystem.
Always request detached metadata for commit objects, even if we already
have the commit object. This ensures we fetch any post facto detached
metadata updates such as new GPG signatures.
https://bugzilla.gnome.org/748220
These fsyncs were added for what turned out to be a fairly bogus
reason; I was hitting read errors from extlinux after upgrades and out
of conservatism tried adding fsync calls, but the *actual* problem
was that extlinux didn't support 64 bit ext4. Now that at least for
Project Atomic hosts we're just targeting grub2, we can drop these
fsync calls and rely on `syncfs()` being both faster and catching any
errors.
For some sort of crazy reason, the `sync()` system call doesn't
actually return an error code, even though from what I can tell in the
kernel it wouldn't be terribly hard to add.
Regardless though, it is better for userspace apps to use `syncfs()`
to avoid flushing filesystems unrelated to what they want to sync. In
the case of OSTree, this does matter - for example you might have a
network mount point backing your database, and we don't want to block
upgrades on syncing it.
This change is safe because we're doing syncfs in *addition* to the
previous global `sync()` (a revision from an earlier patch).
Now because OSTree only touches the `/` mount point which covers the
repository, the deployment roots (including their copy of `/etc`), as
well as `/boot`, we should at some point later be able to drop the
`sync()` call. Note that on initial system installs we do relabel
`/var` but that shouldn't happen at ostree time - any new directories
are taken care of via `systemd-tmpfiles` on boot.
This way external programs like rpm-ostree can do fd-relative
operations on the deployment directories, like inspecting the RPM
database.
Closes: https://github.com/GNOME/ostree/pull/91
Rather than returning a new OstreeRepo instance in each call to
ostree_sysroot_get_repo(), cache one internally so the same instance
is returned each time.
Emitted during a pull operation upon GPG verification (if enabled).
Applications can connect to this signal to output the verification
results if desired.
First, git doesn't do this, and whatever Linus thinks is right or
something.
Second specifically to OSTree, it's quite common to not have
intermediate commits. If one wants to reset a ref in order to prune
data after a deployment, the parentage check will fail.
Closes: https://github.com/GNOME/ostree/pull/87
Do not write directly to the summary file, but use a temporary file
first. This avoids creating an empty file if "ot_util_variant_save"
fails.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
I was looking at https://bugzilla.gnome.org/show_bug.cgi?id=738954
which wants us to ensure we chown() the refs. As part of that,
I did a generic conversion to use `*at()` (which naturally gives
us more low-level control so we can call `fchown()` etc.).
This patch also sneaks in a change to respect the repo's
`disable_fsync` flag - if fsync is disabled, then we never
`fdatasync()` (unlike the `g_file_replace_contents()` default). Also
unlike it, if fsync is enabled, we *always* sync even if the file
didn't exist.
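A sketch of the resulting write path (simplified; the `replace_file_at`
helper name here is hypothetical, not the real function):
```
/* Sketch: write to a temporary name, optionally fdatasync() depending on the
 * repo's disable_fsync flag, then rename into place. */
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

static int
replace_file_at (int dfd, const char *name, const void *buf, size_t len,
                 int fsync_enabled)
{
  char tmpname[PATH_MAX];
  snprintf (tmpname, sizeof tmpname, "%s.tmp", name);

  int fd = openat (dfd, tmpname, O_WRONLY | O_CREAT | O_TRUNC | O_CLOEXEC, 0644);
  if (fd < 0)
    return -1;
  if (write (fd, buf, len) != (ssize_t) len ||
      (fsync_enabled && fdatasync (fd) < 0))
    { close (fd); return -1; }
  close (fd);

  return renameat (dfd, tmpname, dfd, name);
}
```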
This will be used by rpm-ostree to unset the immutable bit temporarily
in order to do package layering. We could add an API to deploy a tree
without the immutable bit, but this is simpler.
rpm-ostree currently uses ostree_repo_checkout_tree(), which as a side
effect will use the uncompressed objects cache by default. This is
rather annoying if you're using rpm-ostree on a server-side
repository, because if you then rsync the repo, you'll be syncing out
the uncompressed objects unless you exclude them.
We added the ability to disable the uncompressed cache in the
repository config to fix this, but it's better to allow application
control over this. The uncompressed cache will in some future version
become opt in as well.
This new API further:
- Drops the `GFile` usage in favor of `openat` APIs
- Improves ergonomics by avoiding callers having to query the source
`GFileInfo` (and carry around a copy of `OSTREE_GIO_FAST_QUERYINFO`)
- Has a more extensible options structure
Per the comment, I rather crudely have the `ostree checkout` builtin
call both APIs to ensure some testing coverage.
However, I'd like to in the future have easier-to-set-up testing code
that calls `libtest.sh` to set up dummy data.
It's valid for the remote server to say 200 OK and give us the entire
file instead of a 206 Partial Content, and in that case we should blow
away the previous cached data, rather than blindly appending to it and
thus creating multiple copies of the data inside the file.
This problem primarily occurs when we do have the complete file, and
we're interrupted, then try again, where the new process didn't record
the download was already complete. We do a range request for bytes
past the end, and some web servers (e.g. Akamai) will return 200 OK
with the whole content again, rather than a 416 Requested Range Not
Satisfiable.
Thus we could also fix this by saner caching strategy - since we know
the file is complete, rename it again to $checksum.done or something
before it's processed. (Or really, rework how we do caching more
intelligently in general).
This fixes the issue that interrupted pulls failed with such
webservers, although repeated attempts would eventually succeed
because we'd unlink files that failed to pull.
Related: https://bugzilla.redhat.com/show_bug.cgi?id=1207292
The use cases for non-default sysroots that I know of are:
1) The current test suite
2) Installers (Anaconda)
3) Inspecting VM disks
For 2) and 3), it'll quickly be obvious if they're not running as
root, and these are more obscure cases. We want to allow 1), and this
is a simple way to do it.
https://bugzilla.gnome.org/show_bug.cgi?id=747164
If the starting index is beyond the end of the list, it's a programming
error. Previously, the code was trying to raise a runtime error, but
actually causing a segfault.
This was detected by test code in test-mutable-tree.c, which is removed
in this commit because it should now not be possible to crash here.
https://bugzilla.gnome.org/747032
Indicates the command requires superuser privilege. Fails early with
a more helpful message than would otherwise be returned by libostree.
Currently all admin commands except 'status' require superuser.
Commands that need to write files within the repo directory can call
this early to ensure the directory is writable for the current user.
If not, it fails with a helpful "You need to be root to perform this
command" message.
Uses OstreeGpgVerifyResult to catch duplicate signatures.
If the commit has already been signed with the given GPG key ID, fail
with a G_IO_ERROR_EXISTS error code.
Wraps a referenced gpgme_verify_result_t so detailed verify results
can be examined independently of executing a verify operation.
_ostree_gpg_verifier_check_signature() now returns this object instead
of a single valid/invalid boolean, but the idea is for OstreeRepo to also
return this object for commit signature verification so it can be utilized
at the CLI layer (and possibly by other programs).
The object count comes from g_hash_table_size(), so it's not a 0 based
index. In order to maintain the mod calculations correctly, just print
out index + 1.
https://bugzilla.gnome.org/show_bug.cgi?id=746360
Similar to c2b01ad. For some reason I was thinking the commit data
still needed to be written to disk prior to verifying, but it's just
another artifact of spawning gpgv2 (predates using GPGME).
Makes for a nice cleanup in fetch_metadata_to_verify_delta_superblock()
as well.