API changes:
- added the function `ostree_sign_add_pk()` to allow using multiple public keys.
- `ostree_sign_set_pk()` now replaces all previously added keys.
- added the function `ostree_sign_load_pk()`, which allows loading keys from a file.
- `ostree_sign_ed25519_load_pk()` can load a raw list of keys from a file.
- use base64-encoded public and private ed25519 keys for the CLI and keys file.
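As a rough usage sketch of the new multi-key API (function signatures and key handling here are assumptions based on the list above, not code from the patch):
```
#include <ostree.h>

/* Hedged sketch: replace the trusted key set with one base64-encoded ed25519
 * key, then add a second one on top of it. */
static gboolean
setup_trusted_keys (OstreeSign *sign, const char *pk1, const char *pk2, GError **error)
{
  g_autoptr(GVariant) key1 = g_variant_ref_sink (g_variant_new_string (pk1));
  g_autoptr(GVariant) key2 = g_variant_ref_sink (g_variant_new_string (pk2));
  if (!ostree_sign_set_pk (sign, key1, error))   /* drops any previously added keys */
    return FALSE;
  if (!ostree_sign_add_pk (sign, key2, error))   /* appends to the current set */
    return FALSE;
  return TRUE;
}
```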
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Added the initial version of the signing interface, allowing commits to be
signed and verified.
Implemented initial signing modules:
- dummy -- simple module allowing signing/verifying with an ASCII string
- ed25519 -- module allowing signing/verifying commits with the ed25519
(EdDSA) signature scheme provided by the libsodium library.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
The [dev-overlay](332c6ab3b9/src/cmd-dev-overlay)
script shipped in coreos-assembler mostly exists to deal
with the nontrivial logic around SELinux policy. Let's make
the use case of "commit some binaries overlaying a base tree, using
the base's selinux policy" just require a magical
`--selinux-policy-from-base` argument to `ostree commit`.
A new C API was added to implement this in the case of `--tree=ref`;
when the base directory is already checked out, we can just reuse
the existing logic that `--selinux-policy` was using.
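As a rough illustration of the intended invocation (repo path, branch, and directory names here are made up):
$ ostree --repo=repo commit -b exampleos/x86_64/custom \
    --tree=ref=exampleos/x86_64/base --tree=dir=overlay \
    --selinux-policy-from-base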
Requires: https://github.com/ostreedev/ostree/pull/2039
Add G_IO_ERROR_PARTIAL_INPUT to the list of error codes caused by
transient networking errors which lead us to retry the request. When
attempting to install the spotify flatpak you often get the error
message "Connection terminated unexpectedly" and the download of the deb
file fails. In this case, libsoup is setting G_IO_ERROR_PARTIAL_INPUT
and sometimes a subsequent download attempt is successful, so we should
treat it as transient.
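Conceptually the change amounts to something like this sketch (the helper name and the other error codes listed are illustrative, not the actual fetcher code):
```
#include <gio/gio.h>

/* Treat a partial read like the other transient network errors we retry on. */
static gboolean
error_is_transient (const GError *error)
{
  return g_error_matches (error, G_IO_ERROR, G_IO_ERROR_TIMED_OUT)
         || g_error_matches (error, G_IO_ERROR, G_IO_ERROR_HOST_UNREACHABLE)
         || g_error_matches (error, G_IO_ERROR, G_IO_ERROR_PARTIAL_INPUT);
}
```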
Ideally we would behave as wget does in this case and retry the download
picking up where we left off in the file rather than starting over, but
that would require changes to libsoup I think.
Sadly this patch does not fix the flatpak installation of spotify in the
face of such errors, because flatpak doesn't use libostree to download
extra data, but presumably it's possible we could encounter such an
error pulling from an ostree repo, so the patch is still correct.
These had been added assuming 2019.7 would be the next version, but now
it's 2020 and there's been a release. In the case of
`OstreeCommitSizesEntry`, I'd forgotten to move it forward from 2019.5
to 2019.7 in the time between when I started working on the feature and
it landed.
For repo structure directories like `objects`, `refs`, etc... we should
be more permissive and let the system's `umask` narrow down the
permission bits as wanted.
This came up in a context where we want to be able to have read/write
access on an OSTree repo on NFS from two separate OpenShift apps by
using supplemental groups[1] so we don't require SCCs for running as the
same UID (supplemental groups are part of the default restricted SCC).
[1] https://docs.openshift.com/container-platform/3.11/install_config/persistent_storage/persistent_storage_nfs.html#nfs-supplemental-groups
For some reason I haven't fully debugged (probably a recent
kernel change), when the immutable bit isn't already set, calling
`EXT2_IOC_SETFLAGS` with flags that still leave it unset returns `EINVAL`.
Let's avoid calling the `ioctl()` if we don't have anything to do.
This fixes a slew of `make check` failures here in my toolbox
environment.
(kernel is `5.5.0-0.rc6.git0.1.fc32.x86_64` with `xfs`)
Using fs-verity is natural for OSTree because it's file-based,
as opposed to block based (like dm-verity). This only covers
files - not symlinks or directories. And we clearly need to
have integrity for the deployment directories at least.
Also, what we likely need is an API that supports signing files
as they're committed.
So making this truly secure would need a lot more work. Nevertheless,
I think it's time to start experimenting with it. Among other things,
it does *finally* add an API that makes files immutable, which will
help against some accidental damage.
This is basic enablement work that is being driven by
Fedora CoreOS; see also https://github.com/coreos/coreos-assembler/pull/876
Currently `ostree_gpg_verify_result_require_valid_signature` always
returns an error that the key used for the signature is missing from the
keyring. However, all that's been determined is that there are no valid
signatures. The error could also be from an expired signature, an
expired key, a revoked key or an invalid signature.
Provide values for these missing errors and return them from
`ostree_gpg_verify_result_require_valid_signature`. The description of
each result is appended to the error message, but since the result can
contain more than one signature but only a single error can be returned,
the status of the last signature is used for the error code. See the
comment for rationale.
Related: flatpak/flatpak#1450
This function parses the object listing in the `ostree.sizes` metadata
and returns an array of `OstreeCommitSizesEntry` structures.
Unfortunately, for reasons I don't understand, the linker wants to
resolve `_ostree_read_varuint64` from `ostree-core.c` even though it's
not used by `test-checksum.c` at all.
Append a byte encoding the OSTree object type for each object in the
metadata. This allows the commit metadata to be fetched and then for the
program to see which objects it already has for an accurate calculation
of which objects need to be downloaded.
This slightly breaks the `ostree.sizes` `ay` metadata entries. However,
it's unlikely anyone was asserting the length of the entries since the
array currently ends in 2 variable length integers. As far as I know,
the only users of the sizes metadata are the ostree test suite and
Endless' eos-updater[1]. The former is updated here and the latter
already expects this format.
1. https://github.com/endlessm/eos-updater/
If the object was already in the repo then the sizes metadata entry was
skipped. Move the sizes entry creation after the data has been computed
but before the early return for an existing object.
The object sizes hash table was only being cleared when the repo was
finalized. That means that performing multiple commits while the repo
was open would reuse all the object sizes metadata for each commit.
Clear the hash table when the sizes metadata is setup and when it's
added to a commit. This still does not fix the issue all the way since
it does nothing to prevent the program from constructing multiple
commits simultaneously. To handle that, the object sizes hash table
should be attached to the MutableTree since that has the commit state.
However, the MutableTree is gone when the commit is actually created.
The hash table would have to be transferred to the root file when
writing the MutableTree. That would be an awkward addition to
OstreeRepoFile, though. Add a FIXME to capture that.
We want to support extending the read-only state to cover `/sysroot`
and `/boot`, since conceptually all of the data there should only
be written via libostree. Or at least, `/boot` should *mostly*
just be written by ostree.
This change needs to be opt-in though to avoid breaking anyone.
Add a `sysroot/readonly` key to the repository config which instructs
`ostree-remount.service` to ensure `/sysroot` is read-only. This
requires a bit of a dance because `/sysroot` is actually the same
filesystem as `/`; so we make `/etc` a writable bind mount in this case.
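As a sketch, the opt-in in the repo config would look something like this (the group/key split follows from the `sysroot/readonly` key name above):
[sysroot]
readonly=true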
We also need to handle `/var` in the "OSTree default" case of a bind
mount; the systemd generator now looks at the writability state of
`/sysroot` and uses that to determine whether it should have the
`var.mount` unit happen before or after `ostree-remount.service`.
Also add an API to instruct the libostree shared library
that the caller has created a new mount namespace. This way
we can freely remount read-write.
This approach extends, in a much better way, the previous work
we did to support remounting `/boot` read-write.
Closes: https://github.com/ostreedev/ostree/issues/1265
This allows copying the state from one OstreeAsyncProgress object to
another, atomically, without invoking the callback. This is needed in
libflatpak, in order to chain OstreeAsyncProgress objects so that you
can still receive progress updates when iterating a different
GMainContext than the one that the OstreeAsyncProgress object was
created under.
See https://github.com/flatpak/flatpak/pull/3211 for the application of
this API.
Define an `OstreeKernelArgsEntry` structure, which holds
both the key and the value. The kargs order array stores
entries for each key/value pair, instead of just the keys.
The hash table is used to locate entries, by storing
entries in a pointer array for each key. The same public
interface is preserved, while maintaining ordering
information of each key/value pair when
appending/replacing/deleting kargs.
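A rough sketch of the data layout this describes (field names are assumptions; the real structure may well be opaque or laid out differently):
```
typedef struct
{
  char *key;    /* e.g. "root" */
  char *value;  /* e.g. "LABEL=fedora", or NULL for a valueless karg */
} OstreeKernelArgsEntry;

/* The kargs object then keeps:
 *  - an ordered array of OstreeKernelArgsEntry* (preserves key/value ordering)
 *  - a hash table mapping each key to an array of its entries, for lookup
 */
```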
Fixes: #1859
To allow for FIPS mode, we need to also install the HMAC file from
`/usr/lib/modules` to `/boot` alongside the kernel image where the
`fips` dracut module will find it. For details, see:
https://github.com/coreos/fedora-coreos-tracker/issues/302
Note I didn't include the file in the boot checksum since it's itself a
checksum of the kernel, so we don't really gain much here other than
potentially causing an unnecessary bootcsum bump.
I was hitting `SIGSEGV` when running `cosa build` and narrowed it down
to #1954. What's happening here is that because we're using the default
context, when we unref it in the out path, it may not actually destroy
the `GSource` if it (the context) is still ref'ed elsewhere. So then,
we'd still get events from it if subsequent operations iterated the
context.
This patch is mostly a revert of #1954, except that we still keep a ref
on the `GSource`. That way it is always safe to destroy it afterwards.
(And I've also added a comment to explain this better.)
We're creating the timer source and then passing ownership to the
context, but because we didn't free the pointer, we would still call
`g_source_destroy` in the exit path. We'd do this right after doing
`unref` on the context too, which would have already destroyed and
unref'ed the source.
Drop that and just restrict the scope of that variable down to make
things more obvious.
Just noticed this after reviewing #1953.
In glib 2.62 this has been changed to emitting a warning. Use G_STRFUNC
instead, which has been available for a long time and is already used in
other places in ostree.
zipl is a bit special in that it parses the BLS config files
directly *but* we need to run the command to update the "boot block".
Hence, we're not generating a separate config file like the other
backends. Instead, extend the bootloader interface with a `post_bls_sync`
method that is run in the same place we swap the `boot/loader` symlink.
We write a "stamp file" in `/boot` that says we need to run this command.
The reason we use a stamp file is to handle the case where the system is
interrupted after the BLS file is updated but before zipl is triggered,
which would leave the zipl boot records stale.
This opens the door to making things eventually consistent/reconcilable
by later adding a systemd unit that runs `zipl` if we were interrupted -
I think we should eventually take this approach everywhere rather than
requiring `/boot/loader` to be a symlink.
Author: Colin Walters <walters@verbum.org>
Tested-by: Tuan Hoang <tmhoang@linux.ibm.com>
Co-Authored-By: Tuan Hoang <tmhoang@linux.ibm.com>
More "scan-build doesn't understand GError and our out-param conventions"
AKA "these errors would be impossible with Rust's sum type Result<> approach".
Got this error when trying to rebase libostree in RHEL:
```
Error: CLANG_WARNING: [#def1]
libostree-2019.2/src/libostree/ostree-repo-checkout.c:375:21: warning: Access to field 'disable_xattrs' results in a dereference of a null pointer (loaded from variable 'repo')
```
I think what's happening is it sees us effectively testing
`if (repo == NULL)` via the `while (current_repo)`. Let's
tell it we're sure it's non-null right after the loop.
Tiny release. Just want to get out the important bugfixes instead of
backporting patches (notably the gpg-agent stuff and
`ostree-finalize-staged.service` ordering).
Closes: #1927
Approved by: cgwalters
After the corruption has been fixed with "ostree fsck -a --delete", a
second run of the "ostree fsck" command will print "X partial commits
not verified" and exit with a zero status.
The zero exit code makes it hard to detect whether a repair operation needs
to be run. Whenever fsck creates a partial commit, it should add a
reason for the partial commit to the state file found in
state/<hash>.commitpartial. This allows a future run of
fsck to still return an error, indicating that the repository is still
damaged and awaiting repair.
Additional reason codes could be added in the future for why a partial
commit exists.
Text from: https://github.com/ostreedev/ostree/pull/1880
====
cgwalters commented:
To restate, the core issue is that it's valid to have partial commits
for reasons other than fsck pruned bad objects, and libostree doesn't
have a way to distinguish.
Another option perhaps is to write e.g. fsck-partial into the
statefile state/<hash>.commitpartial which would mean "partial, and
expected to exist but was pruned by fsck" and fsck would continue to
error out until the commit was re-pulled. Right now the partial stamp
file is empty, so it'd be fully compatible to write a rationale into
it.
====
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
Closes: #1910
Approved by: cgwalters
If there are different deployments for the same commit version, the BLS
snippets will have the same title fields (but different version fields):
$ grep title *
ostree-1-testos.conf:title TestOS 42 20190902.0 (ostree)
ostree-2-testos.conf:title TestOS 42 20190902.0 (ostree)
ostree-3-testos.conf:title TestOS 42 20190902.0 (ostree)
But bootloaders could expect the title field to be unique for BLS files.
For example, the zipl bootloader used in the s390x architecture uses the
field to name the boot sections that are created from the BLS snippets.
So two BLS snippets having the same title would lead to zipl failing to
create the IPL boot sections because they would have duplicated names:
$ zipl
Using config file '/etc/zipl.conf'
Using BLS config file '/boot/loader/entries/ostree-3-testos.conf'
Using BLS config file '/boot/loader/entries/ostree-2-testos.conf'
Using BLS config file '/boot/loader/entries/ostree-1-testos.conf'
Error: Config file '/etc/zipl.conf': Line 0: section name 'TestOS 42 20190902.0 (ostree)' already specified
Avoid this by always including the deployment index along with the commit
version in the title field, so this will be unique even if there are BLS
files for deployments that use the same commit version:
$ grep title *
ostree-1-testos.conf:title TestOS 42 20190902.0 (ostree:2)
ostree-2-testos.conf:title TestOS 42 20190902.0 (ostree:1)
ostree-3-testos.conf:title TestOS 42 20190902.0 (ostree:0)
$ zipl
Using config file '/etc/zipl.conf'
Using BLS config file '/boot/loader/entries/ostree-3-testos.conf'
Using BLS config file '/boot/loader/entries/ostree-2-testos.conf'
Using BLS config file '/boot/loader/entries/ostree-1-testos.conf'
Building bootmap in '/boot'
Building menu 'zipl-automatic-menu'
Adding #1: IPL section 'TestOS 42 20190902.0 (ostree:0)' (default)
Adding #2: IPL section 'TestOS 42 20190902.0 (ostree:1)'
Adding #3: IPL section 'TestOS 42 20190902.0 (ostree:2)'
Preparing boot device: dasda (0120).
Done.
Closes: #1911
Approved by: cgwalters
Currently the BLS fragment fields are written in a non-deterministic order.
The order of the fields depends on how the iterator of the options GHashTable
walks over the key/value pairs.
But some bootloaders expect the fields to be written in a certain order.
For example the zipl bootloader (used in the s390x architecture) fails to
parse BLS files if the first field is not the 'title' field, since that's
used to name the zipl boot sections that are created from the BLS files.
Write the fields in a deterministic order, following what is used in the
example file of the BootLoaderspec document:
https://systemd.io/BOOT_LOADER_SPECIFICATION
Related: https://github.com/ostreedev/ostree/issues/1888
Closes: #1904
Approved by: cgwalters
OSTree has some logic to preserve comment lines in the BLS fragments, but
the BLS fragments are always created on new deployments so the comments
are never carried.
Also, OSTree never writes BLS fragments with comments, so these will only
be present in BLS files that were modified outside of OSTree, which is
something that should be avoided in general.
Finally, there is a bug in the logic that causes BLS files to have lines
with only a newline character.
The ostree_bootconfig_parser_parse_at() function reads the bootconfig file
using glnx_fd_readall_utf8(), and this function NUL-terminates the returned
string holding the file contents.
So when the string is later split using '\n' as the delimiter, the last token
is an empty string, and a bogus GVariant gets added to the lines GPtrArray
in the OstreeBootconfigParser struct.
This leads to bootconfig files that contain lines with only a newline
character, since the key in that GVariant is empty and won't be
present in the options GHashTable of the OstreeBootconfigParser struct.
So let's just remove that logic, since it is never used and causes BLS files
to have spurious empty lines.
Before this patch:
$ tail -n 4 /boot/loader/entries/ostree-1-testos.conf | hexdump -C
00000000 74 69 74 6c 65 20 54 65 73 74 4f 53 20 34 32 20 |title TestOS 42 |
00000010 32 30 31 39 30 38 32 34 2e 30 20 28 6f 73 74 72 |20190824.0 (ostr|
00000020 65 65 29 0a 0a 0a 0a |ee)....|
00000027
After this patch:
$ tail -n 4 /boot/loader/entries/ostree-1-testos.conf | hexdump -C
00000000 76 65 72 73 69 6f 6e 20 31 0a 6f 70 74 69 6f 6e |version 1.option|
00000010 73 20 72 6f 6f 74 3d 4c 41 42 45 4c 3d 4d 4f 4f |s root=LABEL=MOO|
00000020 20 71 75 69 65 74 20 6f 73 74 72 65 65 3d 2f 6f | quiet ostree=/o|
00000030 73 74 72 65 65 2f 62 6f 6f 74 2e 31 2f 74 65 73 |stree/boot.1/tes|
00000040 74 6f 73 2f 61 65 34 36 34 39 36 38 30 64 33 65 |tos/ae4649680d3e|
00000050 38 33 62 32 34 65 34 37 66 38 64 66 31 30 38 31 |83b24e47f8df1081|
00000060 38 62 66 36 39 38 39 64 36 34 37 61 62 32 38 38 |8bf6989d647ab288|
00000070 64 31 63 30 39 38 30 36 65 34 61 33 36 61 34 65 |d1c09806e4a36a4e|
00000080 62 62 66 36 2f 30 0a 6c 69 6e 75 78 20 2f 6f 73 |bbf6/0.linux /os|
00000090 74 72 65 65 2f 74 65 73 74 6f 73 2d 61 65 34 36 |tree/testos-ae46|
000000a0 34 39 36 38 30 64 33 65 38 33 62 32 34 65 34 37 |49680d3e83b24e47|
000000b0 66 38 64 66 31 30 38 31 38 62 66 36 39 38 39 64 |f8df10818bf6989d|
000000c0 36 34 37 61 62 32 38 38 64 31 63 30 39 38 30 36 |647ab288d1c09806|
000000d0 65 34 61 33 36 61 34 65 62 62 66 36 2f 76 6d 6c |e4a36a4ebbf6/vml|
000000e0 69 6e 75 7a 2d 33 2e 36 2e 30 0a 74 69 74 6c 65 |inuz-3.6.0.title|
000000f0 20 54 65 73 74 4f 53 20 34 32 20 32 30 31 39 30 | TestOS 42 20190|
00000100 38 32 34 2e 30 20 28 6f 73 74 72 65 65 29 0a |824.0 (ostree).|
0000010f
Closes: #1904
Approved by: cgwalters
I've seen people confused by this error in the case where
`/boot` isn't mounted or the BLS fragments were deleted, etc.
If you understand ostree deeply it's clear, but let's do
better here and give a direct error message for the case where
we can't find `/boot/loader`, which is the majority of these.
The other case could happen if e.g. just the BLS fragment
for the booted deployment was deleted; let's reword that
one a bit too.
Closes: #1905
Approved by: rfairley
When running under qemu, unimplemented ioctls such as FIFREEZE
return ENOSYS, and this causes the deployment to fail.
Catch this and handle it like EOPNOTSUPP.
I'm not sure if qemu's behaviour is fully correct here (or if it should
return EOPNOTSUPP) but it's trivial to handle regardless.
Closes: #1901
Approved by: cgwalters
Add dummy stubs for the public GPG functions, to be compiled instead of
the original code when GPG support is disabled.
This is needed to keep the API backward compatible.
Based on original code from file `ostree-gpg-verify-result.c`.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Closes: #1889
Approved by: cgwalters
Some gpg-named functions/variables should be used for any signature
system, so remove the "gpg_" prefix from them to avoid confusion.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Closes: #1889
Approved by: cgwalters
Do not build the code related to GPG signing and verification if
GPGME support is disabled.
Public functions return the error 'G_IO_ERROR_NOT_SUPPORTED' if a
GPG-related check is requested.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Closes: #1889
Approved by: cgwalters
There's a valid use case for enabling the timestamp downgrade check
while still also using override commits.
We'll make use of this in Fedora CoreOS, where the agent specifies the
exact commit to upgrade to, while still enforcing that it be newer.
Closes: #1891
Approved by: cgwalters
This way projects can dispatch at run-time based on ostree's
build time options, e.g. detect the availability of GPG.
Closes: #1890
Approved by: jlebon
Move the OstreeKernelArgs autoptr cleanup definition to
ostree-autocleanups.h, which will only expose the definitions when
building ostree or if glib is new enough. The include of
ostree-kernel-args.h needs to be moved before ostree-autocleanups.h in
ostree.h so that the OstreeKernelArgs type is declared when the autoptr
cleanup is defined. All the places it's used already pull in libglnx.h
first so that the compat macros are picked up if glib is too old during
the ostree build.
Closes: #1892
Approved by: jlebon
When a temporary directory is used for GPG operations, it's pretty clear
that the running agent will be useless after the directory is deleted.
Call the new `ot_gpgme_kill_agent ()` helper to kill gpg-agent rather
than leaving it hanging around forever.
As it turns out, gnupg does have code to make gpg-agent automatically
exit when the homedir is removed (https://dev.gnupg.org/T2756), but
that's only available on gnupg 2.2 or newer. Possibly this code can be
dropped later when that's more widely deployed or users/distros have
been advised to backport the necessary changes.
Closes: #1799
Approved by: cgwalters
Introduce a new signature attribute for the key expiration timestamp and
display it when the key has a non-zero expiration time. Without this,
the error shown is `BAD signature`, which isn't correct.
Closes: #1872
Approved by: cgwalters
This change makes the current kargs API in src/libostree/ostree-kernel-args.c
public and adds documentation.
Upstreams the new kargs API from rpm-ostree/src/libpriv/rpmostree-kargs-process.c.
Merges libostree_kernel_args_la_SOURCES into libostree_1_la_SOURCES in Makefile-libostree.am.
Upstreams tests/check/test-kargs.c from rpm-ostree.
Closes: #1833
Closes: #1869
Approved by: jlebon
Similar to ostree_repo_write_archive_to_mtree(), but takes
a file descriptor to read the archive from instead of mandating
a file path.
Useful for importing archives into an OSTree repo over a socket
or from standard input in command line tools.
Closes: #1862
Approved by: jlebon
Use GIOErrorEnum as the return value for
_ostree_fetcher_http_status_code_to_io_error(), to avoid an
implicit cast from GIOError.
Closes: #1857
Approved by: cgwalters
Teach `ostree-finalize-staged.service` to check for a file in `/run` to
determine if it should do the finalization. This will be used in
RPM-OSTree, where we want to be able to separate out "preparing updates"
from "making update the default" for more fine-grained control. See:
https://github.com/projectatomic/rpm-ostree/issues/1748
Closes: #1841
Approved by: cgwalters
This can happen if a deployment was staged and later cleaned up. Just
as a helper when debugging issues, let's explicitly mention that case.
Closes: #1841
Approved by: cgwalters
Rather than wrapping each instance of `sd_journal_*` with
`HAVE_SYSTEMD`, let's just add some convenience macros that are
no-ops if we're not compiling with systemd.
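A minimal sketch of such a macro (the macro name here is an assumption; the real ones may differ):
```
#ifdef HAVE_SYSTEMD
#include <systemd/sd-journal.h>
#define ot_journal_send(...) sd_journal_send (__VA_ARGS__)
#else
#define ot_journal_send(...) do { } while (0)
#endif
```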
Closes: #1841
Approved by: cgwalters
Generate a grub2 config using the pending deployment, if a grub2
bootloader is detected in the sysroot. Allows grub2-mkconfig
to run if there are no previous deployments.
Fixes: #1774
Closes: #1831
Approved by: jlebon
Otherwise, we'll be subject to whatever `umask` is currently in effect. Normally,
processes should respect `umask` when creating files and directories,
but specifically for `ostree admin unlock` (or `rpm-ostree usroverlay`),
this poses a problem since e.g. a `/usr` with mode 0700 will break any
daemon that doesn't run as root and needs to read files under `/usr`,
such as polkitd.
This patch just does a `chmod()` after the `mkdir()`. An alternative
would be to do `umask(0000)` after forking into the child process
that'll call `mount()`, but that'd require also moving the `mkdir()`
calls into there, making for a more intrusive patch.
Closes: #1843
Approved by: cgwalters
Currently for a "normal" refspec you can choose to use
ostree_repo_resolve_rev_ext() instead of ostree_repo_resolve_rev() if
you only want to look at local refs (in refs/heads/) not remote ones.
This commit provides the analogous functionality for
ostree_repo_resolve_collection_ref() by adding a flag
OSTREE_REPO_RESOLVE_REV_EXT_LOCAL_ONLY and implementing it. This
will be used by Flatpak.
Closes: #1825
Approved by: jlebon
Currently the flag OSTREE_REPO_LIST_REFS_EXT_EXCLUDE_REMOTES for
ostree_repo_list_collection_refs() means that refs in refs/remotes/
should be excluded but refs in refs/mirrors/ should still be checked, in
addition to refs/heads/ which is always checked. However in some
situations you want to exclude both remote and mirrored refs and only
check local "owned" ones. So this
commit adds a new flag OSTREE_REPO_LIST_REFS_EXT_EXCLUDE_MIRRORS which
lets you exclude refs/mirrors/ from the listing.
This way we can avoid breaking API but still allow the listing of local
collection-refs.
The impetus for this change is that I'm changing Flatpak to make more
use of refs/mirrors, and we need a way to specify that a collection-ref
is local when using ostree_repo_resolve_collection_ref() in, for
example, the implementation of the repo command. The subsequent commit
will make the changes needed there.
Closes: #1825
Approved by: jlebon
My last commit "lib/repo-refs: Resolve collection-refs in-memory and in
parent repos" changed ostree_repo_resolve_collection_ref() to check the
in-memory set of refs *after* failing to find the ref on disk but that's
not what we want. We want to use the in-memory set of refs first,
because those are the most up to date commits, and then fall back to the
on-disk repo and finally fall back to checking any parent repo. This
commit makes such a change to the order of operations, which is
consistent with how ostree_repo_resolve_rev() works.
Aside from this change being logical, it also fixes some unit test
failures on an unmerged branch of flatpak:
https://github.com/flatpak/flatpak/pull/2705
Also, tweak the comments here.
Closes: #1825
Approved by: jlebon
On at least one user's computer, g_getenv("http_proxy") returns the
empty string, so check for that and treat it as no proxy rather than
printing a warning.
See https://github.com/flatpak/flatpak/issues/2790
Closes: #1835
Approved by: cgwalters
Currently the P2P code requires you to trust every remote you have
configured to the same extent, because a remote controlled by a
malicious actor can serve updates to refs (such as Flatpak apps)
installed from other remotes.[1] The way this attack would play out is
that the malicious remote would deploy the same collection ID as the
victim remote, and would then be able to serve updates for it.
One possible remedy would be to make it an error to configure remotes
such that two have the same collection ID but differing GPG keys. I
attempted to do that in Flatpak[2] but it proved difficult because it is
valid to configure two remotes with the same collection ID, and they may
then each want to update their keyrings which wouldn't happen
atomically.
Another potential solution I've considered is to add a `trusted-remotes`
option to ostree_repo_find_remotes_async() which would dictate which
keyring to use when pulling each ref. However the
ostree_repo_finder_resolve_async() API would still remain vulnerable,
and changing that would require rewriting a large chunk of libostree's
P2P support.
So this commit represents a third attempt at mitigating this security
hole, namely to have the client specify which remote to use for GPG
verification at pull time. This way the pull will fail if the commits
are signed with anything other than the keys we actually trust to serve
updates.
This is implemented as an option "ref-keyring-map" for
ostree_repo_pull_from_remotes_async() and
ostree_repo_pull_with_options() which dictates the remote to be used for
GPG verification of each collection-ref. I think specifying a keyring
remote for each ref is better than specifying a remote for each
OstreeRepoFinderResult, because there are some edge cases where a result
could serve updates to refs which were installed from more than one
remote.
The PR to make Flatpak use this new option is here[3].
[1] https://github.com/flatpak/flatpak/issues/1447
[2] https://github.com/flatpak/flatpak/pull/2601
[3] https://github.com/flatpak/flatpak/pull/2705
Closes: #1810
Approved by: cgwalters
When writing a delta to a file this may not always be recorded
in the filename, and it's useful data.
Ref: https://mail.gnome.org/archives/ostree-list/2019-February/msg00000.html
This also required teaching `show` to accept a file path.
Note...for some reason `test-deltas.sh` breaks when run from
a tty - we get `SIGTTIN` which implies something is reading from
the tty but it wasn't obvious to me what.
Closes: #1823
Approved by: jlebon
In Silverblue right now, the boot menu title looks like this:
Fedora 29.20190301.0 (Workstation Edition) 29.20190301.0 (ostree)
This is because RPM-OSTree's `mutate-os-release` feature is enabled,
which injects the OSTree version string directly into `VERSION` and
`PRETTY_NAME`. So appending the version string again is a bit redundant.
Let's just do a simple substring check here before adding the version to
the title.
Closes: #1829
Approved by: cgwalters
The sysroot.bootloader key configures the bootloader
that OSTree uses when deploying a sysroot. Having this key
allows opting out of the default bootloader backend detection,
which is preferable when creating a first
deployment from the sysroot (#1774).
As of now, the key can take the values "auto" or "none". If
the key is not given, the value defaults to "auto".
"auto" causes _ostree_sysroot_query_bootloader() to be used
when writing a new deployment, which is the original behavior
that dynamically detects which bootloader to use.
"none" avoids querying the bootloader dynamically. The BLS
config fragments are still written to
sysroot/boot/loader/entries for use by higher-level software.
More values can be supported in future to specify a single
bootloader, different behavior for the bootloader code, or
a list of bootloaders to try.
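For example, a repo config that skips bootloader detection entirely would look like this sketch (the group/key split follows from the sysroot.bootloader name):
[sysroot]
bootloader=none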
Resolves: #1774
Closes: #1814
Approved by: jlebon
Rename ot_keyfile_get_string_as_list() to
ot_keyfile_get_string_list_with_separator_choice() which expresses
more clearly why the function is needed. Also shorten the
function comment.
Closes: #1814
Approved by: jlebon
Currently the behavior of ostree_repo_resolve_rev() is that it tries to
resolve a ref to a commit by checking the refs/ directories, but also by
checking for in-memory ref-checksum pairs which are part of an
in-progress transaction and also by checking the parent repo if one
exists. Currently ostree_repo_resolve_collection_ref() only checks the
refs/ directories, so this commit makes its behavior analogous, since it
is the analogous API which supports collection-refs.
The impetus for this was that currently Flatpak uses
ostree_repo_resolve_rev() to load a commit after doing a P2P pull in
flatpak_dir_do_resolve_p2p_refs(), but that assumes the ref came from
the same remote that originally provided it, which might not be the case
if more than one remote has the same collection ID configured. And
changing Flatpak to use ostree_repo_resolve_collection_ref() doesn't
work without this patch.
Closes: #1821
Approved by: pwithnall
Add the OSTREE_REPO_REMOTE_CHANGE_REPLACE operation to the
OstreeRepoRemoteChange enum. This operation will add a remote or replace
an existing one. It respects the location of the remote configuration
file when replacing and the remotes config dir settings when adding a
new remote.
Closes: #1166
Approved by: cgwalters
We have a `http2=[0|1]` remote config option; let's have the
`--disable-http2` build option define the default for that. This way
it's easy to still enable http2 for testing even if
we have it disabled by default.
Closes: #1798
Approved by: jlebon
Similar as available for u-boot (ce2995e1dc)
and syslinux (c5112c25e4), enable parsing
and writing devicetree filename into grub.cfg.
This is required by arm64-based devices running edk2 instead of u-boot
as the main bootloader (e.g. 96boards HiKey and HiKey960).
Signed-off-by: Ricardo Salveti <ricardo@foundries.io>
Closes: #1790
Approved by: cgwalters
We want a way to disable the min-free-space check. Initially, it felt
natural to add an OSTREE_REPO_PULL_FLAGS_DISABLE_FREE_SPACE_CHECK flag, but
the problem is that prepare_transaction() does not have an OstreeRepoPullFlags
parameter which we could parse there. On top of that, prepare_transaction()
enforces the min-free-space check and won't let the transaction proceed if
the check fails.
This is pretty bad in conjunction with "inherit-transaction", which is what
Flatpak uses. There is no way to disable this check unless we remove
it altogether from prepare_transaction().
This issue came out to light when flatpak wasn't able to write metadata
after fetching from remote:
[uajain@localhost ~]$ flatpak remote-info flathub org.kde.Platform//5.9
error: min-free-space-size 500MB would be exceeded
Metadata objects help with housekeeping, and restricting them means
restricting crucial UX functionality (like search and new updates)
in clients such as gnome-software. The error banners originating from
these failures are also abrupt and not very helpful to the user. This
is the specific instance of the issue this patch tries to address.
See https://github.com/flatpak/flatpak/issues/2139 for discussion.
Closes: #1779
Approved by: mwleeds
The way _ostree_repo_import_object() is written, a hardlink copy is only
attempted if the source repo is trusted, so update the docs for
ostree_repo_import_object_from_with_trust() to reflect that.
Closes: #1777
Approved by: cgwalters
This allows specifying gpgpath as a list of
paths that can point to a file or a directory. If a directory path
is given, the paths of all regular files in the directory are added
to the remote as ASCII-armored GPG keys. If the path is not a directory,
the file is added directly (whether it is a regular file or empty; errors
will be reported later when verifying GPG keys, e.g. when pulling).
Adding the gpgkeypath property looks like:
ostree --repo=repo remote add --set=gpgpath="/path/key1.asc,/path/keys.d" R1 https://example.com/some/remote/ostree/repo
Closes #773
Closes: #1773
Approved by: cgwalters
When falling back to copying objects when importing them into a
bare-user repo, we only actually need to transfer over the
`user.ostreemeta` xattr.
This allows the destination repo to be on a separate filesystem that
might not even support `security.selinux`. (I hit this while importing
over virtio-9p).
Closes: #1771
Approved by: cgwalters
I found this useful while hacking on rpm-ostree but I think it might be
useful enough to upstream. This stat is really helpful for validating
that a pipeline is hitting the devino cache sweet spot.
Closes: #1772
Approved by: cgwalters
If a file ".wh..wh..opq" is present in a directory, delete anything
from lower layers that is already in that directory.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Closes: #1486
Approved by: cgwalters
This renames a config key to make its semantics more obvious. Despite
what the commit message says, it only applies when a set of repo finders
is not specified (either on the command line or in a library API call).
This also renames the corresponding ostree_repo_get function. We can do
this since it hasn't been released yet.
Closes: #1763
Approved by: pwithnall
Rather than manually starting the `ostree-finalize-staged.service` unit,
we can leverage systemd's path units for this. It fits quite nicely too,
given that we already have a path we drop iff we have a staged
deployment.
To give some time for the preset to make it to systems, we don't yet
drop the explicit call to `systemctl start`. Though we do make it
conditional based on a DEBUG env var so that we can actually test it in
CI for now. Once we're sure this has propagated, we can drop the
`systemctl start` path and the env var together.
Closes: #1740
Approved by: cgwalters
This commit disables searching on the local network for refs, unless
explicitly requested by the user either by changing the value of the
"core.repo-finders" config option, or by passing an OstreeRepoFinderAvahi to
ostree_repo_find_remotes_async() / ostree_repo_finder_resolve_async(),
or by specifying "lan" in the --finders option of the find-remotes
command.
The primary reason for this is that ostree_repo_find_remotes_async()
takes about 40% longer to complete with the LAN finder enabled, and that
API is used widely (e.g. in every flatpak operation). It's also probable
that some users don't want ostree doing potentially unexpected traffic
on the local network, even though everything pulled from a peer is GPG
verified.
Flathub will soon deploy collection IDs to everyone[1] so these code
paths will soon see a lot more use and that's why this change is being
made now.
Endless is the only potential user of the LAN updates feature, and we
can revert this patch on our fork of ostree. For it to be used outside
Endless OS we will need to upstream eos-updater-avahi and
eos-update-server into ostree.
[1] https://github.com/flathub/flathub/issues/676
Closes: #1758
Approved by: cgwalters
Currently libostree essentially has two modes when it's pulling refs:
the "legacy" code paths pull only from the Internet, and the code paths
that are aware of collection IDs try to pull from the Internet, the
local network, and mounted filesystems (such as USB drives). The problem
is that while we eventually want to migrate everyone to using collection
IDs, we don't want to force checking LAN and USB sources if the user
just wants to pull from the Internet, since the LAN/USB code paths can
have privacy[1], security[2], and performance[3] implications.
So this commit implements a new repo config option called "repo-finders"
which can be configured to, for example, "config;lan;mount;" to check
all three sources or "config;mount;" to disable searching the LAN. The
set of values mirror those used for the --finders option of the
find-remotes command. This configuration affects pulls in three places:
1. the ostree_repo_find_remotes_async() API, regardless of whether or
not the user of the API provided a list of OstreeRepoFinders
2. the ostree_repo_finder_resolve_async() /
ostree_repo_finder_resolve_all_async() API
3. the find-remotes command
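As a sketch, disabling the LAN finder via this option in the repo config would look something like the following (the section placement is assumed, matching the core.repo-finders key mentioned earlier):
[core]
repo-finders=config;mount;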
This feature is especially important right now since we soon want to
have Flathub publish a metadata key which will have Flatpak clients
update the remote config to add a collection ID.[4]
This effectively fixes https://github.com/flatpak/flatpak/issues/1863
but I'll patch Flatpak too, so it doesn't pass finders to libostree only
to then have them be removed.
[1] https://github.com/flatpak/flatpak/issues/1863#issuecomment-404128824
[2] https://github.com/ostreedev/ostree/issues/1527
[3] Based on how long the "ostree find-remotes" command takes to
complete, having the LAN finder enabled slows down that step of the
pull process by about 40%. See also
https://github.com/flatpak/flatpak/issues/1862
[4] https://github.com/flathub/flathub/issues/676
Closes: #1758
Approved by: cgwalters
The idea is that if the process is running as root, it can change
ownership of newly written files to match the owner of the repo.
Unfortunately, it currently applies in the other direction, too - a
non-root user writing to a root owned repository. If the repo is
writable by the user but owned by root, it can still create files and
directories there, but it can't change ownership of them.
This feature comes from
https://bugzilla.gnome.org/show_bug.cgi?id=738954. As it turns out, this
feature was never completed. It only works on content objects and not
metadata objects, refs, deltas, summaries, etc. Rather than try to fix
all of those, remove the feature until someone has interest in
completing it.
Closes: #1754
Approved by: cgwalters
Actually testing the patch to add `--force-copy-zerosized` to
rpm-ostree tripped over the fact that it uses `--union-identical`,
and we just hit an assertion failure with that combination.
Fix this by copying over the logic we have for the hardlink case.
Closes: #1753
Approved by: jlebon
In rpm-ostree we've hit a few cases where hardlinking zero-sized
files causes us problems. The most prominent is lock files in
`/usr/etc`, such as `/usr/etc/selinux/semanage.LOCK`. If there
are two zero-sized lock files to grab, but they're hardlinked,
then locking will fail.
Another case here is if one is using ostree inside a container
and don't have access to FUSE (i.e. `rofiles-fuse`), then the
ostree hardlinking can cause files that aren't ordinarily hardlinked
to become so, and mutation of one mutates all. An example where
this is concerning is Python `__init__.py` files.
Now, these lock files should clearly not be in the tree to begin
with, but - we're not gaining a huge amount by hardlinking these
files either, so let's add an option to disable it.
Closes: #1752
Approved by: jlebon
Write to the journal when starting to finalize a staged deployment.
Combined with the "Transaction completed" message we already emit, this
makes it easy later on to determine whether the operation was successful
by inspecting the journal. This will be used by `rpm-ostree status`.
Closes: #1750
Approved by: cgwalters
There are use cases for libostree as a local content store
for content derived or delivered via other mechanisms (e.g. OCI
images, RPMs, etc.). rpm-ostree today imports RPMs into OSTree
branches, and puts the RPM header value as commit metadata.
Some of these can be quite large because the header includes
permissions for each file. Similarly, some OCI metadata is large.
Since there's no security issues with this, support committing
such content.
We still by default limit the size of metadata fetches, although
for good measure we make this configurable too via a new
`max-metadata-size` value.
Closes: https://github.com/ostreedev/ostree/issues/1721
Closes: #1744
Approved by: jlebon
Currently the locking code checks if the value -1 was set for the config
key "lock-timeout-secs" and if so, a thread trying to acquire a lock
will block indefinitely. Positive values specify how long to attempt to
acquire a lock in a non-blocking way (the attempt is made once every
second). But when the value is read from the config file,
g_ascii_strtoull() is used, which converts it to an unsigned integer.
This commit makes libostree use g_ascii_strtoll() instead, so that it's
possible to set that key to -1 as intended.
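A tiny sketch of the difference (the helper is illustrative, not the actual config-parsing code):
```
#include <glib.h>

/* g_ascii_strtoull ("-1", ...) wraps around to a huge positive number, so the
 * "block indefinitely" value was never seen; g_ascii_strtoll preserves -1. */
static gint64
parse_lock_timeout (const char *value)
{
  return g_ascii_strtoll (value, NULL, 10);
}
```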
Closes: #1737
Approved by: jlebon
Copying the xattrs on metadata objects is wrong in general, we
don't "own" them. Notably this would fail in the situation of
doing a pull from e.g. a `bare-user` source to a destination
that was on a different mount point (so we couldn't hardlink),
and the source had e.g. a `security.selinux` attribute.
Closes: #1734
Closes: #1736
Approved by: jlebon
Just include the whole URL that failed if libcurl failed with something
elementary like CURLE_COULDNT_CONNECT or CURLE_COULDNT_RESOLVE_HOST.
Closes: #1731
Closes: #1732
Approved by: cgwalters
There's some subtlety to this, we don't handle all cases.
But the 99% cases are using `--sysroot deploy` to create an
initial deployment, and then doing upgrades from inside
a booted deployment.
It was only the latter case that didn't work with a separate `/var`.
Fixing all of them would probably require libostree to learn
how to e.g. look at `/etc/fstab` (or worse, systemd mount units?)
and handle the mounting. I don't think we want to do anything
like that right now, since there are no active drivers for the
use case.
Closes: https://github.com/ostreedev/ostree/issues/1729
Closes: #1730
Approved by: akiernan
Earlier, the actual reserved space (in blocks) was calculated inside the
transaction codepath, in ostree_repo_prepare_transaction(). However, while
reworking the ostree_repo_get_min_free_space_bytes() API, we realized that
this calculation can be done independently of the transaction's codepaths,
enabling the use of the ostree_repo_get_min_free_space_bytes() API regardless
of whether there is an ongoing transaction or not.
https://github.com/ostreedev/ostree/issues/1720
Closes: #1722
Approved by: pwithnall
This commit defines a metadata key that tells clients to update their
remote config to add a collection ID. This functionality is currently
implemented in Flatpak for the key "xa.collection-id", but there are two
good reasons for moving the key to OSTree:
1) Servers such as Flathub shouldn't set xa.collection-id in their
metadata now or in the medium term future, because many users are still
using old versions of Flatpak and OSTree[1] which would hit various
bugs[2][3][4] on the P2P code paths that are enabled by collection IDs.
Defining a new key means that only clients running recent
(as-yet-unreleased) versions of Flatpak and OSTree will pay attention to
it and deploy the collection ID, leaving the users on old versions
unaffected.
2) OSTree is as "invested" in collection IDs as Flatpak, so there's no
reason the key should be defined in Flatpak rather than here. According
to Philip Withnall, the reason the key was put in Flatpak originally was
that at the time there was uncertainty about tying OSTree to collection
IDs.
[1] https://ahayzen.com/direct/flathub.html#downloadsbyflatpakstacked
[2] https://github.com/ostreedev/ostree/commit/e4e6d85ea
[3] https://github.com/flatpak/flatpak/commit/5813639f
[4] https://github.com/flatpak/flatpak/commit/5b21a5b7
Closes: #1726
Approved by: pwithnall
There is no API method to remove a file or subdirectory from a MutableTree
besides directly manipulating the GHashTable returned by _get_files or
_get_subdirs. This isn't possible from an introspection binding that transforms
the returned GHashTable, and may also leave the tree checksum in an invalid
state. Introduce a new method so that removing files or subdirectories is
safe, and possible from bindings.
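For reference, the new method presumably looks something like this (the signature is assumed, not quoted from the patch):
```
/* Remove the file or subdirectory named @name from @self.
 * If @allow_noent is TRUE, a missing entry is not treated as an error. */
gboolean ostree_mutable_tree_remove (OstreeMutableTree *self,
                                     const char        *name,
                                     gboolean           allow_noent,
                                     GError           **error);
```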
Closes: #1724
Approved by: jlebon
This fixes typos and grammar in the docs for OstreeRepo, and copies the
information about OSTREE_REPO_MODE_BARE_USER_ONLY from ostree-core.h
Closes: #1725
Approved by: jlebon
In the OstreeRepoFinderAvahi implementation,
ostree_avahi_service_build_repo_finder_result() is where the DNS-SD
records are processed and turned into OstreeRepoFinderResult objects.
Each result object is supposed to have a hash table mapping refs to
checksums, so this is accomplished by first adding a placeholder (a ref
mapping to a NULL checksum) for each ref matched by the bloom filter,
and later filling in the checksums using the remote's summary file,
which happens in get_checksums(). The problem is that there's no
guarantee all the checksums will be resolved (non-NULL), so the
ostree_repo_finder_result_new() call then hits an assertion failure in
is_valid_collection_ref_map() leading to a crash (in the case that one
or more refs had NULL checksums).
There are at least two situations where the ref checksum might not be
found in the peer remote's summary file:
1) The bloom filter match was a false positive. This is going to happen
sometimes by design.
2) The peer remote's summary is out of sync with its DNS-SD records.
This shouldn't normally happen but it's still good to be robust to the
possibility; in Endless OS nothing guarantees the atomicity of updating
the summary and DNS-SD records.
This commit changes libostree to be robust to the possibility of refs
missing from the peer remote's summary, by removing any that still have
a NULL checksum associated with them after the summary has been fetched
and processed.
The other OstreeRepoFinder implementations don't have this issue because
they use summary files directly and therefore always have access to the
checksum.
Closes: #1717
Approved by: pwithnall
Check that the min-free-space-* value doesn't overflow when converted to bytes.
In a subsequent commit, we add a public API to read the
min-free-space-* value in bytes. The free space check
is enforced in terms of blocks instead of bytes. Therefore,
for consistency, we check while preparing the transaction that the
value doesn't overflow when converted to bytes.
https://phabricator.endlessm.com/T23694
Closes: #1715
Approved by: cgwalters
Debian and Debian-derived systems have their GRUB configuration file in
/boot/grub/grub.cfg, rather than /boot/grub2/grub.cfg. Detecting this
file is necessary to correctly generate GRUB boot configuration on
Debian systems.
Closes: #1714
Approved by: cgwalters
If a ref already exists, we are likely only a few commits behind the
current head of the ref, so it is probably better for bandwidth
consumption to pull the individual objects rather than the from-scratch
delta.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1709
Approved by: cgwalters
Since 9dc6ddce08 it has not been true that
'new_config' was simply ref'd: it's serialized, and then re-parsed into
a new GKeyFile.
Closes: #1707
Approved by: jlebon
Add an invalid-cache test error flag to ensure that the code that checks
for and recovers from a corrupted summary cache is hit. This helps make
sure that the recovery path is actually used without resorting to
G_MESSAGES_DEBUG.
Closes: #1698
Approved by: cgwalters
If for some reason the cached summary doesn't match the cached signature
then fetch the remote summary and verify again. Since commit c4c2b5eb
this is unlikely to happen since the summary will only be cached if it
matches the signature. However, if the summary cache has been corrupted
for any other reason then it's best to be safe and fetch the remote
summary again.
This is essentially the corollary to c4c2b5eb. Where that commit helps
you from getting into the corrupted summary cache in the first place,
this helps you get out of it. Without this the client can get wedged
until a prune or the remote server republishes the summary.
Closes: #1698
Approved by: cgwalters
copy_option() unnecessarily passed ownership of the value to
g_variant_dict_insert_value(), but that function already takes its own
reference, so the value was leaked.
Closes: #1702
Approved by: cgwalters
We need to have the g_auto(GLnxDirFdIterator) inside the loop, or
we don't correctly clean up when iterating several times.
Closes: #1700
Approved by: cgwalters
Now that we have `auto-update-summary`, there is no point in having
`commit-update-summary`. The latter also only had an effect through
the `commit` CLI command, whereas the former is embedded directly in
libostree.
There is one corner case that slips through: `commit` would update the
summary file even if orphan commits were created, which we no longer do
here. I can't imagine anyone relying on this, so it seems safe to drop.
Closes: #1689
Closes: #1693
Approved by: mwleeds
Mildly bikeshed, though I find the name `auto-update-summary` to be
easier to grok than `change-update-summary`. I think it's because it can
be read as "verb-verb-noun" rather than "noun-verb-noun".
Closes: #1693
Approved by: mwleeds
Normally, a configured remote will only serve refs with one associated
collection ID, but temporary remotes such as USB drives or LAN peers can
serve refs from multiple collection IDs which may use different GPG
keyrings. So the OstreeRepoFinderMount and OstreeRepoFinderAvahi classes
create dynamic OstreeRemote objects for each (uri, keyring) pair. So if
for example the USB mounted at /mnt/usb serves content from the
configured remotes "eos-apps" and "eos-sdk", the OstreeRepoFinderResult
array returned by ostree_repo_find_remotes_async() will have one result
with a remote called something like
file_mnt_usb_eos-apps.trustedkeys.gpg and the list of refs on the USB
that came from eos-apps, and another result with a remote
file_mnt_usb_eos-sdk.trustedkeys.gpg and the list of refs from eos-sdk.
Unfortunately while OstreeRepoFinderMount and OstreeRepoFinderAvahi
correctly only include refs in a result if the ref uses the associated
keyring, the find_remotes_cb() function used to clean up the set of
results looks at the remote summary file and includes every ref that's
in the intersection with the requested refs, regardless of whether it
uses a different remote's keyring. This leads to an error when you try
to pull from a USB containing refs from different collection IDs: the
pull using the wrong collection ID will error out with "Refspec not
found" and the result with the correct keyring will then be ignored "as
it has no relevant refs or they have already been pulled." So the pull
ultimately fails.
This commit fixes the issue by filtering refs coming from a dynamic
remote, so that only ones with the collection ID associated with the
keyring remote are examined. This only needs to be done for dynamic
remotes because you should be able to pull any ref from a configured
remote using its keyring. It's also only done when looking at the
collection map in the summary file, because LAN/USB remotes won't have a
"main" collection ID set (OSTREE_SUMMARY_COLLECTION_ID).
Closes: #1695
Approved by: pwithnall
This commit adds and implements a boolean repo config option called
"change-update-summary" which updates the summary file every time a ref
changes (additions, updates, and deletions).
The main impetus for this feature is that the `ostree create-usb` and
`flatpak create-usb` commands depend on the repo summary being up to
date. On the command line you can work around this by asking the user to
run `ostree summary --update` but in the case of GNOME Software calling
out to `flatpak create-usb` this wouldn't work because it's running as a
user and the repo is owned by root. That strategy also means flatpak
can't update the repo metadata refs for fear of invalidating the
summary.
Another use case for this relates to LAN updates. Specifically, the
component of eos-updater that generates DNS-SD records advertising ostree
refs depends on the repo summary being up to date.
Since ostree_repo_regenerate_summary() now takes an exclusive lock, this
should be safe to enable. However it's not enabled by default because of
the performance cost, and because it's more useful on clients than
servers (which likely have another mechanism for updating the summary).
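Enabling it would look roughly like this in the repo config (the section placement is assumed; note that a later change renames the key to auto-update-summary):
[core]
change-update-summary=true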
Fixes https://github.com/ostreedev/ostree/issues/1664
Closes: #1681
Approved by: jlebon
This ensures that commits aren't deleted and refs aren't added, removed,
or updated while the summary is being generated. This is in preparation
for adding a repo config option that will automatically regenerate the
summary on every ref change.
Closes: #1681
Approved by: jlebon
Using `MAX(0, $x)` here is useless since we're comparing against an
unsigned integer. Just unpack this and only subtract if it's safe to do
so.
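A minimal sketch of the unpacked pattern (names are illustrative):
```
/* With unsigned operands, MAX (0, total - processed) can never catch
 * underflow: the subtraction has already wrapped. Check first instead. */
static guint64
bytes_remaining (guint64 total, guint64 processed)
{
  if (total >= processed)
    return total - processed;
  return 0;
}
```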
Also, explicitly check for `fd >= 0` rather than just `!= -1` to be sure
it's a valid fd. And finally, explicitly check the return value of
`g_input_stream_read_all` as is done everywhere else in the tree and
make it clear that we're purposely ignoring the return value of `_flush`
here, but not in other places.
Discovered by Coverity.
Closes: #1692
Approved by: cgwalters
I initially was going to add a `G_DEFINE_AUTOPTR_CLEANUP_FUNC` for
`FetchStaticDeltaData`, but it honestly didn't seem worth mucking around
ownership everywhere and potentially getting it wrong.
Discovered by Coverity.
Closes: #1692
Approved by: cgwalters
In `write_metadata_object()`, make sure when creating tombstone commits
that we're actually passed an expected checksum to use.
In `write_dir_entry_to_mtree_internal()`, sanity check that `dfd_iter`
is indeed not `NULL` before trying to dereference it.
Discovered by Coverity.
Closes: #1692
Approved by: cgwalters
Since min_free_space_size_mb is considered before min_free_space_percent
in min_free_space_calculate_reserved_blocks(), it has to be considered
first when generating the error message in order for it to be accurate.
Closes: #1691
Approved by: jlebon
Previously, we would error out if both of the options were mentioned
in the config file (even if one of them is disabled with 0). There
were a few suggestions that this behavior was not quite right.
Therefore, instead of throwing an error and exiting, it's preferable to
warn the user. Hence, the solution that worked out is:
* Allow both options to exist simultaneously
* Check each config's value and decide:
  * If both are present and are non-zero, warn the user. Also, prefer
    min-free-space-size over the other (see the sketch below).
  * If both are absent, then fall back to min-free-space-percent=3%
  * Every other case is valid; hence, no warning
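A minimal illustration of the "warn and prefer size" case (the values are
arbitrary):
  [core]
  min-free-space-size=500MB
  min-free-space-percent=3
With both keys present and non-zero, a warning is emitted and
min-free-space-size wins.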
https://phabricator.endlessm.com/T13698
(cherry picked from commit be68991cf80f0aa1da7d36ab6e1d2c4d6c7cd3fb)
Signed-off-by: Robert McQueen <rob@endlessm.com>
Closes: #1685
Approved by: cgwalters
This is the inverse of https://github.com/ostreedev/ostree/pull/1558
aka commits cadece6c4f398ca61d21e497bd6e3fbb549f9cf6 and
3358698c86d80821d81443c906621c92672f99fb
Needed to fix `rpm-ostree kargs` test suite with default staging; skipping
a test here for now as eventually what we'll do is turn on the rpm-ostree
suite fully here.
Closes: #1677
Approved by: jlebon
Running `ostree commit --tree=ref=a --tree=dir=b` involves reading the
whole of a into an `OstreeMutableTree` before composing `b` on top. This
is inefficient if a is a complete rootfs and b is just touching one file.
We process O(size of a + size of b) directories rather than
O(number of touched dirs).
This commit makes `ostree commit` more efficient at composing multiple
directories together. With `ostree_mutable_tree_fill_empty_from_dirtree`
we create a lazy `OstreeMutableTree` which only reads the underlying
information from disk when needed. We don't need to read all the
subdirectories just to get the checksum of a tree already checked into the
repo.
This provides great speedups when composing a rootfs out of multiple other
rootfses, as we do in our build system. We compose multiple containers
together with:
ostree commit --tree=ref=base-rootfs --tree=ref=container1 --tree=ref=container2
and it is much faster now.
As a test I ran
time ostree --repo=... commit --orphan --tree=ref=big-rootfs --tree=dir=modified_etc
Where modified_etc contained a modified sudoers file under /etc. I used
`strace` to count syscalls and separately took timing measurements. To
test with a cold cache I ran
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
Results:
| | Before | After |
| -------------------- | ------ | ----- |
| Time (cold cache) | 8.1s | 0.12s |
| Time (warm cache) | 3.7s | 0.08s |
| `openat` calls | 53589 | 246 |
| `fgetxattr` calls | 78916 | 0 |
I'm not sure if this will have some negative interaction with the
`_ostree_repo_commit_modifier_apply` which is short-circuited here. I
think it was disabled for `--tree=ref=` anyway, but I'm not certain. All
the tests pass anyway.
I originally implemented this in terms of the `OstreeRepoFile` APIs, but
it was *way* less efficient, opening and reading many files unnecessarily.
Closes: #1643
Approved by: cgwalters
It is important that we use user-friendly error strings. The reason
being error strings are seen by users such as in GNOME Software's
error banner.
Closes: #1671
Approved by: jlebon
For `rpm-ostree ex livefs` we have a use case of pushing a rollback
deployment. There's no reason this should require deleting the staged
deployment (and doing so actually breaks livefs which tries to access
it as a data source).
I was initially very conservative here, but I think it ends up
being fairly easy to retain the staged deployment. We need to handle
two cases:
First, when the staged is *intentionally* deleted; here, we just need
to unlink the `/run` file, and then everything will be sync'd up after
reloading.
Second, (as in the livefs case) where we're retaining it,
e.g. adding a deployment to the end. What I realized here is that
we can have the code keep `new_deployments` as view without staged,
and then when we do the final reload we'll end up re-reading it from
disk anyways.
Closes: #1672
Approved by: jlebon
This implements a TODO item from
`ostree_mutable_tree_get_contents_checksum`. We no longer invalidate
the dirtree contents checksum at `get_contents_checksum` time - we
invalidate it when the mtree is modified. This is implemented by keeping
a pointer to the parent directory in each `OstreeMutableTree`. This gives
us stronger invariants on `contents_checksum`.
For even stronger guarantees about invariants we could make
`ostree_repo_write_mtree` or similar a member of `OstreeMutableTree` and
remove `ostree_mutable_tree_set_metadata_checksum`.
I think I've fixed a bug here too. We now invalidate the parent's contents
checksum when our metadata checksum changes, whereas we didn't before.
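As a rough sketch of the invalidation idea (using a hypothetical node type
rather than the real OstreeMutableTree internals):
  #include <glib.h>

  typedef struct Node
  {
    struct Node *parent;       /* NULL for the root */
    char *contents_checksum;   /* cached checksum; NULL when dirty */
  } Node;

  /* Walk up the parent chain, clearing cached checksums until we reach a
   * node that is already marked dirty. */
  static void
  invalidate_contents_checksum (Node *node)
  {
    while (node != NULL && node->contents_checksum != NULL)
      {
        g_clear_pointer (&node->contents_checksum, g_free);
        node = node->parent;
      }
  }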
Closes: #1655
Approved by: cgwalters
If ostree_repo_prepare_transaction() fails, we should reset the
repository’s state so that the failed call was essentially idempotent.
Do that by calling ostree_repo_abort_transaction() on the failure path.
Typically, the way for preparing a transaction to fail is for its
GCancellable to be triggered, rather than because any of the operations
involved in preparing a transaction are particularly failure prone.
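A minimal caller-side sketch of what this guarantees (error handling trimmed;
`repo` and `cancellable` are assumed to exist already):
  gboolean resuming = FALSE;
  g_autoptr(GError) local_error = NULL;
  if (!ostree_repo_prepare_transaction (repo, &resuming, cancellable, &local_error))
    {
      /* The failure path now aborts internally, so the repo is back in a
       * clean state and a later ostree_repo_prepare_transaction() call can
       * simply be retried. */
    }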
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1647
Approved by: cgwalters
min-free-space-* acts as a gating condition for whether we want to hold onto
caches in repo/tmp. If the free disk space falls below this threshold,
we flag it as an error and clean up the current boot's staging directory.
Closes: #1602
Approved by: jlebon
Here's a subtle bug in abort_transaction():
One of the policies of cleaning up is to skip the current boot's staging
directory. The function responsible for this is cleanup_tmpdir(), which tries
to lock each of the tmpdirs before deleting it. When it comes to the current
boot's staging dir, it tries to lock the directory (again!) but fails, as there
is already a lockfile present. Because the current boot's staging dir was
meant to be skipped anyway, the bug never surfaced and wasn't catastrophic.
  if (!_ostree_repo_try_lock_tmpdir (dfd, path, &lockfile, &did_lock, error))
    return FALSE;
  if (!did_lock)
    return TRUE; /* Note early return */
  ...
  if (g_str_has_prefix (path, self->stagedir_prefix))
    return TRUE; /* Note early return */
The actual check for skipping the current boot's staging dir was never reached
because the function returned early at the did_lock failure.
Therefore, execute cleanup_tmpdir() after releasing the lockfile in
abort_transaction(), so that cleanup_tmpdir() gets a chance to lock the current
boot's staging directory and succeed.
Closes: #1602
Approved by: jlebon
Currently the BLS snippets are named ostree-$ID-$VARIANT_ID-$index.conf,
but the BLS config files are actually sorted by using the version field
which is the inverse of the index.
In most places, _ostree_sysroot_read_boot_loader_configs() is used to
get the BLS files and this function already returns them sorted by the
version field. The only place where the index trailing number is used is
in the ostree-grub-generator script that lists the BLS files to populate
the grub config file.
But for some bootloaders the BLS filename is the criterion for sorting, by
treating the filename as a string version. So on these bootloaders the BLS
entries will be listed in the reverse order.
To avoid that, change the BLS snippet filenames to carry the version field
instead of the index, and also to place the version before the deployment name.
Make the filenames be of the form ostree-$version-$ID-$VARIANT_ID.conf
so the version comes before the deployment name.
Signed-off-by: Javier Martinez Canillas <javierm@redhat.com>
Closes: #1654
Approved by: cgwalters
If a function has a guint "out argument", passing a pointer to a gsize
is not, in general, valid. On an ILP64 platform there is no problem
since guint and gsize are identical, but on an LP64 platform it will
overwrite only the first word of the gsize, leaving the second word
unaffected. On little-endian machines, if the second word is
zero-initialized (as it is here), the result is numerically equal to
the guint, but on big-endian machines the result is around 4 billion
times what it should be, resulting in
ostree_repo_finder_config_resolve_async() reading past the end of
the array and causing undefined behaviour.
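An illustrative fragment (not the actual libostree code) of the broken pattern,
with a hypothetical out-parameter function:
  void get_count (guint *out_count);   /* hypothetical API taking a guint* */

  gsize n = 0;                         /* 8 bytes on LP64 */
  get_count ((guint *) &n);            /* writes only the first 4 bytes; on a
                                          big-endian LP64 machine those are the
                                          high bytes, so n ends up roughly 2^32
                                          times too large */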
In practice this caused assertion failures (and consequently test
failures) on Debian's s390x (z/Architecture), ppc64 (64-bit PowerPC)
and sparc64 (64-bit SPARC) ports.
Closes: #1640
Signed-off-by: Simon McVittie <smcv@debian.org>
Closes: #1641
Approved by: cgwalters
A prelude to my understanding. Unfortunately `OstreeMutableTree` provides
little encapsulation, as each member has setters† so it's difficult to come
up with a list of invariants.
† `files` and `subdirs` only have getters, but the getters return mutable
references to the internals, so we still can't reason about invariants.
Closes: #1645
Approved by: jlebon
We special-case AVAHI_ERR_NO_DAEMON to not cause warnings, but if
we pass AVAHI_CLIENT_NO_FAIL to avahi_client_new, we never actually
see AVAHI_ERR_NO_DAEMON. Instead, we will get AVAHI_ERR_BAD_STATE
when we try to use the client.
Closes: #1618
Signed-off-by: Simon McVittie <smcv@debian.org>
Closes: #1639
Approved by: cgwalters
During the pull, there is an explicit check for free space on disk
vs. the size of uncompressed delta; But while writing the new content
objects that are generated, they have to honor min-free-space-* checks
too. We enforce this check in _bare_content_commit as that is where
we can know the final size of the new content object.
Closes: #1614
Approved by: jlebon
We were referencing the txn bits outside of the lock in the error
path. Generally shouldn't matter, but e.g. Rust wouldn't let us do this, and
race detector tooling will warn about it.
Closes: #1632
Approved by: jlebon
I generally like having variables include their units where applicable;
timer variables having `_secs` or `_ms`, etc.
Closes: #1632
Approved by: jlebon
Systemd units using ConditionNeedsUpdate run if the .updated file in the
specified directory is older than /usr (or missing). Since /usr has an
mtime of 0, there's no way to have an older .updated file. Systemd units
typically specify ConditionNeedsUpdate=/etc or ConditionNeedsUpdate=/var
to support stateless systems like ostree.
Remove the file from the new deployment's /etc and the OS's /var
regardless of where they came from to ensure that these systemd units
run when booting new deployments. This will provide a method to run
services only on upgrade.
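A minimal sketch of a unit relying on this behaviour (the unit name and command
are made up):
  # run-once-per-upgrade.service
  [Unit]
  ConditionNeedsUpdate=/etc

  [Service]
  Type=oneshot
  ExecStart=/usr/libexec/run-once-per-upgrade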
Closes: #1628
https://bugzilla.gnome.org/show_bug.cgi?id=752950
Closes: #1631
Approved by: cgwalters
Currently when I run `ostree prune` it hits a seg fault when the
hash_func is used (in this case g_str_hash) from the call stack
_ostree_repo_prune_tmp() -> g_hash_table_contains() ->
g_hash_table_lookup_node(). So the key, in this case dent->d_name, must
be corrupt in some way.
glnx_dirfd_iterator_next_dent() uses readdir() to get the dirent struct.
And according to the man page for readdir(3), "POSIX.1 explicitly notes
that this field should not be used as an lvalue" (in reference to
d_name). So this commit avoids modifying d_name in place and copies it
instead. This seems to avoid the seg fault.
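A minimal sketch of the copy-instead-of-modify approach (the helper name is
illustrative, not the actual code):
  #include <dirent.h>
  #include <glib.h>

  /* POSIX says d_name must not be used as an lvalue, so duplicate it and
   * modify the copy rather than the dirent in place. */
  static char *
  dup_dent_name (const struct dirent *dent)
  {
    return g_strdup (dent->d_name);
  }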
Closes: #1627
Approved by: jlebon
If there’s a problem while aborting a transaction, store the error but
don’t report it until the end of the function — do a best effort at
clearing the rest of the transaction state first (since most of it
cannot fail).
If cleanup_tmpdir() fails (which, arguably, should not be a
showstopper), this allows a caller to recover and start a new
transaction in future.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1626
Approved by: jlebon
Similar to min-free-space-percent, but it supports specific sizes
(in MB, GB or TB). This also makes min-free-space-percent and -size
mutually exclusive.
min-free-space-percent does not allow fine tuning of the free disk
space that a user might decide to keep. It can translate to a very large
size (e.g. 1% = ~10GB on a 1TB HDD) or a very small one (e.g. 1% = ~330MB on a
32GB system like Endless devices). Hence, it makes sense to introduce a config
option that honors a specific size chosen by the user.
Closes: #1616
Approved by: jlebon
We need to include libglnx.h in places where ostree-autocleanups.h is
included, so that we get backports of G_DEFINE_AUTOPTR_CLEANUP_FUNC and
friends.
Closes: #1615
Approved by: jlebon
This reverts commit f1d9196076.
Since libglnx.h does not get installed, it can't be included in
ostree-autocleanups.h, which is included by ostree.h.
Closes: #1615
Approved by: jlebon
Currently the API that allows P2P operations (e.g. pulling an ostree ref
from a LAN or USB source) is hidden behind the configure flag
--enable-experimental-api. This commit makes the API public and makes
that flag essentially a no-op (leaving it in place in case we want to
use it again in the future). The P2P API has been tested over the last
several months and proven to work.
This means that since we're no longer using the "experimental" feature
flag, P2P builds of Flatpak will fail when using versions of OSTree from
this commit onwards, until Flatpak is patched in the near future. If you
want to build Flatpak < 0.11.8 with P2P enabled and link against OSTree
2018.6, you'll have to patch Flatpak. However, since Flatpak won't yet
have a hard dependency on OSTree 2018.6, it needs a new way to determine
if the P2P API in OSTree is available, so this commit adds a "p2p"
feature flag. This way the feature set is more semantically correct than
if we had continued to use the "experimental" feature flag.
In addition to making the P2P API public, this commit makes the P2P unit
tests run by default, removes the f27-experimental CI instance that's no
longer needed, changes a few man pages to reflect the changes, and
updates the bash completion script to accept the new commands and
options.
Closes: #1596
Approved by: cgwalters
This commit includes libglnx.h in ostree-autocleanups.h, so we get the
g_autoptr backports wherever they're needed. Also, remove the "#include
libglnx.h" lines elsewhere that are no longer needed.
Closes: #1596
Approved by: cgwalters
Use the recently introduced architecture for retrying network requests
on transient failure to do the same for delta superblock requests, now
that they’re queued.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1600
Approved by: jlebon
Just like all the other requests made for delta parts and objects by the
pull code, use a queue for delta superblocks. Currently this doesn’t do
any prioritisation or retries after transient failures, but it could do
in future.
This means that delta superblocks are now subject to the parallel
request limit in the fetcher, which was a problem highlighted here:
https://github.com/ostreedev/ostree/pull/1453#discussion_r168321706.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1600
Approved by: jlebon
Use the same G_IO_ERROR_* values for HTTP status codes in both fetchers.
The libsoup fetcher still handles a few more internal error codes than
the libcurl one; this could be built on in future.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1594
Approved by: jlebon
Various of the counters already have assertions like this; add some more
for total paranoia.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1594
Approved by: jlebon
This allows the retry code in ostree-repo-pull.c to recover from (for
example) timeouts at the libsoup layer in the stack, as well as from the
GSocket layer in the stack.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1594
Approved by: jlebon
Allow network requests to be re-queued if they failed with a transient
error, such as a socket timeout. Retry each request up to a limit
(default: 5), and only then fail the entire pull and propagate the error
to the caller.
Add a new ostree_repo_pull_with_options() option, n-network-retries, to
control the number of retries (including setting it back to the old
default of 0, if the caller wants).
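A minimal caller-side sketch of setting the new option (assuming it is read as
a 32-bit unsigned value; error handling trimmed):
  g_autoptr(GVariantBuilder) builder =
    g_variant_builder_new (G_VARIANT_TYPE ("a{sv}"));
  g_variant_builder_add (builder, "{s@v}", "n-network-retries",
                         g_variant_new_variant (g_variant_new_uint32 (3)));
  g_autoptr(GVariant) options = g_variant_ref_sink (g_variant_builder_end (builder));
  if (!ostree_repo_pull_with_options (repo, "origin", options,
                                      NULL /* progress */, cancellable, error))
    return FALSE;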
Currently, retries are not supported for FetchDeltaSuperData requests,
as they are not queued. Once they are queued, adding support for retries
should be trivial. A FIXME comment has been left for this.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1594
Approved by: jlebon
When building the OstreeBloom code against old versions of glib, we have
to have the libglnx headers included so that it defines
G_DEFINE_AUTOPTR_CLEANUP_FUNC and friends for us.
This is similarly true for test-repo-finder-mount.c which indirectly
includes ostree-autocleanups.h.
Closes: #1605
Approved by: cgwalters
This commit rearranges a few things in ostree-repo-pull.c so that OSTree
will successfully compile with experimental API enabled and without
libsoup, libcurl, or avahi:
./autogen.sh --enable-experimental-api --without-soup --without-curl
--without-avahi
This is accomplished with two sets of changes:
1. Move ostree_repo_resolve_keyring_for_collection() so it can be used
even without libsoup or libcurl.
2. Add stub functions for ostree_repo_find_remotes_async() and
ostree_repo_pull_from_remotes_async(), and their _finish() counterparts,
so they return an error when libsoup or libcurl isn't available.
Closes: #1605
Approved by: cgwalters
This introduces no functional changes, but will make upcoming support
for retrying downloads easier to add.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
This introduces no functional changes, but will make upcoming support
for retrying downloads easier to add.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
This introduces no functional changes, but will make upcoming support
for retrying downloads easier to add.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
Rename from `fdata` to `fetch_data` to clarify things and make it
consistent with other similar functionality in the file.
This introduces no functional changes.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
This introduces no functional changes, but does make the code a little
cleaner.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
This introduces no functional changes; just makes the code a bit shorter
in a few places.
https://gcc.gnu.org/onlinedocs/gcc/Conditionals.html
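For example (variable names made up), the GNU extension lets you write:
  const char *msg = maybe_msg ?: "unknown";   /* same as maybe_msg ? maybe_msg : "unknown",
                                                 but maybe_msg is evaluated only once */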
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
This introduces no functional changes, but will make some upcoming
refactoring a little easier.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
The initial motivation for this is that the "staging" code currently
didn't rewrite the deployment refs, meaning that the staged commit
could be pruned.
Hence first, this new API ensures that deployments also
hold a strong ref to their commit, without relying on the magical
"deployment refs" that we inject. That has always been a weird
artifact of the strict layering separation between OstreeSysroot
and OstreeRepo.
I also plan to change rpm-ostree to start using this API to
hold references to base layers for client-side layering; it also
today generates various refs.
That said, if we still want to support multiple processes
writing to a single repo (as happens on EndlessOS today) we
still need to write refs; perhaps later we could add a concept
of "generators" or something that create refs based on whatever
logic?
Another minor thing this fixes is that we had a printf inside
the library; this propagates the pruned data to the higher level
which can log however it likes.
Closes: #1566
Approved by: jlebon
Prep for reworking how we do sysroot cleanup. We're going to
start doing more lowlevel pruning work there, and I wanted to avoid
duplicating the ref enumeration.
Closes: #1566
Approved by: jlebon
Like the corresponding support for syslinux introduced by commit
c5112c25e4, this one enables writing the devicetree
filename into the uEnv.txt environment file for u-boot.
Since u-boot does not strictly define variable names, 'fdt_file' was chosen
here as it appears to be one of the most frequently adopted names in u-boot
default environments. The outer boot logic should of course comply with this
choice and use $fdt_file as the device tree file name to pass to boot commands.
This was tested on a custom board booting with u-boot.
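As a rough sketch, uEnv.txt gains a line along these lines (the path shown is
made up; only the variable name comes from this change):
  fdt_file=/ostree/poky-0123abcd/devicetree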
Closes: #1590
Approved by: cgwalters
I feel like I'm drowning in a pile of experimental-but-almost-stable
features...
Anyways, since we made the feature opt-in in rpm-ostree in
https://github.com/projectatomic/rpm-ostree/pull/1352
let's mirror that a bit here with an environment variable so people
can play with it more easily.
The tests needed some tweaks; specifically we need to reload the
status fact after making changes. I'm still a bit uncertain
about the Ansible-as-tests.
But we add an upgrade test that uses the new environment variable.
Closes: #1583
Approved by: jlebon
This should give a more insightful error message if the user provides
a UID which is present on multiple keys.
This happens if you have an old key in your keyring which you are not
actively using any more, e.g. because it is too old. You still have
your old keys in your keyring, because you want to read old email
encrypted for that key, though.
The gpgme function used by ostree right now complains if a UID is found
on multiple keys:
https://www.gnupg.org/documentation/manuals/gpgme/Listing-Keys.html#index-gpgme_005fget_005fkey
The API used is too simple for that use case.
Note that it would be nicer if ostree picked the only valid signing key out
of the available keys rather than using the simplistic gpgme_get_key
function. It would be nicer still, of course, if there were such a gpgme
function.
Closes: #1579
Approved by: cgwalters
Let's add a semi-colon between the "bootconfig swap" part and the
"deployment count change" to make it more clear they're separate
statements.
Closes: #1575
Approved by: cgwalters
While we do a `syncfs()` plus `FIFREEZE/THAW` for `/boot`, that
only comes during deployment finalization.
The code here today generally assumes that if the file exists
it's been fully written. So let's do a `fdatasync()` before
we do the `rename()`.
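A minimal sketch of the ordering this enforces (descriptor and file names are
placeholders):
  /* Make sure the copied kernel/initramfs content has hit the disk before
   * the rename makes it visible under its final name. */
  if (fdatasync (tmp_fd) < 0)
    return glnx_throw_errno_prefix (error, "fdatasync");
  if (renameat (boot_dfd, tmpname, boot_dfd, finalname) < 0)
    return glnx_throw_errno_prefix (error, "renameat");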
This just came out of looking through the code while working
on deployment staging. In that scenario there's a much larger
window between when we copy the kernel/initramfs and when we
sync `/boot`.
Closes: #1571
Approved by: jlebon
This is easier to `git grep` etc. versus ad-hoc English. Although
we still have some English for the prepare_transaction/commit which
acquire/release in separate phases.
Closes: #1572
Approved by: jlebon
These are further fixes based on running more of the rpm-ostree
test suite.
When dropping the staged deployment, we do need to do the
"post operations" such as bumping the sysroot mtime, so that
clients know something changed. We also need to regenerate
the deployment refs. And of course do a sysroot reload.
Also, add a "base cleanup" after creating a staged deployment
which also regenerates the refs.
Closes: #1570
Approved by: jlebon
There's no reason to do this. I didn't actually hit this problem,
but it's a corner case that just occurred to me while working on
the code.
I think callers should be adapted to skip trying to use staging
if there's no booted deployment.
Closes: #1568
Approved by: jlebon
This was pointed out in a previous PR review; we don't have
a need for the separate variables. Prep for adding an API for
this.
Closes: #1568
Approved by: jlebon
Doing so can break rpm-ostree, which wants to own the cleanup process
to ensure its baselayer refs are generated.
Further, doing the cleanup at shutdown time adds latency. It's also
going to be generally unnecessary as we expect repo pruning to have
been done when writing the refs.
Closes: #1567
Approved by: jlebon
The code has been sitting around for a while but since I disabled
it by default, I doubt anyone is really using it or relying on it.
This patch turns on locking by default, and also drops the
API, which was only public in the experimental API builds.
Conceptually these are two distinct things, and we
may actually want to split up the patches.
I don't think this will break anyone, but it's hard to say for sure.
It's also going to be hard to find out until we actually release
I suspect...
But anyone who is broken should be able to add `locking=false` into
their repo config. On the flip side Endless has been shipping with
this enabled and it is reported to help.
The reason to drop the APIs: I'm a bit concerned about the interactions over time
between libostree's use of the API and any apps that start using it.
For example, if an app specifies a SHARED lock in their code, then
later internally we decide to temporarily grab an `EXCLUSIVE`, but the
app had a second thread/process that was `EXCLUSIVE` already, and
that process was waiting on the first bit of code, then we could
deadlock. I can't think of a real world situation where this would happen
yet though.
We are likely to in the future have say `fsck` take an external lock,
`checkout` grab a shared one, etc.
Closes: #1555
Approved by: jlebon
Today rpm-ostree has some code to run a "sanitycheck" on a deployment.
I had initially deleted that when adapting it to use the staging code,
but I realized it should work fine; we just won't see the merged
config, but that's OK.
When I readded that code it started crashing because we didn't
actually return the new deployment object. We'll gain some coverage
here as I'll land the code to have rpm-ostree use staging, then bump
the rpm-ostree tests here.
Closes: #1559
Approved by: jlebon
Testing out the staged API with rpm-ostree, ostree-prepare-root.service
in the initramfs was failing. Turned out that was because we didn't
have a `root=` kernel argument. Which was because we didn't have
any kernel arguments at all except `ostree=`.
That in turn was because we weren't loading the bootloader config
from the merge deployment.
The serialized deployment data holds the unique identity of
(osname, checksum, deployserial) - look for the real merge deployment
in our deployment list which has the bootloader arguments we need.
This issue was entirely masked by the `ostree admin deploy` command
which itself explicitly loads the merge deployment's kernel arguments
in every case - it never passes the `NULL` default down. A followup
patch will fix that.
Closes: #1558
Approved by: jlebon
When comparing deployments to determine whether we need a new
bootversion, we should also check whether the commit "version" metadata
is the same. Otherwise, we may end up with a bootconfig whose
`title` includes a version that doesn't match the one from the
deployment checksum.
Closes: https://github.com/projectatomic/rpm-ostree/issues/1343
Closes: #1556
Approved by: cgwalters
This is a normal case when running unit tests in client code
on continuous integration infrastructure. When those tests are
running they will set G_DEBUG=fatal-warnings which will cause
the program to abort if a warning is emitted. Instead, emit
a debug message if the problem was that we couldn't connect to
the daemon.
Closes: #1542
Approved by: jlebon
Followup to: https://github.com/ostreedev/ostree/pull/1503
After starting some more work on this in rpm-ostree, it is
actually simpler if the staged deployment just shows up in the list.
It's effectively opt-in today; down the line we may make it the default,
but I worry about breaking things that e.g. assume they can mutate
the deployment before rebooting and have `/etc` already merged.
There's not that many things in libostree that iterate over the deployment
list. The biggest change here is around the
`ostree_sysroot_write_deployments_with_options` API. I initially
tried hard to support a use case like "push a rollback" while retaining
the staged deployment, but everything gets very messy because that
function truly is operating on the bootloader list.
For now what I settled on is to just discard the staged deployment;
down the line we can enhance things.
Where we then have some new gymnastics is around implementing
the finalization; we need to go to some effort to pull the staged
deployment out of the list and mark it as unstaged, and then pass
it down to `write_deployments()`.
Closes: #1539
Approved by: jlebon
This is a version of ostree_repo_traverse_commit_union that also
remembers where the objects came from, by recording the parent
relationships in a hashtable. This can be used to later find which
commits each object was from, which we want to use in fsck.
Closes: #1533
Approved by: cgwalters
reintroduce the feature that was reverted with commit:
28c7bc6d0e
Differently than the original implementation, now we don't attempt any
test for reflinks support on the parent repository, since the test
requires write access to the repository.
Additionally, also check that the two repositories are on the same
device before attempting any reflink.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Closes: #1525
Approved by: cgwalters
In prep for staging work, where we'll need to load the origin
for the staged deployment too.
The function was previously trying to avoid operating on an
instantiated deployment, but the data we need is in the deployment
object at that point.
Closes: #1538
Approved by: jlebon
Prep for handling staged deployments better; if we're not passed
the staged one back, then we just want to delete it but not
touch the bootloader config.
Closes: #1538
Approved by: jlebon
The reason we were returning a hashtable is a bit lost to history,
there's no reason to do so now anyways. Also port to declare-and-initialize
style and add more comments.
Closes: #1538
Approved by: jlebon
Add API to write a deployment state to `/run/ostree/staged-deployment`,
along with a systemd service which runs at shutdown time.
This is a big change to the ostree model for hosts,
but it closes a longstanding set of bugs; many, many people have
hit the "losing changes in /etc" problem. It also avoids
the other problem of racing with programs that modify `/etc`
such as LVM backups:
https://bugzilla.redhat.com/show_bug.cgi?id=1365297
We need this in particular to go to a full-on model for
automatically updated host systems where (like a dual-partition model)
everything is fully prepared and the reboot can be taken
asynchronously.
Closes: https://github.com/ostreedev/ostree/issues/545
Closes: #1503
Approved by: jlebon
A newly created archive-mode repository won't have an uncompressed-objects-cache
directory, and uncompressed_objects_dir is -1 to flag that. Since libglnx treats
-1 as meaning "cwd", the current directory was scanned as if it were an objects
directory, producing unexpected results, especially if there were any two-letter
files/subdirs in the current directory.
Closes: #1537
Approved by: jlebon
Currently OstreeRepoFinderResult, a data structure used by pull code
that supports P2P operations, has a hash table mapping refs to checksums
but doesn't include timestamp information. This means that clients have
no way of knowing just from the OstreeRepoFinderResult information if a
commit being offered by a peer remote is an update or downgrade until
they start pulling it. The client could check the summary or the commit
metadata for the timestamps, but this requires adding the temporary
remotes to the repo config, and ostree is already checking timestamps
before returning the results, so I think it makes more sense for them to
be returned rather than leaving it to the client. This limitation is
especially important for offline computers, because for online computers
the latest commit available from any remote is the latest commit,
period.
This commit adds a "ref_to_timestamp" hash table to
OstreeRepoFinderResult that is symmetric to "ref_to_checksum" in that it
shares the same keys. This is an API break, but it's part of the
experimental API, and none of the current users of that (flatpak,
eos-updater, and gnome-software) are affected. See the documentation for
more details on "ref_to_timestamp". One thing to note is the data
structure currently gets initialized in find_remotes_cb(), so only users
of ostree_repo_find_remotes_async() will get them, not users of, say,
ostree_repo_finder_resolve_all_async(). This is because the individual
OstreeRepoFinder implementations don't currently access the timestamps
(but I think this could be changed in the future if there's a need).
This commit will allow P2P support to be added to
flatpak_installation_list_installed_refs_for_update, which will allow
GNOME Software to update apps from USB drives while offline (it's
already possible online).
Closes: #1518
Approved by: cgwalters
In case of some kind of race or other weirdness we might be getting
non-matching versions of summary.sig and summary, where summary.sig
is the latest version. Currently we're saving them to the cache
directly after downloading them successfully, but they will then fail
to gpg validate. Then on the next run we'll keep using the cached files
even if they are incorrect, until summary.sig changes upstream.
This changes the order so that we verify the signatures before saving
to the cache, thus ensuring that we don't end up in a stuck state.
Fixes https://github.com/ostreedev/ostree/issues/1523
Closes: #1529
Approved by: cgwalters
In ostree_repo_remote_fetch_summary_with_options(), if no summary is
found on the server and summary verification is enabled, the error
message implies that it's the summary signature that's missing, which is
misleading. This commit adds a more specific error message for the case
of a missing summary, which has the side effect of explicitly checking
for the case that signatures != NULL && summary == NULL after
repo_remote_fetch_summary(), even though that should never happen.
One effect of this is that if you run "flatpak remote-add" with an
incorrect URL you get a more helpful error message, and similarly for
other flatpak operations and other users of ostree.
Closes: #1522
Approved by: cgwalters
In libostree, the phrase "commit metadata" has two meanings: one is the
first dictionary in a commit GVariant that stores metadata such as ref
bindings, and the other is the commit metadata in the summary file,
which stores the commit size, checksum, and timestamp. In
find_remotes_process_refs(), the entire commit GVariant was being
referred to as commit metadata, so this commit changes the variable
name and a comment to make things more consistent.
Closes: #1528
Approved by: cgwalters
In _try_clone_from_payload_link, don't try to do the clone in the
parent repo, because we don't want to modify that. Parent repos are
typically used when you want a shared, immutable base.
For example in flatpak, the parent repo is the system repo which you
don't have write access to, so any modification to it will fail with
EACCES, making it impossible to install via the system helper.
Closes: #1524
Approved by: cgwalters
ostree_repo_pull_from_remotes_async() passes along some options to
ostree_repo_pull_with_options(), so document them.
Closes: #1519
Approved by: cgwalters
The main blocker for doing this before was the `goto out` handling
for remounting `/boot`. Handle that by factoring out the bits that
require it to a helper function, and do the C/GError equivalent of
"try/finally".
Not prep for anything right now, just decided to do this since I had the file
open.
Closes: #1515
Approved by: jlebon
For staged deploy, we want to pay the cost of creating copies from
`/usr/etc` → `/etc` at stage time, since it can be expensive. (We
want to minimize time spent during shutdown).
Split it up into two functions; the logic is also simply clearer.
Closes: #1514
Approved by: jlebon
Prep for deployment staging. We had the code to handle "explicit kargs" in one
place, but the "use merge deployment" karg bits were mixed in with the "/etc merge"
logic. Those are separate things, and it's better to have karg handling in one
place.
Closes: #1514
Approved by: jlebon
A quick turnaround to include one PR: https://github.com/ostreedev/ostree/pull/1508
"switchroot: Ensure /run/ostree-booted is created even without initramfs"
This fixes ostree when booting without an initramfs. Thanks to @akiernan for the
bug report and helping review the fix! I'm working on enhancing
the test suite, which will help in adding some coverage here.
Also for this release I'm going to avoid adding a "stub" symbol section
to the `-released.sym` file; I don't believe it's necessary.
Closes: #1512
Approved by: jlebon
It's been over a month since 2018.2; we have a few features and various fixes,
and the "stage" work pending which is pretty invasive. Time for a new release!
Closes: #1506
Approved by: jlebon
Ensures it's labeled consistently. Prep for staged deployments which reworks the
logic around when the origin file is written.
Closes: #1505
Approved by: jlebon
Pulling some of this out of stage deploy work. It's generally better as it's
easier to change functions to have multiple callers.
Closes: #1505
Approved by: jlebon
We do already have `http-headers`, which potentially could be used to
allow clients to completely override the field, but it seems like the
more common use case is simply to append.
Closes: #1496
Approved by: cgwalters
Allow users to pass `<remote>:` to list all refs we have locally
belonging to `<remote>`. Also (re-)allow the similar `<remote>:.` syntax
for backwards compatibility with flatpak.
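For example:
  ostree refs origin:     # list all local refs belonging to the "origin" remote
  ostree refs origin:.    # equivalent older spelling kept for flatpak compatibility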
Closes: #1500
Approved by: cgwalters
I was looking at this code in prep for "staging" deployments,
and there are several cleanups to be made here. The first
thing I noticed is that we look for the `ostree=` kernel argument,
but the presence of that should be exactly equivalent to having
`/run/ostree-booted` exist. We just added a member variable for
that, so let's make use of it.
Related to this, we were erroring out if we had the karg but
didn't find a deployment. But this can happen if e.g. one is
using `ostree admin --sysroot` from an ostree-booted system! It's
actually a bit surprising no one has reported this so far; I guess
in the end people are either using non-ostree systems or running
from containers.
Let's add a member variable `root_is_sysroot` that we can use
to determine if we're looking at `/`. Then, our more precise
"should find a booted deployment" state is when both `ostree_booted`
and `root_is_sysroot` are TRUE.
Next, rather than walking all of the deployments after parsing,
we can inline the `fstatat()` while parsing. The mildly ugly
thing about this is assigning to the sysroot member variable while
parsing, but I will likely clean that up later, just wanted to avoid
rewriting everything in one go.
Closes: #1497
Approved by: jlebon
In ostree_repo_abort_transaction, if we pass a cancellable and it gets
canceled, then the function may fail to fully clean up the transaction
state. This was happening e.g. when the ostree_repo_pull_with_options
call got cancelled.
To fix this, as suggested by Colin Walters, we set the passed
cancellable to NULL so that it is ignored.
https://github.com/ostreedev/ostree/issues/1491
Closes: #1492
Approved by: jlebon
Ensure that the metadata object is built up with the signatures from all keys
passed to ostree_repo_add_gpg_signature_summary(). Previously only the signature
from the last key would end up in the metadata.
Closes: #1488
Closes: #1489
Approved by: jlebon
When a new object is added to the repository, create a
$PAYLOAD-SHA256.payload-link symlink file as well. The target of the
symlink is the checksum of the object that was added to the repository.
Whenever we add a new object file, in addition to checking whether a file
with the same checksum is already present, we also check whether an object
with the same payload is in the repository.
If a file with the same payload is already present in the repository, we
copy it with `glnx_regfile_copy_bytes`, which internally attempts to
create a reflink (ioctl (..., FICLONE, ...)) to the target file if the
file system supports it. This makes it possible to have objects that share the
payload but have different inodes and xattrs.
By default the payload-link-threshold value is G_MAXUINT64, which disables
the feature.
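To opt in, lower the threshold in the repo config (key name per this commit;
placing it under [core] is an assumption):
  [core]
  payload-link-threshold=0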
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Closes: #1443
Approved by: cgwalters
It will be used by successive commits to keep track of the payload
checksum for objects stored in the repository.
The goal is that files having the same payload but different xattrs can
take advantage of reflinks where supported.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Closes: #1443
Approved by: cgwalters
Add some "function global" prefixing in line with what we do in
other places now, and drop the "manual filename" prefixing that
is no longer necessary since
23f7df1500
Closes: https://github.com/ostreedev/ostree/issues/1467
Closes: #1485
Approved by: jlebon
The _ostree_repo_get_remote() and _ostree_repo_get_remote_inherited()
methods transfer ownership of the returned OstreeRemote to the caller,
so this commit fixes a few call sites that weren't properly freeing it.
Closes: #1478
Approved by: cgwalters
I think this got changed in a refactor. We definitely want
to total up the amount of space that *would* be freed even
with `--no-prune` AKA `OSTREE_REPO_PRUNE_FLAGS_NO_PRUNE`.
It's actually a bit terrifying this is apparently the first test case for
the `--no-prune` option...
Closes: https://github.com/ostreedev/ostree/issues/1480
Closes: #1483
Approved by: jlebon
This updates the gtk-doc comment for OstreeRepoFinderMount to match the
correct flatpak repo path, which was fixed in commit 6db6268df.
Closes: #1473
Approved by: cgwalters
The "ref_original_commits" hash table uses string values, not variants,
so fix the free function passed to g_hash_table_new_full (). Since
g_variant_unref isn't NULL safe, this prevents an assertion failure when
a NULL value is inserted.
Dan Nicholson suggested this patch; I'm just submitting it because he's
busy.
Fixes https://github.com/ostreedev/ostree/issues/1433
Closes: #1474
Approved by: cgwalters
If we're booted into a deployment, then any queries for the pending
merge deployment of a non-booted OS will fail due to all of them being
considered rollbacks.
Fix this by filtering by `osname` *before* determining if we've crossed
the booted deployment yet.
Closes: #1472
Approved by: cgwalters
OstreeRepoFinderMount checks mounts for a few well-known directories
such as "ostree/repo" and ".ostree/repo" to try to find remotes. One of
the hard-coded directories is "var/lib/flatpak" but that's the flatpak
directory, not the ostree repo used by flatpak, which is at
"var/lib/flatpak/repo". So this commit changes the path so the repo can
be found.
For recent versions of Endless, flatpak uses /ostree/repo as its
repository, so this commit won't make a difference there. But it may
help on other operating systems.
Closes: #1471
Approved by: cgwalters
Example user story: Jane rebases her OS to a new major version N, and wants to
keep N-1 around for a while, even after a few upgrades, so she can easily roll
back. I plan to add `rpm-ostree rebase --pin` to opt in to this, for example.
Builds on the new `libostree-transient` group to store pinning state there.
Closes: https://github.com/ostreedev/ostree/issues/1460
Closes: #1464
Approved by: jlebon
The `origin/unlocked` and `origin/override-commit` keys are examples of state
that's really transient; we don't want to maintain them across upgrades. Right
now there are bits for this in both `ostree admin upgrade` as well as in
rpm-ostree.
This new API will slightly clean up both cases, but it's really prep for adding
a concept of deployment "pinning" that will live in the new
`libostree-transient` group.
Closes: #1464
Approved by: jlebon
There are enough fixes here, and there are some potentially larger patches
incoming like wmanley's checkout speedups and the payload link that will need
soak time in master.
Closes: #1455
Approved by: jlebon
Since f4d1334e19 the primary pull code maintains a
maximum queue. In that commit message I said `Note that I kept an assertion.`.
But I think this is wrong since while it covers a lot of the normal cases, if
one is e.g. trying to fetch a ton of refs, the primary pull code doesn't yet
queue those. While it'd be nice to queue those, it isn't worth carrying
extra assertions in the backends that can still trigger.
Closes: https://github.com/ostreedev/ostree/issues/1451
Closes: #1453
Approved by: dbnicholson
There are a few use cases for knowing whether a commit has identical
content to another commit. Some people want to do a "promotion workflow",
where the content of a commit on a testing branch is then "promoted"
to a production branch with `ostree commit --tree=ref`.
Another use case I just hit in rpm-ostree deals with
[jigdo](https://github.com/projectatomic/rpm-ostree/issues/1081) where we're
importing RPMs on both the client and server, and will be using the
content checksum, since the client/server cases inject different metadata
into the commit object.
Closes: https://github.com/ostreedev/ostree/issues/1315
Closes: #1449
Approved by: jlebon
For P2P pulls ostree adds temporary remotes and removes them in
find_remotes_cb(). However, if an OstreeRepoFinderResult gets freed
during the course of that function, the OstreeRemote in the result is
freed but a pointer to it remains in the remotes_to_remove array. This
means that when _ostree_repo_remove_remote() gets called on it at the
end of the function it will fail. In my case the resulting error was
"OSTree-CRITICAL **: _ostree_repo_remove_remote: assertion 'remote->name
!= NULL' failed" but I think it could also seg fault.
This commit adds a reference to the remote so it can be properly removed
when we're finished with it.
Closes: #1450
Approved by: giuseppe
Having the `uncompressed-object-cache` directory in `archive` repos by default
is clutter; the functionality should be considered deprecated.
Now we only create the directory if we're doing a checkout with the cache
enabled.
Closes: #1446
Approved by: jlebon
This is analogous to the filtering support for the commit API: we allow
library users to skip over checking out specific files. This is useful
in some tricky situations where we *know* that the files to be checked
out will conflict with existing files in subtle ways.
One such example is in rpm-ostree support for multilib. There, we want
to allow checking out a package onto an existing tree, but skipping over
files that are not coloured to our preferred value (e.g. not overwriting
an i686 version of `ldconfig` if we already have the `x86_64` version).
See https://github.com/projectatomic/rpm-ostree/pull/1227 for details.
Closes: #1441
Approved by: cgwalters
When we changed around the kernel location in rpm-ostree, we
started installing the kernel into `/boot` as `modules_object_t`,
and the current policy didn't permit that. For maximum compatibility,
relabel installed kernel/initramfs/dtb as `boot_t`.
https://bugzilla.redhat.com/show_bug.cgi?id=1536991
Closes: #1444
Approved by: jlebon
SPDX License List is a list of (common) open source
licenses that can be referred to by a “short identifier”.
It has several advantages compared to the common "license header texts"
usually found in source files.
Some of the advantages:
* It is precise; there is no ambiguity due to variations in license header
text
* It is language neutral
* It is easy to machine process
* It is concise
* It is simple and can be used without much cost in interpreted
environments like JavaScript, etc.
* An SPDX license identifier is immutable.
* It provides simple guidance for developers who want to make sure the
license for their code is respected
See http://spdx.org for further reading.
Signed-off-by: Marcus Folkesson <marcus.folkesson@gmail.com>
Closes: #1439
Approved by: cgwalters
The old documentation had outdated and incomplete annotations, and
didn’t make it very clear that out_remote could legitimately return
NULL.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1437
Approved by: cgwalters
Currently users of the find_remotes_async()/pull_from_remotes_async()
functions have no way to specify a commit hash to use instead of the
latest one available. This commit implements an "override-commit-ids"
option analogous to the one used by ostree_repo_pull_with_options().
It's accomplished by returning OstreeRepoFinderResult objects pointing
to the given commit checksum(s) regardless of which ones were available
from the remotes, but in the future this implementation could be
improved to take into account the commits advertised by the remotes.
One effect of this is that flatpak will have the ability to downgrade
apps that use collection IDs
(https://github.com/flatpak/flatpak/issues/1309).
Closes: #1425
Approved by: pwithnall
Currently we were parsing `opt_filename` twice...I dug through
the history a bit and it looks like it may have been an accident
from refactoring.
What we're fixing here concretely is that using relative subdirectories
like `--filename somesubdir/foo` broke because we were incorrectly
passing the `somesubdir/` again.
Closes: #1423
Closes: #1427
Approved by: jlebon
Prep for further work here. This diff is a bit noisy for the delta bits because
the indentation was off originally as well.
Closes: #1424
Approved by: jlebon
Much like the (optional) initramfs at
`/usr/lib/ostree-boot/initramfs-<SHA256>` or
`/usr/lib/modules/$kver/initramfs` you can now optionally include a
flattened devicetree (.dtb) file alongside the kernel at
`/usr/lib/ostree-boot/devicetree-<SHA256>` or
`/usr/lib/modules/$kver/devicetree`.
This is useful for embedded ARM systems which need the devicetree file
loaded by the bootloader for the kernel to discover and initialise
hardware. See https://en.wikipedia.org/wiki/Device_tree for more
information.
This patch was mostly produced by copy-pasting code for initramfs handling
and renaming `s/initramfs/devicetree/g`. It's not beautiful, but it is
fairly straightforward.
It may be useful to extend device-tree support in a number of ways in the
future. Device trees depend on many details of the hardware they
support. This makes them unlike kernels, which may support many different
hardware variants as long as the instruction set matches. This means that
an ostree tree created with a device-tree in this manner will only boot on
a single model of hardware. This is sufficient for my purposes, but may
not be for others'.
I've tested this on my NVidia Tegra TK1 device which has u-boot running
in syslinux-compatible mode.
Closes: #1411
Approved by: cgwalters
The bootloader spec says:
> `devicetree` refers to the binary device tree to use when executing the
> kernel. This also shall be a path relative to the `$BOOT` directory. This
> key is optional. Example:
> `6a9857a393724b7a981ebb5b8495b9ea/3.8.0-2.fc19.armv7hl/tegra20-paz00.dtb`
This is necessary for booting my NVidia Tegra TK1 device. It uses u-boot
with syslinux compatibility. In the syslinux files that come with the
device this is called `FDT`, but u-boot treats `FDT` and `DEVICETREE` as
synonyms.
See also: [f43c401 in u-boot].
[f43c401 in u-boot]: http://git.denx.de/?p=u-boot.git;a=commit;h=f43c401b72bb0db43ab0b55c4a79e1f4889d3aa2
Closes: #1411
Approved by: cgwalters
In particular I'd like to get the `--copyup` changes out for an rpm-ostree
release that will use them. But there are other good changes here, and let's
keep up a regular release train 🚄 in general.
Closes: #1413
Approved by: jlebon
Previously we were doing e.g. `ot_util_filename_validate()` specifically inline
in dirtree objects, but only *after* writing them into the staging directory (by
default). In (non-default) cases such as not using a transaction, such an object
could be written directly into the repo.
A notable gap here is that `pull-local --untrusted` was *not* doing
this verification, just checksums. We harden that (and also the
static delta writing path, really *everything* that calls
`ostree_repo_write_metadata()`) to also do "structure" validation,
which includes path traversal checks. Basically, let's try hard
to avoid having badly structured objects even in the repo.
One thing that sucks in this patch is that we need to allocate a "bounce buffer"
for metadata in the static delta path, because GVariant imposes alignment
requirements, which I screwed up and didn't fulfill when designing deltas. It
actually didn't matter before because we weren't parsing them, but now we are.
In theory we could check alignment but ...eh, not worth it, at least not until
we change the delta compiler to emit aligned metadata which actually may be
quite tricky. (Big picture I doubt this really matters much right now
but I'm not going to pull out a profiler yet for this)
The pull test was extended to check we didn't even write a dirtree
with path traversal into the staging directory.
There's a bit of code motion in extracting
`_ostree_validate_structureof_metadata()` from `fsck_metadata_object()`.
Then `_ostree_verify_metadata_object()` builds on that to do checksum
verification too.
Closes: #1412
Approved by: jlebon
While we do protect against path traversal during pull, let's also validate
during checkout; it's a cheap operation and provides good last-mile protection.
Closes: #1412
Approved by: jlebon
Previously, when initramfs-* was not found in a deployment's
boot directory, it was assumed that the rootfs was prepared for
ostree booting by a kernel patch.
With this patch, the behaviour changes: if initramfs-*
is not found, assume that the system is using a static
ostree-prepare-root as the init process. Booting without an initramfs
is a common use case on embedded systems. This approach is also more
convenient than having to patch the kernel.
Closes: #1401
Approved by: cgwalters
Always include ostree-repo-pull-private.h to get rid of the following
build error when HAVE_LIBCURL_OR_LIBSOUP is not defined:
src/libostree/ostree-repo-pull.c:1493:1: error: no previous prototype
for '_ostree_repo_verify_bindings' [-Werror=missing-prototypes]
Signed-off-by: Marcus Folkesson <marcus.folkesson@gmail.com>
Closes: #1389
Approved by: cgwalters
Let's do a new release with the locking preview, the http2 disable options and
other misc bugfixes to close out the year.
Closes: #1386
Approved by: jlebon
It'd all be really nice if there were some sort of `O_TMPFILE` for symlinks, but
anyway the way we were doing a generic "make temp file then rename" actually
defeats some of the point of `O_TMPFILE`. It's now fully safe to do "copy to
self", so let's do that for regfiles.
Closes: #1378
Approved by: jlebon
For the [rpm-ostree jigdo ♲📦](https://github.com/projectatomic/rpm-ostree/issues/1081) work.
We're basically doing "pull" via a non-libostree mechanism, and this
should be fully supported. As I mentioned earlier we should try to
have `ostree-repo-pull.c` only use public APIs; this gets us closer
to that.
Closes: #1376
Approved by: jlebon
I want some time to play with this more with different callers and work through
test scenarios. Let's disable the locking by default for now, but make it easy
to enable.
Closes: #1375
Approved by: jlebon
Try and clarify what happens with the prefixes, and that they always
return refspecs.
I’m still not 100% sure this is right.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1347
Approved by: cgwalters
It will be used by the fsck utility in future. We could expose it
publicly in future too, if needed.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1347
Approved by: cgwalters
In the non-`CONSUME` path for regfiles (which happens currently for
`bare-user`), we go to a lot of contortions to make an "object stream",
only to immediately parse it again.
Fixing this will also enable the `G_IS_FILE_DESCRIPTOR_BASED()` fast path in
commit, since the input stream will actually reference the file descriptor and
not be an `_OstreeChainInputStream`.
There's a slight concern here in that we're no longer checksumming *literally*
the object stream passed in for the stream case, but as I mention in the comment,
the data should be the same, and if somehow it's not, we're not adding risk,
since the checksum is still covering the data we actually care about.
Prep for further changes to break up the `write_content_object()` path into
separate paths for archive, as well as regfile vs symlink in non-archive.
Closes: #1371
Approved by: jlebon
A while ago I did `truncate -s 0 /path/to/repo/00/123.commit`, and expected a
checksum error, but I actually got a validation error due to us loading the
commit into a variant and trying to parse out the parent checksum, etc.
I first started by changing the `load_and_fsck_one_object()` function to
checksum before loading, but the problem is that we do a traverse of all objects
first. Fixing this is going to require an `OSTREE_REPO_COMMIT_TRAVERSE_FLAG_FSCK`
or something.
In the meantime at least though, let's add a public API to fsck a single object
which *does* checksum cleanly before parsing the object, and change the `fsck`
command to use it.
We then change the fsck binary to do this while iterating over the refs
and finding the commit object. This way we'll at least get a checksum
first for commit objects, even if not dirtree/dirmeta.
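For illustration, a minimal sketch of calling the new single-object fsck API (assuming it landed as `ostree_repo_fsck_object()`; the surrounding variables are assumed):
```
/* Sketch: checksum-verify a single commit object before parsing it;
 * assumes the new API landed as ostree_repo_fsck_object(). */
if (!ostree_repo_fsck_object (repo, OSTREE_OBJECT_TYPE_COMMIT, commit_checksum,
                              cancellable, error))
  return glnx_prefix_error (error, "fsck commit %s", commit_checksum);
```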
Closes: #1364
Approved by: jlebon
This commit fixes an infinite loop that happens if you try to list the
remotes of a repo that has a parent repo set. It also adds a unit test
to ensure the right behavior, which is that both the child remotes and
parent remotes are listed.
Closes: #1366
Approved by: cgwalters
One major thing we can do to speed up local commits is multithreading. In
preparation for that, split up the recursion function so that the subdirectory
case is separate from the content (regfile/symlink) case. Then for non-subdirs,
we can easily peel off worker threads and gather the final checksums and update
the mtree from the main thread.
The diff here looks large but it's pretty straightforward; amazingly this change
compiled the very first time I tried it!
Closes: #1365
Approved by: jlebon
This seems to work around
https://github.com/ostreedev/ostree/issues/1362
Though I'm not entirely sure why yet. But at least with this it'll be easier for
people to work around things locally.
Closes: #1368
Approved by: jlebon
Add exclusive repository locking to all the pruning entry points. This
ensures that objects and deltas will not be removed while another
process is writing to the repository.
Closes: #1343
Approved by: cgwalters
Define an auto cleanup handler for use with repo locking. This is based
on the existing auto transaction cleanup. A wrapper for
ostree_repo_lock_push() is added with it. The intended usage is like so:
```
g_autoptr(OstreeRepoAutoLock) lock = NULL;
lock = ostree_repo_auto_lock_push (repo, lock_type, cancellable, error);
if (!lock)
  return FALSE;
```
The functions and type are marked to be skipped by introspection since I
can't see them being usable from bindings.
Closes: #1343
Approved by: cgwalters
Currently ostree has no method of guarding against concurrent pruning.
When there are multiple repo writers, it's possible to have a pull or
commit race against a prune and end up with missing objects.
This adds a file based repo locking mechanism. The intention is to take
a shared lock when writing objects and an exclusive lock when deleting
them. In order to make use of the locking throughout the library in a
fine grained fashion, the lock acts recursively with a stack of lock
states. If the lock becomes exclusive, it will stay in that state until
the stack is unwound past the initial exclusive push. The file locking
is similar to GLnxLockFile in that it uses open file descriptor locks
but falls back to flock when needed.
The lock also attempts to be thread safe by storing the lock state in
thread local storage with GPrivate. This means that each thread will
have an independent lock for each repository it opens. There are some
drawbacks to that, but it seemed impossible to manage the lock state
coherently in the face of multithreaded access.
The API is a push/pop interface in accordance with the recursive nature
of the locking. The push interface uses an enum that's translated to
LOCK_SH or LOCK_EX as needed. Both interfaces use an internal timeout
field to decide whether to manage the lock in a blocking or non-blocking
fashion. The intention is to allow ostree applications as well as
administrators to control this timeout. For now, the default is a 30
second timeout.
Note that the timeout is handled synchronously in thread since the lock
is maintained in thread local storage. I.e., the thread that acquires
the lock needs to be the same thread that runs the operation. There may
be a way to offer an asynchronous version, but it's not clear exactly
how that would work since it would likely involve a separate thread that
invokes a callback when the locking operation completes.
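As a rough sketch of the push/pop flow described above (the `OSTREE_REPO_LOCK_SHARED` value and the exact pop signature are assumptions about how this landed):
```
/* Sketch of the push/pop flow; enum value and pop signature are assumed. */
if (!ostree_repo_lock_push (repo, OSTREE_REPO_LOCK_SHARED, cancellable, error))
  return FALSE;

/* ... write objects while holding the shared lock ... */

if (!ostree_repo_lock_pop (repo, cancellable, error))
  return FALSE;
```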
https://bugzilla.gnome.org/show_bug.cgi?id=759442
Closes: #1343
Approved by: cgwalters
I was getting a bare `error: Creating temp file: No such file or directory` when
debugging `test-concurrency.py`; with this I get
`error: Writing content object: Creating temp file: No such file or directory`
which helps me pin it down.
Closes: #1343
Approved by: cgwalters
For rpm-ostree I'd like to do importing in parallel with threads; the code is
*almost* ready for that except today it calls
`ostree_repo_transaction_set_ref()`.
Looking at the code, there's really a "transaction" struct here,
not just stats. Let's lift that struct out, and move the refs
into it under the existing lock.
Clarify the documentation around multithreading for various functions.
Closes: #1358
Approved by: jlebon
Time to cut a new release, we've got the libcurl cleanup ordering patch which
several people have hit, along with safe early fixes for tmpdir cleanup. Let's
try to land the locking PR early next cycle.
Closes: #1359
Approved by: jlebon
I was seeing the `Writing OSTree commit...` phase of rpm-ostree
being very slow lately. This turns out to be more fallout from
https://github.com/ostreedev/ostree/pull/1170
AKA commit: 8fe4536
Loading the xattrs is slow on my system (F27AW, XFS+LVM, NVMe). I haven't fully
traced through why, but AIUI at least on XFS the xattrs are often stored outside
of the inode so it's a little bit like doing an `open()+read()`. Plus there's
the LSM overhead, etc.
The thing is that for rpm-ostree's package layering use case, we
basically always want to treat the on-disk state as canonical. (There's
a subtle case here if one does overrides for something that contains
policy but we'll fix that).
Anyways, so we're in a state now where we do the slow but correct thing by
default, which seems sane. But let's allow the app to opt-in to telling us
"really trust devino". The difference between a `stat()` + hash table lookup
versus the full xattr load on my test case of `rpm-ostree install
./tree-1.7.0-10.fc27.x86_64.rpm` is absolutely dramatic; consistently on the
order of 10s without this support, and <1s with (800ms).
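A hedged sketch of opting in (assuming this landed as a commit-modifier flag along the lines of `OSTREE_REPO_COMMIT_MODIFIER_FLAGS_DEVINO_CANONICAL`):
```
/* Sketch: opt in to "really trust devino" so commit can skip re-reading
 * xattrs; the DEVINO_CANONICAL flag name is an assumption here. */
g_autoptr(OstreeRepoDevInoCache) devino_cache = ostree_repo_devino_cache_new ();
g_autoptr(OstreeRepoCommitModifier) modifier =
  ostree_repo_commit_modifier_new (OSTREE_REPO_COMMIT_MODIFIER_FLAGS_DEVINO_CANONICAL,
                                   NULL, NULL, NULL);
ostree_repo_commit_modifier_set_devino_cache (modifier, devino_cache);
```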
Closes: #1357
Approved by: jlebon
This squashes the last race condition I was actively hitting while running
`test-concurrency.py` in a loop. The race is when process A finds a tmpdir to
reuse, and goes to lock it. Meanwhile process B deletes it and unlocks the lock.
Process A then succeeds at grabbing a lock, but the tmpdir is deleted.
Closes: #1352
Approved by: dbnicholson
Previously we'd delete the tmpdir in `rename_pending_loose_objects()`
but do the unlock inside `ostree_repo_commit_transaction()`. Move
them into the same place in the latter function for consistency.
Doesn't fix anything, just a cleanup while reading the code and
working on `test-concurrency.py`.
Closes: #1352
Approved by: dbnicholson
Prep for future work here; let's cleanly separate the path for cleaning up the
txn staging directories from the code that cleans up "other stuff". Currently
only the former case uses the `GLnxLockFile` etc.
Closes: #1352
Approved by: dbnicholson
This closes a race condition I was seeing with `test-concurrency.py`. If we
don't have `O_TMPFILE` (or for symlinks) we'll create temporary files;
previously these would be subject to the date-based pruning because we set the
timestamp to 0 for objects.
Having our temporary files also in the txn staging dir ensures that they're
covered by the locking we do for that directory, and it's also generally cleaner
since the lifecycle of all the temporary data for a txn is in one place.
Closes: #1352
Approved by: dbnicholson
This lowers into the commit core what the static delta code
was doing, and improves the API.
The bigger picture issue is that for writing large files, our current "pull" API
where the caller provides a `GInputStream` is very awkward in some scenarios.
For example, we have a whole "libarchive input stream" that is a ~200 line
GObject that boils down to wrapping `archive_read_data()`.
This came more to a head when I was working on rpm-ostree jigdo since I had to
copy that object.
One step we can take after this is to further split `write_content_object()`
into a "write symlink or archive object" versus "write bare content object"
(it already has a mess of conditionals) and teach the latter case to call
this.
The eventual goal here is to make this API public.
Closes: #1355
Approved by: jlebon
For situations where fsync is disabled, there's basically
no reason to do the whole "staging directory" dance. Just
write directly into the repo.
Today I use `fsync=false` for my build/cache repos.
I briefly considered not allocating a tmpdir at all
in this case, but we actually do want the txn tmpdir
for the non-`O_TMPFILE` case.
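For reference, a sketch of how a caller would end up on this path (repo, cancellable, and error are assumed to be in scope):
```
/* Sketch: build/cache repo where crash-safety doesn't matter; with fsync
 * disabled the transaction can now write directly into the repo. */
ostree_repo_set_disable_fsync (repo, TRUE);
if (!ostree_repo_prepare_transaction (repo, NULL, cancellable, error))
  return FALSE;
```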
Part of https://github.com/ostreedev/ostree/issues/1184
Closes: #1354
Approved by: giuseppe
When using dynamic remotes (LAN and USB), we cannot use their name with
the common remote related ops (ostree_repo_remote_...) because ostree
doesn't keep this type of remotes in its internal hash table.
Unfortunately this means that we cannot access the URL of those remotes
either (in order to e.g. set the right URL for those remotes in
Flatpak).
Since the URL is actually stored in a key file that belongs to the
OstreeRemote, we can simply allow users to access it through a
getter.
So this patch adds a method that returns the URL directly from
the OstreeRemote without having to go through the OstreeRepo.
The test-repo-finder-config is also updated by this patch to check if
the URL is correct.
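A sketch of the intended usage (assuming the getter returns a newly-allocated string and `remote` comes from e.g. a repo-finder result):
```
/* Sketch: read the URL straight off an OstreeRemote, without going
 * through the OstreeRepo remote APIs. */
g_autofree char *url = ostree_remote_get_url (remote);
g_print ("remote %s -> %s\n", ostree_remote_get_name (remote), url);
```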
Closes: #1353
Approved by: cgwalters
They don't play nicely currently with HTTP2 where we may
have lots of requests queued.
https://github.com/ostreedev/ostree/issues/878#issuecomment-347228854
In practice anyways I think issues here are better solved on a higher level -
e.g. apps today can use an overall timeout on pulls and if they exceed the limit
set the cancellable.
Closes: #1349
Approved by: jlebon
If a newly allocated tmpdir can't be locked, set initialized to FALSE so
that glnx_tmpdir_cleanup doesn't delete it when new_tmpdir goes out of
scope.
Closes: #1346
Approved by: cgwalters
Another tmpdir user may have deleted an existing tmpdir between the time
the current user called readdir and tried to open it.
Closes: #1346
Approved by: cgwalters
By default, unless it’s const, an (out) GHashTable will be assumed to be
(transfer full). That means the binding needs to free all the items in
the hash table, plus the table itself.
However, all the GHashTables we use have free functions set already, so
freeing the hash table will free its items. This results in a
double-free.
Fix that by ensuring we annotate such (out) hash tables as (transfer
container). Also annotate some other hash tables as (transfer none)
where appropriate, for clarity.
This fixes OSTree.Repo.list_collection_refs() in the Python bindings.
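The annotation pattern in question looks roughly like this (the function and parameter names below are illustrative, not real libostree symbols):
```
/**
 * example_list_refs: (illustrative only, not a real libostree symbol)
 * @out_refs: (out) (transfer container) (element-type utf8 utf8): the
 *   returned table already owns its keys and values via its free functions,
 *   so bindings must free only the container itself.
 */
```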
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1341
Approved by: dbnicholson
This reverts commit 519b30b7e1. Now that
the experimental GIR is being built correctly and OstreeRemote is a real
boxed type, this can be exposed again.
Closes: #1337
Approved by: pwithnall
Now that g-ir-scanner is being told about ENABLE_EXPERIMENTAL_API, it
can include these types correctly. Drop the __GI_SCANNER__ guards in the
header files so that all the declarations are found.
After this, you can actually construct the types normally:
>>> OSTree.CollectionRef.new('com.example.Foo', 'bar')
<OSTree.CollectionRef object at 0x7f2bba4c7528 (OstreeCollectionRef at 0x55c033ff2f30)>
Closes: #1337
Approved by: pwithnall
Without this, you can't really use OstreeRemote as a GObject, which is a
requirement for bindings.
This was found when attempting to include OstreeRemote in the GIR, and
g-ir-scanner wasn't able to link its temporary object due to an
"undefined reference to `ostree_remote_get_type'" error.
Closes: #1337
Approved by: pwithnall
Fedora switched to 'xz'-compressed kernel modules, and recently
[RHEL7 did too](https://bugzilla.redhat.com/show_bug.cgi?id=1367496).
This compression defeats bsdiff.
While we have a "rollsum-able" test, we don't have a "bsdiff-able" test as it'd
be very expensive (we'd have to bsdiff, then apply it and compare the result).
Let's do the tactical quick fix here and just not try to delta files ending in
`.xz`. This avoids us using bsdiff pointlessly for over 4000 files, which is
quite a notable speed increase for generating deltas.
Closes: #1333
Approved by: jlebon
This allows more explicit handling of commit state in code using
libostree, rather than hard-coding a commit state of 0 for ‘normal’.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1335
Approved by: cgwalters
Reorder cleanup functions so that curl_multi_cleanup() runs before
self->sockets is destroyed. This avoids an assert and invalid memory
access in sock_cb where self->sockets is dereferenced during
curl_multi_cleanup().
Closes: https://github.com/ostreedev/ostree/issues/1331
Closes: #1332
Approved by: cgwalters
A tricky thing here that caused this to go past a lot of our tests
is that the code was mostly OK if there was an available delta from
an older commit. But this case broke if we e.g. had a new OS
deployment and did a `--require-static-deltas` pull, i.e. the initial
state.
I cleaned up our "find static delta state" function to return an enumeration,
and extended it with an "already have the commit" state. A problem
I then hit is that we've historically fetched detached metadata for
non-delta pulls, even if the commit hasn't changed. I decided not to
do that for `--require-static-deltas` pulls for now; otherwise the
code gets notably more complex.
Closes: https://github.com/ostreedev/ostree/issues/1321
Closes: #1323
Approved by: jlebon
The main thing here is that a ton of stuff has happened in gnulib since we
imported `parse-datetime.y`. I cherry-picked a little bit of it, but that
upstream doesn't seem to build with `-Wundef`, so I just deleted some hunks.
(Note I reindented the warnings consistently)
Update submodule: libglnx
Closes: #1320
Approved by: jlebon
Since ostree_remote_get_type is not made available to g-ir-scanner, it
treats OstreeRemote as a bare struct. That's not kosher for bindings and
it issues the following warning:
src/libostree/ostree-repo-pull.c:5560: Warning: OSTree:
ostree_repo_resolve_keyring_for_collection: return value: Invalid
non-constant return of bare structure or union; register as boxed type
or (skip)
For now, just skip this API for bindings.
Closes: #1322
Approved by: pwithnall
g-ir-scanner was spitting this warning:
src/libostree/ostree-core.c:281: Warning: OSTree:
ostree_validate_collection_id: unknown parameter 'rev' in
documentation comment, should be 'collection_id'
Closes: #1322
Approved by: pwithnall
It's the slowest part, let's show admins something. This "update every 10%" code
was copied from the fsck command; obviously a better approach would be "progress
every N seconds" but doing that somewhat accurately requires making things
async; not worth it here yet.
Closes: #1314
Approved by: jlebon
I didn't fully spelunk this, but from what `static-delta-generate-crosscheck.sh`
had, we appeared to be doing this before, and it's clearly useful for local
testing rather than needing to spin up a HTTP server.
Closes: #1313
Approved by: jlebon
First, the manual crosscheck script bitrotted; it got caught up
in the "use libtest repo creation wrapper" bit, and also it
seems like at some point `pull --require-static-deltas` changed
meaning when dealing with `file:///` repos. I have more work to
unwind that.
Next, I'm seeing a delta failure which looks like a static delta
miscompilation with rollsums; change the compiler to print out
the source object too, which helped me debug this.
And finally in the processing code, fix incorrect error prefixing, which was
misleading.
Closes: #1311
Approved by: ashcrow
Fixes: 93457071cb "lib/deltas: Use pread() instead of lseek()+read()"
Caught this when trying to test alex's patch locally. I am going to review our
static delta pulls and try to get something more comprehensive locally. But in
the meantime this patch is clearly right.
Closes: #1312
Approved by: jlebon
When we allocate a new part, we directly finish the old one,
writing the compressed data to a temporary file and generating
the delta header for it.
When all these are done we loop over them and collect the headers,
sizes and either copy the tempfile data into the inlined superblock
or link the tempfiles to disk with the proper names.
Closes: #1309
Approved by: cgwalters
This allows us to create the final delta descriptor directly on disk
rather than having it all in memory. This is nice because it can
become quite large if inlined parts are used.
Note however, that we currently generate all the delta parts in
memory before adding them to the delta, so we still keep all individual
parts in memory. Fixing that is the next step.
Closes: #1309
Approved by: cgwalters
This makes the code nicer too. Properly unit testing this though really wants
like a whole set of stuff around parent repos...but we do have coverage of the
non-parent path in the current pull tests.
Closes: https://github.com/ostreedev/ostree/issues/1306
Closes: #1308
Approved by: alexlarsson
For example, tmpfs or a cgroup file system. This is basically an
optimisation of the list of file systems we check for repositories,
since we would never expect any of these file systems to be capable of
containing a repository.
Depends on the new API from
https://bugzilla.gnome.org/show_bug.cgi?id=788927.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1307
Approved by: cgwalters
These two code paths tried to propagate errors which had never been set.
Set new errors instead.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1303
Approved by: cgwalters
A GVariantIter* was being passed to a GVariant format string varargs,
rather than a GVariantIter**. This resulted in memory corruption.
So that we can continue to reuse ref_map throughout the function, make it a
GVariantIter* rather than a stack-allocated GVariantIter.
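For reference, the correct pattern is roughly (the `value` variable here is illustrative):
```
/* Correct pattern: pass a GVariantIter ** so g_variant_get() can hand back
 * a heap-allocated iterator. */
GVariantIter *ref_map_iter = NULL;
g_variant_get (value, "a{sv}", &ref_map_iter);
/* ... iterate over ref_map_iter ... */
g_variant_iter_free (ref_map_iter);
```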
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1301
Approved by: cgwalters
This is more subtle fallout from:
https://github.com/ostreedev/ostree/pull/1170
AKA commit: 8fe4536257
Before, if we found a devino cache hit, we'd use it unconditionally.
Recall that `bare-user` repositories are very special in that they're the only
mode where the on disk state ("physical state") is not the "real" state. The
latter is stored in the `user.ostreemeta` xattr. (`bare-user` repos are also
highly special in that symlinks are regular files physically, but that's not
immediately relevant here).
Since we now have `bare-user-only` for the "pure unprivileged container" case,
`bare-user` should just be used for "OS builds" which have nonzero uids (and
possibly SELinux labels etc.)
In an experimental tool I'm writing, "skopeo2ostree", which imports OCI images
into refs and then squashes them together into a single final commit, we lost
the `81` group ID for `/usr/libexec/dbus-1/dbus-daemon-launch-helper`.
This happened because the commit code was loading the "physical" disk state,
where the uid/gid are zero because that's the uid I happened to be using. We
didn't just directly do the link speedup because I was using `--selinux-policy`
which caused the xattrs to change, which caused us to re-commit objects from the
physical state.
The unit test I added actually doesn't quite trigger this, but I left
it because "why not". Really testing this requires the installed test
which uses SELinux policy from `/`.
The behavior without this fix looks like:
```
-00755 0 0 12 { [(b'user.ostreemeta', [byte 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x51, 0x00, 0x00, 0x81, 0xed]), (b'security.selinux', b'system_u:object_r:lib_t:s0')] } /usr/lib/dbus-daemon-helper
```
which was obviously totally broken - we shouldn't be picking up the
`user.ostreemeta` xattr and actually committing it of course.
Closes: #1297
Approved by: jlebon
Add some missing annotations and clarify that it always returns an open
repository on success.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1300
Approved by: cgwalters
This is a freeform string useful to track/display when a commit is "derived"
from some other format. For example, in the rpm-ostree test we make a
`vmcheck` ref that conceptually overlays the default ref like
`fedora-atomic:fedora/26/x86_64/atomic-host`.
My current patch sets the source title to e.g.
"Dev overlay on fedora-atomic:fedora/26/x86_64/atomic-host".
Another case I'm working on now is importing OCI images to use
as host images. For that case, with this patch we could set the original
OCI image name + tag as the source title, like:
"oci:cgwalters/demo-custom-fedora-atomic-host:26".
Closes: #1296
Approved by: jlebon
I was trying to use this with pygobject for an OCI+ostree project, and pygobject
rejected simply assigning to the field (understandably, since it can't bind the
lifetime together).
Add a wrapper function, which is still unsafe, but hides that unsafety
where most people shouldn't find it. And if they do...well, sorry,
Rust wasn't invented when ostree was started.
Closes: #1295
Approved by: pwithnall
I'm playing around with some ostree ⇔ OCI/Docker bits, and ran
into this while importing an OCI image that built from the Fedora
base image where `/home` is a regular directory, and I added a layer
that did the ostree bits of moving it to `/var` and leaving a symlink.
OCI/Docker supports this. Now since "process whiteouts" is really the
"enable OCI/Docker" mode, let's only replace dirs if that's enabled.
This leaves the `UNION_FILES` targeted for its original use case
which is unioning components/packages. (Although that use case itself
is now a bit superseded by `UNION_IDENTICAL`, but eh).
Closes: #1294
Approved by: jlebon
This is another OstreeRepoFinder implementation; it returns results from
a given set of URIs. It’s designed to be used for implementing user
overrides to other repo-finders, or for implementing unit tests.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1281
Approved by: mwleeds
Use g_variant_iter_loop() rather than next(), since it automatically
handles freeing the child memory each iteration. Previously, we leaked
it for all but the last iteration.
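For reference, the two patterns over a fresh iterator look roughly like this (the helper name is hypothetical):
```
/* With g_variant_iter_next(), the caller owns each value and must free it: */
char *name = NULL;
while (g_variant_iter_next (iter, "s", &name))
  {
    handle_name (name);  /* hypothetical helper */
    g_free (name);       /* forgetting this was the leak */
  }

/* With g_variant_iter_loop(), the previous value is freed automatically on
 * each iteration (and on loop exit), so no manual g_free() is needed: */
while (g_variant_iter_loop (iter, "s", &name))
  handle_name (name);
```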
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1293
Approved by: cgwalters
Change the regexp for validating refs to require at least one letter or digit
before allowing the other special chars in the set `[.-_]`. Names that start
with `.` are traditionally Unix hidden files; let's ignore them under the
assumption they're metadata for some other tool, and we don't want to
potentially conflict with the special `.` and `..` Unix directory entries.
Further, names starting with `-` are problematic for Unix cmdline option
processing; there's no good reason to support that. Finally, disallow `_` just
on general principle - it's simpler to say that ref identifiers must start with
a letter or digit.
We also ignore any existing files (that might be previously created refs) that
start with `.` in the `refs/` directory - there's a Red Hat tool for content
management that injects `.rsync` files, which is why this patch was first
written.
V1: Update to ban all refs starting with a non-letter/digit, and
also add another call to `ostree_validate_rev` in the pull
code.
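Callers can check a candidate ref against these rules with the existing validator, e.g. (sketch):
```
/* Sketch: refs that don't start with a letter or digit are now rejected. */
g_autoptr(GError) local_error = NULL;
if (!ostree_validate_rev (".rsync-tmp/ref", &local_error))
  g_printerr ("invalid ref: %s\n", local_error->message);
```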
Closes: https://github.com/ostreedev/ostree/issues/1285
Closes: #1286
Approved by: jlebon
The way `_ostree_repo_open_content_bare()` did both looking for the object and
possibly creating a new fd was just weird and inconsistent with e.g. the pull
code where we always call `has_object()` first.
Just call `has_object()` in the delta paths that used this too, making the
implementation right now a thin wrapper around
`glnx_open_tmpfile_linkable_at()`, but this is prep for a later patch which does
more.
Closes: #1283
Approved by: jlebon
This is another case where making an input stream out of a memory buffer is a
bit silly; just hash the `GBytes` directly.
Closes: #1287
Approved by: jlebon
A side effect of commit 8fe4536257 is that
we started listing all xattrs even for files with device/inode matches;
further, we did that using the dfd/name which means we went through
the `/proc` path, which is slower and uglier.
Noticed this in strace while looking at adoption code.
Closes: #1280
Approved by: jlebon
That's why the syscall was invented, so let's use it. Just noticed while reading
the code while working on another patch.
Closes: #1270
Approved by: jlebon
This isn't perfect, but at least we fix an error-overwrite error, and in
practice `ostree admin unlock` isn't wrapped by `rpm-ostree` yet, so spew to
stderr is OK.
Closes: https://github.com/ostreedev/ostree/issues/1273
Closes: #1279
Approved by: guyshapiro
For checkouts that are on the same device, for regular files we can simply
"adopt" existing files. This is useful in the "build from subtrees" pattern that
happens with e.g. `rpm-ostree install` as well as flatpak and gnome-continuous.
New files are things like an updated `ldconfig` cache, etc. And particularly for
`rpm-ostree` we always regenerate the rpmdb, which for e.g. this workstation is
`61MB`.
We probably should have done this from the start, and instead had a `--copy`
flag to commit, but obviously we have to be backwards compatible.
There's more to do here - the biggest gap is probably for `bare-user` repos,
which are often used with things like `rpm-ostree compose tree` for host
systems. But we can do that later.
Closes: #1272
Approved by: jlebon
I was working on a patch to build on the work done to
import content objects asynchronously and do the same for metadata, but right
now we basically rely on writing them first to do the GPG verification
when scanning.
Things will be cleaner for that if we can pass the commit object directly into
`scan_commit_object()` and consistently use `gpg_verify_unwritten_commit()`.
We're careful here to continue to do it both ways (but at most one time), to
account for the case where a bad commit has been pulled and written - we need to
keep failing GPG verification there.
Closes: #1269
Approved by: jlebon
Prep for a later patch to do GPG verification before writing commit objects;
`_ostree_repo_gpg_verify_with_metadata()` already handles this, and so dropping
this gives us consistent error messages.
Closes: #1269
Approved by: jlebon
There's a subtle issue going on with the way we use `UNION_IDENTICAL`
now in rpm-ostree. Basically, the crux of the issue is that we checkout
the whole tree from the system repo, but then overlay packages by
checking out from the pkgcache repo. This is an easy way to break the
assumption that we will be merging hardlinks from the same repo.
This ends up causing issues like:
https://github.com/projectatomic/rpm-ostree/issues/1047
There, `vim-minimal` is already part of the host and has an object for
`/usr/share/man/man1/ex.1.gz`. `vim-common` has that same file, but
because it's unpacked in the pkgcache repo first, the hardlinks are not
the same.
There are a few ways we *could* work around this in rpm-ostree itself,
e.g. by re-establishing hardlinks when we do the content pull into the
system repo, but it still felt somewhat hacky. Let's just do this the
proper way and fall back to checksumming the target file if needed,
which is what librpm does as well in this case. Note that we only
checksum if they're not hardlinks but are the same size.
Closes: #1258
Approved by: cgwalters
Let's react to `Ctrl-C` faster here. Noticed while I was doing an update on my
desktop and playing with cancellation.
Closes: #1266
Approved by: jlebon
This is like `ostree_checksum_file` but fd-relative. This will be used
by https://github.com/ostreedev/ostree/pull/1258.
AFAICT, we actually didn't have any tests that check the `checksum` CLI.
Add a basic one here to test the old code as well as the new code.
Closes: #1263
Approved by: cgwalters
This works around an (IMO) SpiderMonkey bug - it tries to
clean up in a shared library destructor, but doesn't install a
`pthread_atfork()` handler to unset its state.
Closes: https://github.com/ostreedev/ostree/issues/1262
Closes: #1264
Approved by: dbnicholson
This ends up a lot better IMO. This commit is *mostly* just
`s/glnx_close_fd/glnx_autofd`, but there's also a number of hunks like:
```
- if (self->sysroot_fd != -1)
- {
- (void) close (self->sysroot_fd);
- self->sysroot_fd = -1;
- }
+ glnx_close_fd (&self->sysroot_fd);
```
Update submodule: libglnx
Closes: #1259
Approved by: jlebon
It's no longer called directly by the pull code, so make it static.
The goal here is to have the pull and local-fs commit paths use higher level
more efficient APIs, and eventually make those APIs public.
Closes: #1257
Approved by: jlebon
This simplifies a lot of code; the header function was structured
to write to an input stream, but many callers only wanted the checksum,
so it's simpler (and error-free) to simply allocate a whole buffer
and checksum that.
For the callers that want to write it, it's also still simpler to allocate the
buffer and write the whole thing rather than having this function do the
writing.
A lot of the complexity here again is a legacy of the packfile code, which is
dead.
This is prep for faster regfile commits where we can avoid `G{In,Out}putStream`.
Closes: #1257
Approved by: jlebon
Nothing was using the `bytes_written` data (we always discard partially written
tmpfiles), so simplify everything by dropping it. Further, we always passed an
offset of `0`, so drop that argument too. (I believe that this was previously
used by the "pack files" code that we deleted long ago)
Second, we had an unnecessary internal wrapper for this function; drop that too.
Closes: #1257
Approved by: jlebon
If the filesystem is already frozen, FIFREEZE returns EBUSY, and if the
filesystem is already thawed, FITHAW returns EINVAL. It's very unlikely
these issues would arise on a real ostree system since the sysroot would
be locked during the freeze/thaw cycle.
However, when multiple fake sysroots are used during the test suite (run
as root), the tests could race to run the freeze/thaw cycle without
locking. Furthermore, there's nothing preventing an independent process from
trying to freeze the filesystem while ostree is deploying. Ignore
but warn for these errors since there's not much ostree can do about it,
anyways.
Closes: #1260
Approved by: cgwalters
The faster (OpenSSL/GnuTLS) code lived in a `GInputStream` wrapper, and that
adds a lot of weight (GObject + vtable calls). Move it into a simple
autoptr-struct wrapper, and use it in the metadata path, so we're
now using the faster checksums there too.
This also drops a malloc there as the new API does hexdigest in place to a
buffer.
Prep for more work in the commit path to avoid `GInputStream` for local file
commits, and ["adopting" files](https://github.com/ostreedev/ostree/pull/1255).
Closes: #1256
Approved by: jlebon
For many cases of commit, we can actually optimize things by simply "adopting"
the object rather than writing a new copy. For example, in rpm-ostree package
layering.
We can only make that optimization though if we take ownership of the file. This
commit hence adds an API where a caller tells us to do so. For now, that just
means we `unlink()` the files/dirs as we go, but we can now later add the
"adopt" optimization.
Closes: #1255
Approved by: jlebon
What the deltas code is doing is weird/unfortunate. The name
`ot_variant_read()` conflicts too much with `ot_variant_read_fd()`.
Since nothing else uses it, move it into the deltas code.
Closes: #1254
Approved by: jlebon
A lot of the libostree code is honestly too complex for its
own good (this is mostly my fault). The way we do HTTP writes
is still one of those. The way the fetcher writes tempfiles,
then reads them back in is definitely one of those.
Now that we've dropped the "partial object" bits in:
https://github.com/ostreedev/ostree/pull/1176 i.e. commit
0488b4870e
we can simplify things a lot more by having the fetcher
return an `O_TMPFILE` rather than a filename.
For trusted archive mirroring, we need to enable linking
in the tmpfiles directly.
Otherwise for at least content objects they're compressed, so we couldn't link
them in. For metadata, we need to do similar logic to what we have around
`mmap()` to only grab a tmpfile if the size is large enough.
Closes: #1252
Approved by: jlebon
Buried in this large patch is a logical fix:
```
- if (!map)
- return glnx_throw_errno_prefix (error, "mmap");
+ if (map == (void*)-1)
+ return glnx_null_throw_errno_prefix (error, "mmap");
```
Which would have helped me debug another patch I was working
on. But it turns out that actually correctly checking for
errors from `mmap()` triggers lots of other bugs - basically
because we sometimes handle zero-length variants (in detached
metadata). When we start actually returning errors due to
this, things break. (It wasn't a problem in practice before
because most things looked at the zero size, not the data).
Anyways there's a bigger picture issue here - a while ago
we made a fix to only use `mmap()` for reading metadata from disk
only if it was large enough (i.e. `>16k`). But that didn't
help various other paths in the pull code and others that were
directly doing the `mmap()`.
Fix this by having a proper low level fs helper that does "read all data from
fd+offset into GBytes", which handles the size check. Then the `GVariant` bits
are just a clean layer on top of this. (At the small cost of an additional
allocation)
Side note: I had to remind myself, but the reason we can't just use
`GMappedFile` here is it doesn't support passing an offset into `mmap()`.
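A rough sketch of such a helper, under the assumption that it boils down to a `pread()` loop into a `GBytes` (the name and the size-threshold handling are illustrative):
```
/* Illustrative sketch: read len bytes at offset into a GBytes; the real
 * helper would only switch to mmap() above a size threshold. */
static GBytes *
read_bytes_at_offset (int fd, goffset offset, gsize len, GError **error)
{
  g_autofree guint8 *buf = g_malloc (len);
  gsize total = 0;
  while (total < len)
    {
      ssize_t n = TEMP_FAILURE_RETRY (pread (fd, buf + total, len - total,
                                             offset + (goffset)total));
      if (n < 0)
        return glnx_null_throw_errno_prefix (error, "pread");
      if (n == 0)
        break;  /* short file; return what we have */
      total += (gsize)n;
    }
  return g_bytes_new_take (g_steal_pointer (&buf), total);
}
```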
Closes: #1251
Approved by: jlebon
Appease Coverity by using the same condition for both the ternary check
and the if-condition later on. It should be smart enough to figure out
that `dir_enum == NULL` implies that `dfd_iter != NULL` from the
assertion at the top of the function.
Coverity CID: #1457318
Closes: #1250
Approved by: cgwalters
Spotted while reading through the code, it looks like the
copy_detached_metadata() call is accidentally omitted if a hardlink
already exists for the .commit object.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1242
Approved by: cgwalters
This commit adds debug output whenever libostree reads GPG keys, which
can come from different locations in the file system. This is especially
helpful in debugging "GPG signatures found, but none are in trusted
keyring" errors, which in my case was caused by OSTree looking in
/usr/local/share/ostree/trusted.gpg.d/ rather than
/usr/share/ostree/trusted.gpg.d/.
Closes: #1241
Approved by: cgwalters
I'm regretting a bit having the `guint8*csum` variant of checksums
except for the serialized form. Once we start doing processing
it's easier to just have it remain hex.
Do an on-stack conversion for the metadata scanning function; this
drops a malloc and also just looks nicer.
Also add some long-awaited function comments to the two.
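The on-stack conversion mentioned is roughly (with `csum` assumed to be a 32-byte binary checksum already in hand):
```
/* Sketch: convert a binary checksum to hex on the stack, avoiding a malloc. */
char hexdigest[OSTREE_SHA256_STRING_LEN + 1];
ostree_checksum_inplace_from_bytes (csum, hexdigest);
```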
Closes: #1240
Approved by: jlebon
These shouldn’t change the bloom filter’s behaviour at all, but make it
a bit more obvious what the programmatic limitations are on the sizes
it can deal with.
In reality, those sizes should never be reached because they won’t fit
in a DNS-SD record.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1239
Approved by: cgwalters
I was reading the pull code for the last release, and spotted
a bug in commit f923c2e1ea - in
the case where the ref doesn't exist, we don't set an error,
tripping an assertion in the main code.
The previous code wanted the ref to always exist, so just flip back the boolean
for "ignore noent". I moved the `g_strchomp()` just into the HTTP path - if a
local repo is corrupted in this way it's something to fix in that repo.
Closes: #1238
Approved by: pwithnall
Compiling with -Wconversion warns on this line, as the conversion from
guint64 to guint8 is implicit (but safe: there is no bug here, since the
implicit cast is applied after the modulus arithmetic).
Make the cast explicit to silence -Wconversion.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1231
Approved by: cgwalters
There was an implicit cast from guint64 to gsize (which is 32-bit on
armhf, for example) before the modulus arithmetic which safely narrows
the index.
Fix that by using a guint64 intermediate variable and making the cast
explicit.
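The pattern being applied, roughly (variable names here are illustrative):
```
/* Sketch: keep the modulus in guint64, then narrow with an explicit cast
 * so -Wconversion has nothing to complain about. */
guint64 idx64 = hash % (guint64) n_slots;
gsize index = (gsize) idx64;
```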
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1231
Approved by: cgwalters