This option allows a repo to explicitly opt out of adding new remotes in
a remotes configuration directory. It currently defaults to true for
system repos and false for non-system repos, preserving the legacy
behavior in which non-system repos don't add remotes in a configuration
directory.
That would be problematic for flatpak, which specifies a remotes config
dir but adds remotes in ways that are incompatible with it.
So, what this really does is allow system repos to control whether or
not they want to add remotes in the config dir. That's important if your
flatpak repo is the system repo, as at Endless.
Closes: #1134
Closes: #1155
Approved by: cgwalters
Before commit e0346c1, a non-system repo could specify
remotes-config-dir and have remotes read from there. However, adding
remotes would only be done in the config dir for a system repo. Restore
that by respecting remotes-config-dir when no sysroot is found and
adding back the ostree_repo_is_system() check when adding remotes.
Closes: #1133
Closes: #1151
Approved by: cgwalters
There were some important ones there, like a random `syncfs()`. The remaining
users are mostly blocked on the "fstatat enoent" case; I'll wait to port those.
Closes: #1150
Approved by: jlebon
Add keys from the signing homedir to the GpgVerifier used to look
for duplicate signatures. This will allow signatures from subkeys
to be canonicalised and recognised as already signed despite the
differing key ID, avoiding duplicate signatures.
Closes: https://github.com/ostreedev/ostree/issues/608
Closes: #1092
Approved by: cgwalters
We have `ot_ensure_unlinked_at()` for the "ignore ENOENT" case, and
`glnx_unlinkat()` otherwise. Port all in-tree callers to one or the other as
appropriate.
Just noticed an unprefixed error in the refs case and decided to do a tree-wide
check.
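A minimal sketch of the two flavors, using plain POSIX calls (the real helpers
wrap these with GError reporting):
```
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* "Ignore ENOENT" flavor: the file not existing is fine. */
static bool
ensure_unlinked_at (int dfd, const char *path)
{
  if (unlinkat (dfd, path, 0) < 0 && errno != ENOENT)
    {
      fprintf (stderr, "unlinkat(%s): %s\n", path, strerror (errno));
      return false;
    }
  return true;
}

/* Strict flavor: any failure, including ENOENT, is an error. */
static bool
unlink_at_strict (int dfd, const char *path)
{
  if (unlinkat (dfd, path, 0) < 0)
    {
      fprintf (stderr, "unlinkat(%s): %s\n", path, strerror (errno));
      return false;
    }
  return true;
}
```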
Closes: #1142
Approved by: jlebon
I'd mostly been skipping the GPG functions due to lack of autoptr for a few
things, but I noticed these bits were straightforward.
Closes: #1136
Approved by: jlebon
The vast majority of invocations of `ot_gpgme_error_to_gio_error()` were paired
with `g_prefix_error()`; let's combine them for the same reason we do
`glnx_throw_errno_prefix()`. For the few cases that don't, we might as well add
some prefix.
I also changed it to `return FALSE` in prep for more style porting.
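A sketch of the combined-helper idea (the helper name and exact error mapping
are illustrative; `gpgme_strerror()` is the real gpgme call):
```
#include <gio/gio.h>
#include <gpgme.h>

/* Hypothetical combined helper: convert a gpgme error into a GIO-domain
 * GError, add a prefix, and return FALSE so callers can write
 * `return example_gpgme_throw (...)` in one line, mirroring
 * glnx_throw_errno_prefix(). */
static gboolean
example_gpgme_throw (gpgme_error_t gpg_error, GError **error,
                     const char *prefix)
{
  g_set_error (error, G_IO_ERROR, G_IO_ERROR_FAILED,
               "%s: %s", prefix, gpgme_strerror (gpg_error));
  return FALSE;
}
```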
Closes: #1135
Approved by: jlebon
However, they weren't showing up in the output HTML and I have
no idea why; I looked at what we're doing and it looks close enough
to what's going on in `GDBusConnection`, which I was using as a reference.
I'm not going to spend a lot of time debugging it right now.
Closes: #1140
Approved by: jlebon
When working with collections it can be useful to see remote refs rather
than just local and mirrored ones. This commit changes the "ostree refs
-c" output to include remote refs, and includes remote refs with
collection IDs in summary file generation as well. The former behavior
is consistent with how "ostree refs" works, and the latter behavior is
useful in facilitating P2P updates even when mirrors haven't been
configured.
To accomplish this, OstreeRepoListRefsExtFlags was extended with an
EXCLUDE_REMOTES flag. This was done rather than an INCLUDE_REMOTES flag
so that existing calls to ostree_repo_list_refs_ext continue to have the
same behavior. This flag was added to ostree_repo_list_collection_refs
(which is an experimental API break).
Also, add unit tests for the "refs -c" and summary file behavior, and
update relevant tests.
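A sketch of how a caller keeps the old behavior by excluding remote refs
(assuming the flag is spelled `OSTREE_REPO_LIST_REFS_EXT_EXCLUDE_REMOTES` and
that the table maps refspec strings to checksums, per the commit text):
```
#include <ostree.h>

static gboolean
list_non_remote_refs (OstreeRepo *repo, GCancellable *cancellable, GError **error)
{
  g_autoptr(GHashTable) refs = NULL;  /* refspec (char*) -> checksum (char*) */
  if (!ostree_repo_list_refs_ext (repo, NULL, &refs,
                                  OSTREE_REPO_LIST_REFS_EXT_EXCLUDE_REMOTES,
                                  cancellable, error))
    return FALSE;

  GHashTableIter iter;
  gpointer ref, checksum;
  g_hash_table_iter_init (&iter, refs);
  while (g_hash_table_iter_next (&iter, &ref, &checksum))
    g_print ("%s => %s\n", (const char *) ref, (const char *) checksum);
  return TRUE;
}
```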
Closes: #1069
Approved by: cgwalters
Things like https://sourceware.org/libabigail/manual/abidiff.html
look interesting but in a brief look I couldn't work out
how to conveniently use them for quick ABI sanity checking without
doing a diff from a previous build (which we could do but would be
more involved).
This way we will at least catch struct ABI breaks on x86_64, which
I think we'd be most likely to introduce accidentally when trying
to use one of the previously unused values.
I found the hole values via gdb's `pahole` command.
Closes: #1108
Approved by: jlebon
This essentially completes our fd-relative conversion.
While here, I cleaned up the semantics of `ostree_repo_create()` and
`ostree_repo_create_at()` to be more atomic - basically various scripts were
testing for the `objects` subdirectory, so let's formalize that.
Closes: #820
Approved by: jlebon
Add a new error domain for GPG signing/verification errors, and use it
throughout libostree for describing verification errors. This replaces
various uses of G_IO_ERROR_FAILED, and one instance of
G_IO_ERROR_NOT_FOUND (for which some code in ot-builtin-show.c had to be
changed to ensure it was still handled correctly).
The use of a separate error domain allows failures in GPG operations to
be handled separately from network failures (where the summary file
could not be found to be downloaded, for example) or timeouts.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1064
Closes: #1071
Approved by: mbarnes
Part of cleaning up our usage of libglnx; we want to use what's in GLib where we
can.
Had to change a few .c files to `#include ostree.h` early on to pick up
autoptrs for the core types.
Closes: #1040
Approved by: jlebon
Prep for dropping `GLNX_DEFINE_CLEANUP_FUNCTION` from libglnx
in favor of using GLib's `G_DEFINE_AUTO_CLEANUP_FREE_FUNC()`.
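For illustration, the GLib macro in question teaches `g_auto()` how to release
a value type; the pairing with gpgme here is an assumption for the example:
```
#include <glib.h>
#include <gpgme.h>

/* Define cleanup glue for g_auto(gpgme_ctx_t): call gpgme_release() on
 * scope exit unless the value is still NULL (the third argument). */
G_DEFINE_AUTO_CLEANUP_FREE_FUNC (gpgme_ctx_t, gpgme_release, NULL)

static void
example (void)
{
  g_auto(gpgme_ctx_t) ctx = NULL;
  if (gpgme_new (&ctx) != 0)
    return;
  /* ctx is released automatically when it goes out of scope. */
}
```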
Closes: #1042
Approved by: jlebon
Mostly for the latest `-Wmaybe-uninitialized` fix, but while here also port some
places to newer APIs.
Update submodule: libglnx
Closes: #1027
Approved by: jlebon
Using the error prefixing in the delta processing allows us to
use the new code style. Also strip trailing whitespace.
Use error prefixing in a few other random places. I didn't
hunt for all of them, just testing out the new API.
Use `glnx_fchmod()`. Also note I dropped one `fchmod (tmpf, 0600)`
which is no longer necessary.
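A sketch of that style, assuming libglnx's `glnx_throw_errno_prefix()` (sets a
GError from errno with a prefix and returns FALSE) and its umbrella header:
```
#include <glib.h>
#include <sys/stat.h>
#include "libglnx.h"   /* assumed umbrella header providing glnx_throw_errno_prefix() */

static gboolean
chmod_tmpfile (int fd, GError **error)
{
  /* One-liner error handling: no separate g_set_error()/goto-out dance. */
  if (fchmod (fd, 0644) < 0)
    return glnx_throw_errno_prefix (error, "fchmod");
  return TRUE;
}
```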
Update submodule: libglnx
Closes: #1011
Approved by: jlebon
(Note this PR was reverted in <https://github.com/ostreedev/ostree/pull/902>;
this version should be better)
Using `${sysroot}` to mean the physical storage root: We don't want to write to
`${sysroot}/etc/ostree/remotes.d`, since nothing will read it, and really
`${sysroot}` should just have `/ostree` (ideally). Today the Anaconda rpmostree
code ends up writing there. Fix this by adding a notion of "physical" sysroot.
We determine whether the path is physical by checking for `/sysroot`, which
exists in deployment roots (and there shouldn't be a `${sysroot}/sysroot`).
In order to unit test this, I added a `--sysroot` argument to `remote add`.
However, doing this better would require reworking the command line parsing for
the `remote` argument to support specifying `--repo` or `--sysroot`, and I
didn't quite want to do that yet in this patch.
This second iteration of this patch fixes the bug we hit the first time;
embarrassingly enough, I broke `ostree remote list` finding system remotes.
The fix is to have `ostree_repo_open()` figure out whether it's the same
as `/ostree/repo` for now.
Down the line...we might consider having the `ostree remote` command line itself
instantiate an `OstreeSysroot` by default, but this maximizes compatibility; we
just have to pay a small cost that `ostree` usage outside of that case like
`ostree static-delta` in a releng Jenkins job or whatever will do this `stat()`
too.
Closes: https://github.com/ostreedev/ostree/issues/892
Closes: #1008
Approved by: mbarnes
And in general, if for some reason we can't write `user.` xattrs, provide an
error immediately rather than doing it during a later pull. This way the failure
cause is a lot more obvious.
Related: https://github.com/ostreedev/ostree/issues/991
Closes: #993
Approved by: jlebon
For ostree-as-host, we're the superuser, so we'll blow past
any reserved free space by default. While deltas have size
metadata, if one happens to do a loose fetch, we can fill
up the disk.
Another case is flatpak: the system helper has similar concerns
here as ostree-as-host, and for `flatpak --user`, we also
want to be nice and avoid filling up the user's quota.
Closes: https://github.com/ostreedev/ostree/issues/962
Closes: #987
Approved by: jlebon
An inverted condition in _ostree_repo_add_remote() was causing the
OstreeRepoFinder to delete precisely the wrong remote
configurations from memory once it was finished. It’s supposed to delete
the ones which it transiently added, but was instead deleting all the
existing remote configurations.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #985
Approved by: cgwalters
It's more natural for a few calling places. Prep for patches to go the other
way, which in turn are prep for adding a commit filter v2 that takes `struct
stat`.
`ot_gfile_type_for_mode()` was only used in this function, so inline it here.
Closes: #974
Approved by: jlebon
There's lots of mechanically replacing `OtTmpFile` with `GLnxTmpfile`;
the biggest changes are in the commit path. Symlink commits are now
very clearly separated from regular files. Symlinks are `OtCleanupUnlinkat`,
and regular files are `GLnxTmpfile`.
The commit codepath separates those as `_ostree_repo_commit_path_final()` and
`_ostree_repo_commit_tmpf_final()`. A nice aspect of all of this is that they
both *consume* the temporary on success. This avoids an extra spurious
`unlink()` call.
One of the biggest bits of code motion is in `commit_loose_regfile_object()`,
which no longer needs to care about symlinks. For the most part, though, it's
just removing conditionals.
Update submodule: libglnx
Closes: #958
Approved by: jlebon
These are tuples of (collection ID, ref name) which are a globally-unique
form of local ref. They use OstreeCollectionRef as an identifier, and hence
need to be accessed using new API, as the existing API uses string
identifiers and sometimes accepts refspecs. Remote names are not
supported as part of an OstreeCollectionRef.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #924
Approved by: cgwalters
Add {get,set}_collection_id() methods to OstreeRepo and some documentation
about the concept of a collection ID which globally identifies an
upstream repository. See the documentation for more details.
This will be used in future commits. For now, the new API is marked as
experimental (--enable-experimental-api).
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #924
Approved by: cgwalters
This will make some future additions to regenerate_summary() easier.
This commit introduces no functional changes.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #924
Approved by: cgwalters
This is followon work from previous cleanups. Basically
`stat_bare_content_object()` was the `fstatat()` logic
and `ostree_repo_read_bare_fd()` was the `openat()` implementation;
they duplicated some bits to find the object in staging, recurse
into parent etc.
Further, I wanted an internal-only version of this API which didn't allocate
`GFileInfo`/`GInputStream` but used a plain `fd` and `struct stat` to avoid
mallocs.
The end version here I think looks a lot nicer, since we deduplicate the various
`open()` calls in the different cases for example.
Closes: #952
Approved by: jlebon
Prep for future cleanup patches (in particular I want an internal-only
version at first that uses a fd+`struct stat`) to avoid allocations.
The new version avoids lots of deep nesting of conditionals as well
by hoisting the "not found" handling to an early return.
There's a bit of code duplication between the two cases but it's
quite worth the result.
Closes: #951
Approved by: jlebon
There are a few places in the code where ad-hoc validation was being
performed. Might as well formalise it a bit more.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #948
Approved by: cgwalters
Change the annotation of the out parameters on ostree_repo_load_file
from `(allow-none)` to `(optional) (nullable)`. `allow-none` is
ambiguous, since these parameters can be both NULL on input and set to
NULL on return.
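For illustration, the before/after on one of the out parameters might look
roughly like this (doc text abbreviated):
```
/* Before (ambiguous -- does NULL refer to the caller's argument or the result?): */
/**
 * @out_xattrs: (allow-none): Extended attributes
 */

/* After (explicit about both directions): */
/**
 * @out_xattrs: (out) (optional) (nullable): Extended attributes; the caller
 *   may pass %NULL if it doesn't need them, and the returned value may
 *   itself be %NULL.
 */
```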
Closes: #939
Approved by: cgwalters
Thinking about the problem of flatpak converting from `bare-user` to `bare-user-only`
"in place" by creating a new repo and doing a `pull-local`, I realized
that we can optimize this process by doing hardlinks for both metadata
and regular files. The repo formats are *almost* compatible, the
exception being symlinks.
An earlier patch caused us to do hardlinks for metadata, this patch takes things
to the next step and special cases this specific conversion. In this case we
need to parse the source object to determine whether or not it's a symlink.
Closes: #922
Approved by: alexlarsson
Our previous logic for import-via-hardlink only tried if the repo modes match,
but we *can* hardlink metadata between e.g. `archive` and `bare-user` repos, and
that's quite a useful thing to do. Our documentation encourages converting to/from
those repo modes locally for build systems.
Closes: #922
Approved by: alexlarsson
Before this, if one had repos of matching mode but different owners,
which could happen if one e.g. makes a `bare` non-root repo in
`/ostree/deploy/$stateroot/var/tmp`, every time we tried to call `linkat()`
we'd get `EPERM` and fall back to a copy.
Fix this by saving the repo owner uid, and avoid trying to call `linkat()` if we
know it's going to fail. Of course most commonly in this scenario we'll
immediately fail trying to `chown` the files to `0`, but this is prep for a
future patch to improve `bare-user` → `bare-user-only` imports where we'll be a
bit more sophisticated.
Closes: #922
Approved by: alexlarsson
It's often the case that we want to look at objects inside a commit
before the transaction is finished. For instance:
https://github.com/flatpak/flatpak/pull/837
which tries to verify the file permissions before committing the
transaction.
And:
1e5ffa926a
which collects the storage size of the objects so that we can
put the total download size in the commit metadata.
I tried to find all the places where we did reads from the
object directories, and in particular this fixes:
- `ostree_repo_load_file()` for `bare` repos (`archive` was already working).
- `ostree_repo_query_object_storage_size()`
- Applying deltas that reference not-yet-committed objects
Closes: #916
Approved by: cgwalters
This came up in: https://github.com/ostreedev/ostree/pull/881
Basically doing streaming for metadata is dumb. Split up the metadata/content
paths so we pass metadata around as `GVariant`. This drops the last internal
caller of `ostree_repo_write_metadata_stream_trusted()` which was the dumb
function mentioned.
Closes: #923
Approved by: jlebon
If there are no deltas to be listed in the summary file, don’t bother
including the key for them in the additional metadata section of the
file. This saves a few bytes in some cases.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #911
Approved by: cgwalters
This makes it a bit more easily separable from the rest of the code in
the function. No functional changes.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #911
Approved by: cgwalters
Copying xattrs when manipulating the GPG keyring for a repository
causes errors when the underlying filesystem doesn't support writing
xattrs - overlayfs is a common example. It also causes the SELinux
attributes of the keyring files to be copied from the temporary
location instead of properly inherited from the destination directory
(ending up, for example, as unconfined_u:object_r:user_tmp_t:s0 rather
than unconfined_u:object_r:data_home_t:s0).
Closes: #910
Approved by: cgwalters
This reverts commit 1eff3e8343. There
are a few issues with it. It's not a critical thing for now, so
let's ugly up the git history and revisit when we have time to
debug it and add more tests.
Besides the below issue, I noticed that the simple `ostree remote add`
now writes to `/ostree/repo/config` because we *aren't* using the
`--sysroot` argument.
Closes: https://github.com/ostreedev/ostree/issues/901
Closes: #902
Approved by: mike-nguyen
Using `${sysroot}` to mean the physical storage root: We don't want to write to
`${sysroot}/etc/ostree/remotes.d`, since nothing will read it, and really
`${sysroot}` should just have `/ostree` (ideally). Today the Anaconda rpmostree
code ends up writing there. Fix this by adding a notion of "physical" sysroot.
We determine whether the path is physical by checking for `/sysroot`, which
exists in deployment roots (and there shouldn't be a `${sysroot}/sysroot`).
In order to unit test this, I added a `--sysroot` argument to `remote add`.
However, doing this better would require reworking the command line parsing for
the `remote` argument to support specifying `--repo` or `--sysroot`, and I
didn't quite want to do that yet in this patch.
Closes: https://github.com/ostreedev/ostree/issues/892
Closes: #896
Approved by: jlebon
This is prep for introducing a fd-relative `ostree_repo_new_at()`.
Previously, `ostree_repo_is_system()` compared `GFile` paths, but
there's a much simpler check we can do first - if this repository
was created via `OstreeSysroot`, it must be a system repo.
Closes: #886
Approved by: jlebon
Make it an internal, not static, API, like _ostree_repo_add_remote(). It
will be used in many of the same situations.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #875
Approved by: cgwalters
Return whether the remote already existed. This is an internal API, so
it’s not an API break. The return value will be useful in upcoming
commits for working out whether to later remove a remote again.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #875
Approved by: cgwalters
Add a name argument to the internal OstreeRemote constructor,
since this member (and several derived from it) is non-nullable,
and hence must always be set at construction time.
This changes the only call sites of the constructor to use the new API,
which is internal.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #875
Approved by: cgwalters
Follow-up to a previous patch that addressed a double-close; I
realized we already had a helper for doing "open dfd iter, do nothing
if we get ENOENT". Raise it to libotutil, and port all consumers.
Closes: #863
Approved by: jlebon
Previously it was static to ostree-repo.c. Make it usable throughout
libostree so it can be used by an upcoming commit, but also expose the
typedef and reference counting functions so that opaque OstreeRemote
pointers can be used by user code, in anticipation of exposing more of
its API publicly in future.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #832
Approved by: cgwalters
• Commit timestamps, so it’s easy to work out whether a given commit is
newer than the one we have locally
• Summary file timestamp, so it’s easy to work out whether the summary
file is more up to date than another summary file
• Summary file expiry time, so clients can work out when they should
expect the summary file to next be updated, and hence can query for
it at roughly the right time
The expiry time requires input from the user, so is currently never set
automatically. Programs using libostree can set it if they wish.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #826
Approved by: cgwalters
If we're freeing the segment, it's basically always better to use
`autoptr()`. Fewer lines, more reliable, etc.
Noticed an instance of this in the pull code while reviewing a different PR,
decided to do a grep for it and fix it tree wide.
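A minimal illustration of the pattern being converted:
```
#include <glib.h>

static void
example (gboolean condition)
{
  /* Before: manual free on every exit path (easy to leak on early return). */
  {
    char *msg = g_strdup ("hello");
    if (condition)
      {
        g_free (msg);
        return;
      }
    g_free (msg);
  }

  /* After: the compiler frees it on every exit path. */
  {
    g_autofree char *msg = g_strdup ("hello");
    if (condition)
      return;   /* msg freed automatically */
  }
}
```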
Closes: #836
Approved by: pwithnall
The keyring isn't large, so let's just fall back to copying it
rather than requiring `renameat()`.
Prep for `ostree_repo_open_at()`.
Closes: #821
Approved by: jlebon
Use the new well-known `status` key for OstreeAsyncProgress to get and
set the status atomically with other keys in an OstreeAsyncProgress
instance.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #819
Approved by: cgwalters
This will eliminate most of the potential races in progress reporting.
ostree_repo_pull_default_console_progress_changed() still calls three
getters, so there may still be races there, however.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #819
Approved by: cgwalters
I happened to glance at the top of my most recent patch and
noticed that I used a `throw_errno()` function in a non-errno place.
I scanned the patch for other instances of this but didn't find one.
Closes: #811
Approved by: jlebon
I was planning to change some of the object loading code in the
future, so here's some porting.
Note that I rewrote `_ostree_repo_has_loose_object()` since it
used an error return across multiple functions.
Honestly I'm not sure about this `TEMP_FAILURE_RETRY()` business...
in reality we're going to end up with a ton of code linked in
process that doesn't do it. Unix sucks =( But I'm keeping
what was there for consistency.
Closes: #809
Approved by: jlebon
This did a `closedir` in the `goto out` section before, but it
turns out more nicely if we follow the usual pattern of doing
the `open(O_DIRECTORY)` in the callee function and handle `ENOENT`
there.
Closes: #809
Approved by: jlebon
I was reading a strace the other day and noticed we were loading the same
`.dirmeta` object many times. Unlike the other object types, `.dirmeta` objects
don't accumulate much over time; there are only so many directory metadata types.
(Without SELinux involved it'd probably be 5-6 I'd guess offhand).
For `fedora-atomic/25/x86_64/docker-host` there are currently 34 `.dirmeta` in
the tree.
But how many times during a checkout did we load those 34 dirmeta objects?
With a quick strace:
```
$ strace -s 2048 -f -o strace.log ostree --repo=repo-build checkout -U fedora-atomic/25/x86_64/docker-host host-test-checkout
$ grep dirmeta strace.log | wc -l
7165
```
After, as you'd expect, we just loaded `34` from disk. We do
6 system calls (`openat+fstat+fstat+read+read+close`) per dirmeta,
so we dropped a total of 42780 system calls - which is about 20% of the total
system calls made.
`perf record` tells me that we're spending ~40% of our time in the kernel during
a checkout, so reducing syscall traffic helps. Though most of that appears to be
in the VFS and XFS layers for `linkat` (which isn't surprising).
So how much did perf improve? Well, on my workstation, I get a lot of
fluctuation in timing, sometimes by 30%, so this was well within the noise. But
it's well worth speeding up checkout, and I think this optimization will shine
more as we improve performance elsewhere.
Closes: #795
Approved by: jlebon
These are leftovers from the packfile code and should have been
deleted in commit: 2a0601efc7
I noticed this now since I wanted to add a new type of caching.
Closes: #795
Approved by: jlebon
`perf record ostree checkout ...` for a bare-user repo was telling
me we were spending a good 13% of our time in the depchain of `ot_lgexattrat()`.
The problem here is that traversing the `/proc` path turns out to be
somewhat expensive - there are LSM (SELinux) checks, etc.
For regular files, opening and just getting the xattr, then closing is still
quite cheap. For symlinks, we'll always need to open anyways.
This appears to shave about ~0.1 seconds off of a checkout of
`fedora-atomic/25/x86_64/docker-host` on my workstation.
Oh, and this was the last user of `ot_lgexattrat()` so we can kill it, which is
nice - the xattr code should really live in libglnx.
Closes: #796
Approved by: jlebon
I was planning to change one here, decided to do a conversion
of some of the simpler functions in this file to keep up momentum.
Closes: #776
Approved by: jlebon
This mode is similar to bare-user, but does not store the permission,
ownership (uid/gid) and xattrs in an xattr on the file objects in the
repo. Additionally it stores symlinks as symlinks rather than as
regular files+xattrs, like the bare mode. The latter is needed because
we can't store the is-symlink bit in the xattr.
This means that some metadata is lost, such as the uid. When reading a
repo like this we always report uid, gid as 0, and no xattrs, so
unless this is true in the commit the resulting repository will
not fsck correctly.
However, if the main use case of the repository is to check out with
--user-mode, then no information is lost, and the repository can
work on filesystems without xattrs (such as tmpfs).
Closes: #750
Approved by: cgwalters
There are a lot of things suboptimal about this approach, but
on the other hand we need to get our CI back up and running.
The basic approach is to - in the test suite, detect if we're on overlayfs. If
so, set a flag in the repo, which gets picked up by a few strategic places in
the core to turn on "ignore xattrs".
I also had to add a variant of this for the sysroot work.
The core problem here is while overlayfs will let us read and
see the SELinux labels, it won't let us write them.
Down the line, we should improve this so that we can selectively ignore e.g.
`security.*` attributes but not `user.*` say.
Closes: https://github.com/ostreedev/ostree/issues/758
Closes: #759
Approved by: jlebon
I've seen code in a few places that I think on balance is definitely better this
way. Some of our functions have huge variable declaration sections.
This change includes one small example where we could start using declarations
after statements.
A concern I had was - how does this interact with `__attribute__((cleanup))` and
early returns? I tested it, and AFAICS the behavior is what you'd expect - the
cleanup function isn't called if its variable isn't reachable.
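A small standalone test of that behavior (assuming GCC/Clang
`__attribute__((cleanup))` semantics):
```
#include <stdio.h>

static void
cleanup_msg (char **p)
{
  printf ("cleanup: %s\n", *p);
}

static void
demo (int bail_early)
{
  if (bail_early)
    return;   /* declaration below never reached: its cleanup never runs */

  __attribute__((cleanup (cleanup_msg))) char *msg = "reached";
  printf ("body: %s\n", msg);
}

int
main (void)
{
  demo (1);   /* prints nothing */
  demo (0);   /* prints "body: reached" then "cleanup: reached" */
  return 0;
}
```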
Closes: #718
Approved by: jlebon
It's just simpler, and I'm not sure people are going to care
much about the difference by default.
We already folded in the fallback sizes into the download totals, so folding in
the count makes things consistent; previously you could see e.g.
`3/3 parts, 100MB/150MB` and be confused.
Closes: #678
Approved by: giuseppe
There were a few bugs here.
- We need to keep track of the size of the delta parts we've already processed,
in order to make progress reliable at all in the face of interruptions. Add
a new `fetched-delta-part-size` async progress variable for this.
- The total before disregarded what we'd already downloaded, which was confusing.
Now, a progress percentage is `fetched/total`.
- Correctly handle "unknown bytes/sec" in the progress display.
However, to be fully correct we need to show the fallback objects too. That
would require tracking in the pull code when we fetch an object as a fallback
versus "normally". This would be simpler really if we could assume in a run we
were *only* processing a delta, but currently we don't do that.
Related: https://github.com/ostreedev/ostree/issues/475
Closes: #678
Approved by: giuseppe
Clarify the documentation for functions like
ostree_repo_get_remote_boolean_option(), stating what out_value will be
set to on error.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #676
Approved by: cgwalters
The libcurl backend does all the work in the main thread/loop, which
seems to starve the idle scanning worker more. With the libcurl
backend, we're a lot more likely to have at least one outstanding
metadata request.
But it can more easily transiently happen with libcurl that all of our current
fetches are content. To be accurate here, just show Estimating if we're scanning
too.
Closes: #654
Approved by: jlebon
The gzip default is 6. When I was writing this code, I chose 9 under
the assumption that for long-term archival, the extra compression was
worth it.
Turns out level 9 is really, really not worth it. Here's a run at level 9
compressing the current Fedora Atomic Host into archive:
```
ostree --repo=repo pull-local repo-build fedora-atomic/25/x86_64/docker-host
real 2m38.115s
user 2m31.210s
sys 0m3.114s
617M repo
```
And here's the new default level of 6:
```
ostree --repo=repo pull-local repo-build fedora-atomic/25/x86_64/docker-host
real 0m53.712s
user 0m43.727s
sys 0m3.601s
619M repo
619M total
```
As you can see, we run almost *three times* faster, and we take up *less
than one percent* more space.
Conclusion: Using level 9 is dumb. And here's a run at compression level 1:
```
ostree --repo=repo pull-local repo-build fedora-atomic/25/x86_64/docker-host
real 0m24.073s
user 0m17.574s
sys 0m2.636s
643M repo
643M total
```
I would argue actually many people would prefer even this for "devel" repos.
For production repos, you want static deltas anyways. (However, perhaps
we should support a model where generating a delta involves re-compressing
fallback objects with a bit stronger compression level).
Anyways, let's make everyone's life better and switch the default to 6.
Closes: #671
Approved by: jlebon
For a long time we've cached the remote configs in the repo, which
mostly makes sense for the `repo/config` file, but less sense
for `/etc/ostree/remotes.d`, because we want to support admins
interactively editing them.
One can delete the repo instance and create a new one, but that's a bit ugly.
Let's introduce an API for this so rpm-ostree can reload remotes after
admins/scripts edit them in `/etc`. We also might as well reload
any other entries in the config.
Structurally now, `ostree_repo_open()` deals with file descriptors and then
calls `ostree_repo_reload_config()`. The exception is the uncompressed cache,
which is the only configurable thing that deals with FDs. But we want to delete
that anyways.
No tests, since...we don't have a daemon in this codebase, don't want to shave
that yak just today.
Closes: #662
Approved by: jlebon
I was working on https://bugzilla.redhat.com/show_bug.cgi?id=1393545
and it was annoying that I couldn't know what the new (unsigned)
commit hash was until verification succeeded. I could pull it
manually without GPG, but then it'd be sitting in the repo.
Now:
```
Updating from: fedora-atomic:fedora-atomic/25/x86_64/docker-host
Receiving metadata objects: 0/(estimating) -/s 0 bytes
error: Commit 2fb89decd2cb5c3bd73983f0a7b35c7437f23e3aaa91698fab952bb224e46af5: GPG verification enabled, but no signatures found (use gpg-verify=false in remote config to disable)
```
Closes: #663
Approved by: giuseppe
Without the element-type annotations, bindings don't know how to handle
the elements of the hash table. Since the table is created with destroy
functions, the caller does not own the elements, so transfer container
is used.
Closes: #635
Approved by: cgwalters
ostree_object_name_serialize returns a floating ref, so sink it before
adding it to the hash table so it can properly be freed later when the
hash table is destroyed.
This is particularly a problem for pygobject, which sinks the refs on
variants as it marshals them to native python types. If the ref isn't
already sunk, then the ref count won't increase and a critical warning
will be raised when both the hash table and pygobject try to unref it.
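A sketch of the fix pattern (the hash-set construction here is illustrative;
`ostree_object_name_serialize()` and `ostree_hash_object_name()` are the real
public APIs):
```
#include <ostree.h>

static GHashTable *
new_object_set (void)
{
  /* Keys are serialized object names; the table owns one (sunk) ref each. */
  return g_hash_table_new_full (ostree_hash_object_name, g_variant_equal,
                                (GDestroyNotify) g_variant_unref, NULL);
}

static void
add_object (GHashTable *objects, const char *checksum, OstreeObjectType objtype)
{
  /* ostree_object_name_serialize() returns a floating ref; sink it so the
   * table's later unref actually frees it instead of raising a warning. */
  GVariant *key = g_variant_ref_sink (ostree_object_name_serialize (checksum, objtype));
  g_hash_table_add (objects, key);
}
```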
Closes: #635
Approved by: cgwalters
I was having this thought today about making more of the OS readonly,
and ultimately if we got to the point where all ostree operations are
through the repo and sysroot dfds, we could have rpm-ostree be the
only process holding those fds open, and have a read-only bind mount
on top.
Anyways, we're not there, likely won't be soon, but this gets us
closer to being fully fd relative.
Closes: #628
Approved by: jlebon
These are out parameters, so add the (out) annotation and switch
(nullable) to (optional) since the latter is used for the purpose of
optional out parameters.
Closes: #629
Approved by: cgwalters
glnx_make_lock_file requires that the dfd passed in survives the
lifetime of the lock. Since dfd_iter.fd gets cleaned up after the
function returns, this isn't the case. dfd_iter.fd should be equivalent
to tmpdir_dfd, since we iter on ".", and that survives past the
function, so just use that instead.
Closes: #591
Approved by: cgwalters
`-fsanitize=address` complained that the `refcount > 0` assertions
were reading without atomics. We can fix this by reworking them
to read the previous value.
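A sketch of the rework (type and names illustrative):
```
#include <glib.h>

typedef struct {
  volatile gint refcount;
  /* ... */
} OtExample;

static void
ot_example_unref (OtExample *self)
{
  /* Read the previous value atomically instead of asserting on a plain
   * (non-atomic) load of refcount, which is what ASan complained about. */
  gint prev = g_atomic_int_add (&self->refcount, -1);
  g_assert (prev > 0);
  if (prev == 1)
    g_free (self);
}
```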
Closes: #582
Approved by: jlebon
Conceptually we've been moving towards having our GPG verification
paths be per-remote. The code internally supports this, but we
didn't expose an API to use it conveniently.
This came up when trying to add a new `gpgkeypath` option, since
right now rpm-ostree manually finds keyrings for the remote, and
hence it wasn't looking at the keypath, and said "Unknown key"
in status.
Adding an API fixes this nicely.
Closes: #576
Approved by: giuseppe
For Project Atomic, we already have RPM signatures which use files in
`/etc/pki/rpm-gpg`. It's convenient to simply bind the OSTree remote
configuration to those file paths, rather than having duplicate key
data.
This does mean that we need to parse the files for verification, so we
end up importing them into the verifier's temporary keyring, which is
a bit ugly, but it's what other projects do.
Closes: https://github.com/ostreedev/ostree/issues/573
Closes: #575
Approved by: giuseppe
I was doing a chain of mirroring like A -> B -> C
And repo B had A as a remote. When I added B as
a remote to C, the summary file of B had a ref
upstream:foo/bar/baz, which caused all pulls from
B to C to fail, since the summary file is only
expected to have refs, not refspecs.
Closes: https://github.com/ostreedev/ostree/issues/561
Closes: #565
Approved by: jlebon
Found by valgrind memcheck. g_variant_new_from_bytes takes a ref to the
bytes, so we need to release the original ref.
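A sketch of the corrected ownership, using `g_autoptr` to drop our reference
(function name and variant type are illustrative):
```
#include <glib.h>

static GVariant *
bytes_to_variant (const guint8 *data, gsize len)
{
  /* g_variant_new_from_bytes() takes its own reference on the GBytes, so
   * the reference created here must still be released; previously it
   * leaked, now g_autoptr drops it. */
  g_autoptr(GBytes) bytes = g_bytes_new (data, len);
  return g_variant_ref_sink (g_variant_new_from_bytes (G_VARIANT_TYPE_VARDICT,
                                                       bytes, FALSE));
}
```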
Signed-off-by: Simon McVittie <smcv@debian.org>
Closes: #556
Approved by: cgwalters
g_variant_get_strv is (transfer container): the caller is expected to
free the array, but not the individual strings.
Leak found with valgrind memcheck.
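For illustration:
```
#include <glib.h>

static void
print_strv_member (GVariant *array_of_strings)
{
  /* (transfer container): free the array itself, but not the elements,
   * which still belong to the GVariant. g_autofree frees only the array. */
  g_autofree const char **strv = g_variant_get_strv (array_of_strings, NULL);
  for (gsize i = 0; strv[i] != NULL; i++)
    g_print ("%s\n", strv[i]);
}
```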
Signed-off-by: Simon McVittie <smcv@debian.org>
Closes: #556
Approved by: cgwalters
0 was used as an "unset" flag for tmp_dir_fd, which is technically
incorrect. For cache_dir_fd, -1 was used as the sentinel but 0
was checked for, resulting in close(-1).
Closes: #507
Approved by: cgwalters
When doing a prune, we should not try to delete objects in parent
repos, since it'll fail. There is a bigger discussion about the
semantics of `parent=` to be had, but this will fix trying to use
`ostree prune --repo=/ostree/repo/extensions/rpmostree/pkgcache`.
Closes: https://github.com/ostreedev/ostree/issues/467
Closes: #471
Approved by: jlebon
The documentation says this is ignored, implying that you should pass
NULL to it. However, the function immediately returns in this case even
though the argument isn't used anywhere.
Closes: #458
Approved by: cgwalters
I forgot to actually remove `config_file` in the previous
commit, the txn lock hasn't been used in a long time, and
for the uncompressed cache, everything uses the fd already.
Closes: #433
Approved by: giuseppe
In general we want to support "idempotency" or "state
synchronization" across interruption. If a repo is only partially
created due to a crash or whatever, it's hard for a user to know that.
Let's just make `ostree_repo_create()` idempotent. Since all we're
doing is a set of `mkdirat()` invocations, it's quite simple.
This also involved porting to fd-relative, which IMO makes the
code a lot clearer.
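A minimal sketch of the idempotent pattern (helper name illustrative):
```
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Idempotent directory creation: EEXIST is success, anything else is an
 * error.  Repeated over the repo subdirectories, this makes repo creation
 * safe to re-run after an interruption. */
static bool
ensure_dir_at (int dfd, const char *path, mode_t mode)
{
  if (mkdirat (dfd, path, mode) < 0 && errno != EEXIST)
    {
      fprintf (stderr, "mkdirat(%s): %s\n", path, strerror (errno));
      return false;
    }
  return true;
}
```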
Closes: #422
Approved by: 14rcole
In flatpak I was using a parent repo, and it failed to update
with ENOENT when dispatching a set-read-source opcode, because the
object it referenced was in the parent repo.
This fixes that by making _ostree_repo_read_bare_fd look
at parent_repo.
Closes: #362
Approved by: cgwalters
Import `gs_file_enumerator_iterate()` for the next six months or
so...after RHEL 7.3 is released I'm strongly considering hard
requiring 2.46 or so.
Likely at some point we should figure out how to share more "glib
backport" code with NetworkManager at least.
Closes: #341
Approved by: jlebon
The previous code was subject to a divide by zero if less than a
second had passed. Rework it so we only do the divide if more than a
second has passed.
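A sketch of the guarded calculation (names illustrative):
```
#include <glib.h>

/* Only compute a rate once at least a second has elapsed; otherwise report
 * "unknown" instead of dividing by zero. */
static guint64
bytes_per_sec (guint64 bytes_transferred, guint64 start_usec, guint64 now_usec)
{
  guint64 elapsed_secs = (now_usec - start_usec) / G_USEC_PER_SEC;
  if (elapsed_secs == 0)
    return 0;   /* caller treats 0 as "unknown bytes/sec" */
  return bytes_transferred / elapsed_secs;
}
```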
Closes: #349
Approved by: Mathnerd314
I find the "-z2" is really a long ago relic of the past when I changed
the format. We no longer have anything to do with the original
`archive`, so let's start allowing people to type `--mode=archive`
which just looks saner.
At some point later I'll update the docs too, but it'll be an annoying
transition period as we'll have to say "On older OSTree, use -z2" etc.
Closes: #346
Approved by: giuseppe
Just noticed this while reading some code; we didn't have many manual
`out: close()` bits left, and this pushes us over the edge to autocleanup
almost everywhere.
Closes: #332
Approved by: jlebon
As the docs say, `g_regex_match()` still allocates a match even if it
returns `FALSE`. Using `g_autoptr` is just plain better.
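For illustration (function and group index are illustrative):
```
#include <glib.h>

static gboolean
parse_ref (GRegex *regex, const char *ref)
{
  /* g_regex_match() allocates a GMatchInfo even when it returns FALSE,
   * so tie its lifetime to the scope instead of freeing it manually on
   * every path. */
  g_autoptr(GMatchInfo) match = NULL;
  if (!g_regex_match (regex, ref, 0, &match))
    return FALSE;
  g_autofree char *group = g_match_info_fetch (match, 1);
  return group != NULL;
}
```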
Closes: #292
Approved by: krnowak
This changes around a few things that didn't work for me:
* Section names seem to be ostree-* instead of libostree-*
* Also XML files are ostree-* (they didn't show up at all)
- gtk-doc doesn't seem to parse const _OSTREE_PUBLIC correctly
* pull documentation is now on the actual functions rather than stubs
* Update gitignore with some more files
And there are some changes to make gtk-doc give fewer warnings (not finished).
Closes: #327
Approved by: cgwalters
This centralizes the ifdef's in one file, which will make it
easier to write new pull backends.
ostree-repo-pull.c is now built unconditionally
Closes: #327
Approved by: cgwalters
1 is a better choice than 0 because some programs use 0
as a special value; for example, GNU Tar warns of an
"implausibly old timestamp" with 0.
Closes: #330
Approved by: cgwalters
This kills another GSystem consumer...I think down the line I'd like
to do something like "detect whether the file is > 1k; if so, mmap,
otherwise just readall()" so we can use this helper in more places.
Closes: #319
Approved by: jlebon
We have a lot of "allow_noent" type wrapper functions since
a common pattern is to allow files to not exist, but still
throw cleanly on other issues.
This is another instance of that, and cleans up duplicated error
handling code.
Part of this is prep for moving away from `GFile` consumers.
Closes: #319
Approved by: jlebon
On 32-bit systems the modifier for printing 64-bit values should be
%llu instead of %lu. Just use the appropriate macros that do the right
thing.
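For example:
```
#include <glib.h>
#include <inttypes.h>
#include <stdio.h>

static void
print_sizes (guint64 fetched, uint64_t total)
{
  /* GLib's macro picks the right modifier for the platform... */
  g_print ("fetched %" G_GUINT64_FORMAT " bytes\n", fetched);
  /* ...as does C99's PRIu64 for plain stdio. */
  printf ("total %" PRIu64 " bytes\n", total);
}
```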
Closes: #329
Approved by: giuseppe
The "no atime" thing was mostly useful only before "relative atime"
updates landed. Users who care about performance will turn it off
entirely anyways.
Closes: #316
Approved by: jlebon
In addition to generic fd relative porting,
this is a necessary preparatory step for libglnx porting, because
when I tried to use `g_mapped_file_new` I hit an issue with
it using a different error domain from GIO.
Thankfully libglnx consistently uses the GIO error domain, and here
we're now using it for the `open()` call.
Closes: #317
Approved by: jlebon
Bug 765429 said that not having a time estimate can be annoying
when working with large pulls.
There isn't any complex time estimation logic here - we just take
the number of bytes remaining and do a linear projection of
the bytes per second rate at the current point in time.
Closes: #318
Approved by: cgwalters
This can be useful for validating the 3rd party data that is put in
the extensions directory and is signed with the same keys as commits
or the summary file.
Closes: #310
Approved by: cgwalters
Moved setting up a GPG verifier out to a separate function, as I would
like to use it for the any-data verification function in the following
commit.
Closes: #310
Approved by: cgwalters
I plan to add a function for verifying any data which may return the
error about lack of trusted signatures, so let's avoid the redundancy
and put the check in the separate function.
Closes: #310
Approved by: cgwalters
Apparently I got the bracketing wrong in
862e6ecdcc58f025696b1394adfc0fcf7322df23:
src/libostree/ostree-repo.c: In function 'ostree_repo_delete_object':
src/libostree/ostree-repo.c:3538:11: warning: missing braces around
initializer [-Wmissing-braces]
g_auto(GVariantBuilder) builder = {0,};
Closes: #298
Approved by: cgwalters
This tries to avoid leaking GVariantBuilders and GVariants in some
situations. The leaks were usually happening when some error occurred
or because of an unclear variant ownership situation.
The former is mostly about making sure that g_variant_builder_clear is
called on builders that didn't finish their variant building process.
The latter is surely more work - sometimes the result of
g_variant_builder_end() should not be passed directly to a function,
but rather stored in a g_autoptr(GVariant), sunk and then passed to a
function. IMO, with an advent of g_autoptr, GVariants should be always
sunk instead of relying on some receiver function sinking it. This
would make an easy-to-follow policy of always sinking your
variants. Functions could then assume that the passed variant is
already sunk. These leaks are still happening in commands, but they
are less harmful, since that code will not be used by some daemon as a
library routine.
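A sketch of the "always sink" policy described above (names illustrative):
```
#include <glib.h>

static void
consume_metadata (GVariant *metadata)
{
  g_autofree char *s = g_variant_print (metadata, TRUE);
  g_print ("%s\n", s);
}

static void
build_and_pass (void)
{
  GVariantBuilder builder;
  g_variant_builder_init (&builder, G_VARIANT_TYPE ("a{sv}"));
  g_variant_builder_add (&builder, "{sv}", "version",
                         g_variant_new_string ("1"));

  /* Sink the result explicitly so ownership is unambiguous: we hold one
   * full reference, and the consumer can take its own if it needs one. */
  g_autoptr(GVariant) metadata =
    g_variant_ref_sink (g_variant_builder_end (&builder));
  consume_metadata (metadata);
}
```
On an error path that bails out before `g_variant_builder_end()`,
`g_variant_builder_clear()` is what prevents the builder itself from leaking.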
Closes: #291
Approved by: cgwalters
This adds a _with_options variant of the
ostree_repo_remote_fetch_summary function, so we can tell the fetcher
to use a specific URL instead of taking it from the remote config.
Closes: #290
Approved by: cgwalters
The base URI created in this line was always (sans erroneous
situations) overwritten in the code block below without freeing it
previously, so it leaked.
Closes: #290
Approved by: cgwalters
It's very useful for third-party applications to have someplace to store
their data guaranteed to be on the same device as the repo (thus
ensuring hardlinks) while still being shielded away from any of OSTree's
timely garbage collections.
We create a new "extensions/" subdirectory where apps can include
whatever they wish in "extensions/myapp/". This subdirectory is
completely unmanaged by ostree.
NB: I didn't bother making it a member of the OstreeRepo proper since we
don't really use it for anything else yet.
Closes: #286
Approved by: cgwalters
I ended up deciding to move this one into libglnx, seems like
something other libglnx-using software might want to do, even though
xdg-app doesn't right now.
Closes: #282
Approved by: jlebon
We were arbitrarily only deleting content after exactly one day. Some
use cases may want something else; make it configurable.
Closes: #170
Approved by: jlebon
We had a policy of cleaning up all files in `$repo/tmp` older
than one day, but we should really clean up previous bootid staging
directories too, as they can potentially take up a lot of disk space.
https://bugzilla.gnome.org/show_bug.cgi?id=760531
Closes: #170
Approved by: jlebon
Setting this causes commit to error out. There are other ways we
could do this in a more sophisticated fashion, such as via SystemTap,
etc. But this has low-tech applicability and works as non-root.
The reason I'm adding this is so that we can add test cases for
cleanup of the `tmp/staging-` directory.
Closes: #170
Approved by: jlebon
I have a cache drive I often mount read-only, and the previous commit
for opening `tmp/cache` broke since `errno == EROFS`, not `EPERM`.
It turns out we already had the concept of a "writable" repo, so just
piggy back off that.
Closes: #281
Approved by: giuseppe
To GLnxConsoleRef. There were some subtleties here, for example we
used to reference `GSConsole` inside the progress changed function,
which at first seems like an ABI hazard, because e.g. rpm-ostree or
xdg-app could still be passing a `GSConsole` instance there. Luckily,
it turns out to be compatible to just start calling libglnx here.
Another issue was that due to libglnx's use of the cleanup function,
we needed to ensure we always called `ostree_async_progress_finish()`
*before* the cleanup function was invoked.
Closes: #280
Approved by: giuseppe
Consider the case where we have a repo with a parent, the child repo has a
remote called "foo", but some option is unset. Then, when we look up
the parent repo for a value before using the default, we will fail
because the parent doesn't have the "foo" remote. As soon as we find the
requested remote at some point in the hierarchy, we need to ignore further
errors and use the default value.
Closes: #274
Approved by: giuseppe
We looked for and locked old temporary directories so we could
reuse them if they weren't in use. However, once we found one
we could reuse, we didn't stop iterating and eventually
reached the end. This means we could lock multiple dirs.
Closes: #273
Approved by: giuseppe
There was some leftover intermediate cruft here I noticed
while reviewing another patch:
- We had an output `GFile*` that was never used
- We required the caller to allocate the loose pathbuf, but
none of them ever reused it
- We had an extra intermediate function
Also while looking at this, I'm now uncertain whether some of the
callers of `_ostree_repo_has_loose_object` should really be invoking
`ostree_repo_has_object()`, but let's leave that aside for now.
Closes: #272
Approved by: alexlarsson
This allows you to replace the default
$sysroot/$sysconfdir/ostree/remotes.d string value, and to use a similar
feature for repos that are not the system repo.
In particular, this allows us to support /etc/xdg-app/remotes.d for
xdg-app.
Closes: #247
Approved by: cgwalters
Otherwise we get undefined behaviour if the client didn't explicitly set
any flags.
Also, add documentation for all the other options supported by
ostree_repo_pull_with_options().
Closes: #252
Approved by: cgwalters
These are useful for ostree users (like xdg-app) that have custom
options for remotes. In particular they are useful when we later make them
all respect self->parent_repo.
Closes: #236
Approved by: cgwalters
⚠️ There is a notable spiked pit trap here around
`posix_fallocate()` and `errno`. This has bit other projects,
see e.g.
7bb87460e6
Otherwise the port was straightforward.
It used to be allowed to run something like "ostree remote refs" on
a read-only (e.g. system) repo. However, the summary cache caused that to
break. This commit just makes it not save the cache if we get some kind
of permission error when writing it. It'll still work, even without the
cache.
https://bugzilla.gnome.org/show_bug.cgi?id=763855
It allows an optimization to skip the download of the summary file
if its .sig file is unchanged.
Downloading the .sig file is much cheaper than downloading the summary
file from repositories with many branches.
https://bugzilla.gnome.org/show_bug.cgi?id=762973
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
I plan to use this in rpm-ostree at least for two reasons:
- To find the mtime on the repo
- To use the tmp/ directory to stage content (but we should eventually
add a better API)
I don't know why we didn't do this a long time ago. This extends the
pull API to allow grabbing a specific commit, and will set the branch
to it. There's some support for this in the deploy engine, but there
are a lot of reasons to support it for raw pulls (such as subset
mirroring cases).
In fact I'm thinking we should also have the override-version logic
here too.
NOTE: One thing I debated here is inventing a new syntax on the
command line. Git doesn't seem to have this functionality (probably
because it'd be rarely used). The '@' character at least doesn't
conflict with anything.
Anyways, I wanted this for some other test cases. Without this,
writing tests that go between different commits is more awkward as one
must generate the content in one repo, then pull downstream, then
generate more content, then pull again. But now I can just keep track
of commit IDs and do exactly what I want without synchronizing the
tests.
I'd like to incrementally convert all of `ostree-repo*.c` to
fd-relative usage, so that we can sanely introduce
`ostree_repo_new_at()` which doesn't involve GFile.
This one is medium risk, but passes the test suite.
Originally, a lot of the `fsync()` calls here were added for the
wrong reason - I was chasing a bug that ended up being the extlinux
bootloader not parsing 64 bit ext4 filesystems. But since it looked
like corruption, I tried adding a lot more `fsync()` calls.
All we should have to do is use `syncfs()`. If that doesn't work,
it's a kernel bug.
I'm making this change because skipping the individual fsyncs can be a
major performance win - it's easier for the FS to optimize, we do more
in parallel, etc.
https://bugzilla.gnome.org/show_bug.cgi?id=757117
The tmp directory is lazily created for each fetcher instance, since
it may require superuser permissions and some instances only need
_ostree_fetcher_request_uri_to_membuf() which keeps everything in
memory buffers.
This way two pulls will not use the same tmpdir and accidentally
overwrite each other. However, consecutive OstreeFetchers will reuse
the tmpdirs, so that we can properly resume downloading large objects.
https://bugzilla.gnome.org/show_bug.cgi?id=757611
Concurrent pulls break since we're sharing the staging directory for
all transactions in the repo. This makes us use a per-transaction directory.
However, in order for resumes to work, we first look for existing
staging directories and try to acquire an exclusive lock on them. If
we can't find any staging directory, or they are all already locked,
then we create a new one.
https://bugzilla.gnome.org/show_bug.cgi?id=757611
This creates a subdirectory of the tmp dir with a selected prefix,
and takes a lockfile to ensure that nobody else is using the same directory.
However, if a directory with the same prefix already exists and is
not locked, that is used instead.
The latter is useful if you want to support some kind of resumed operation
on the tmpdir.
Reused dirs are also touched.
https://bugzilla.gnome.org/show_bug.cgi?id=757611
When a commit is deleted and the repo is configured to use tombstone
commits, create one. Delete the tombstone file only if the commit is
pulled again.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Had a rare situation where I had no libsoup development files, so I
took the opportunity to fix the build errors. Ugly, but works now.
Would be nice if libsoup could be a hard dependency since we rarely
ever test a configuration without it.
xdg-app was hanging for me with v2015.8, but worked with v2015.7.
I narrowed things down to the GMainLoop/context commit, in which
we started pushing a temporary main context for synchronous
requests internally.
That's never really going to work with libsoup - there needs
to be a single main context which works on the socket. Furthermore,
clients couldn't get progress messages that way.
For *other* internal uses where we added APIs that talk to the remote
repo, we cleanly push a temporary main context.
(Note that I kind of snuck in a change here around the GError handling
in pulls that isn't strictly related but came up in testing)
First of all, what we were doing with having GMainLoop in the internal
APIs is wrong. Synchronous APIs should always create their own main
context and not iterate the caller's. Doing the latter creates
potential for evil reentrancy issues. Sync API should block, async
API is for not blocking.
Now that's out of the way, fix the pull code to do the clean
```
while (termination_condition (state))
g_main_context_iteration (mainctx, TRUE);
```
model for looping. This is a lot easier to understand and ultimately
more reliable than having other code call `g_main_loop_quit()`, as the
loop condition is in exactly one place.
We can also remove the idle source which only fired once.
Note we have to add a hack here to discard the synchronous session and
create a new one which we only use async.
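A self-contained sketch of that pattern (the timeout source stands in for the
real async fetch):
```
#include <glib.h>

static gboolean
on_done (gpointer user_data)
{
  *(gboolean *) user_data = TRUE;
  return G_SOURCE_REMOVE;
}

static void
pull_sync (void)
{
  /* A synchronous API owns its own context; it never iterates the caller's. */
  g_autoptr(GMainContext) ctx = g_main_context_new ();
  g_main_context_push_thread_default (ctx);

  gboolean done = FALSE;
  /* Stand-in for the real async operation: a source attached to the
   * private context that flips the completion flag. */
  g_autoptr(GSource) source = g_timeout_source_new (100);
  g_source_set_callback (source, on_done, &done, NULL);
  g_source_attach (source, ctx);

  /* The loop condition lives in exactly one place; nothing calls
   * g_main_loop_quit(). */
  while (!done)
    g_main_context_iteration (ctx, TRUE);

  g_main_context_pop_thread_default (ctx);
}
```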
https://bugzilla.gnome.org/show_bug.cgi?id=753336