This reverts commit f1d9196076.
Since libglnx.h does not get installed, it can't be included in
ostree-autocleanups.h, which is included by ostree.h.
Closes: #1615
Approved by: jlebon
Currently the API that allows P2P operations (e.g. pulling an ostree ref
from a LAN or USB source) is hidden behind the configure flag
--enable-experimental-api. This commit makes the API public and makes
that flag essentially a no-op (leaving it in place in case we want to
use it again in the future). The P2P API has been tested over the last
several months and proven to work.
This means that since we're no longer using the "experimental" feature
flag, P2P builds of Flatpak will fail when using versions of OSTree from
this commit onwards, until Flatpak is patched in the near future. If you
want to build Flatpak < 0.11.8 with P2P enabled and link against OSTree
2018.6, you'll have to patch Flatpak. However, since Flatpak won't yet
have a hard dependency on OSTree 2018.6, it needs a new way to determine
if the P2P API in OSTree is available, so this commit adds a "p2p"
feature flag. This way the feature set is more semantically correct than
if we had continued to use the "experimental" feature flag.
In addition to making the P2P API public, this commit makes the P2P unit
tests run by default, removes the f27-experimental CI instance that's no
longer needed, changes a few man pages to reflect the changes, and
updates the bash completion script to accept the new commands and
options.
Closes: #1596
Approved by: cgwalters
This commit includes libglnx.h in ostree-autocleanups.h, so we get the
g_autoptr backports wherever they're needed. Also, remove the "#include
libglnx.h" lines elsewhere that are no longer needed.
Closes: #1596
Approved by: cgwalters
This should give a more insightful error message if the user provides
a UID which is present on multiple keys.
This happens if you have an old key in your keyring which you are not
actively using any more, e.g. because it is too old. You keep the old
keys in your keyring, though, because you still want to be able to read
old email encrypted to them.
The gpgme function used by ostree right now complains if a UID is found
on multiple keys:
https://www.gnupg.org/documentation/manuals/gpgme/Listing-Keys.html#index-gpgme_005fget_005fkey
The API used is too simple for this use case.
Note that it would be nicer if ostree picked the only valid signing key out
of the available keys rather than relying on the simplistic gpgme_get_key
function. It would be nicer still, of course, if gpgme offered such a function.
Closes: #1579
Approved by: cgwalters
The code has been sitting around for a while but since I disabled
it by default, I doubt anyone is really using it or relying on it.
This patch turns on locking by default, and also drops the
API, which was only public in the experimental-API builds.
Conceptually these are two distinct things, and we
may actually want to split up the patches.
I don't think this will break anyone, but it's hard to say for sure.
It's also going to be hard to find out until we actually release,
I suspect...
But anyone who is broken should be able to add `locking=false` into
their repo config. On the flip side Endless has been shipping with
this enabled and it is reported to help.
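For reference, the opt-out would look something like this in the repo's
`config` file (a sketch; `locking` sits alongside the other `[core]` options):
```
[core]
# opt out of the new default repository locking
locking=false
```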
The reason to drop the APIs: I'm a bit concerned about the interactions over time
between libostree's use of the API and any apps that start using it.
For example, if an app takes a SHARED lock in its code, and later we
internally decide to temporarily grab an `EXCLUSIVE` lock, but the
app has a second thread/process that already holds an `EXCLUSIVE` lock and
is waiting on the first bit of code, then we could deadlock.
I can't think of a real-world situation where this would happen
yet, though.
We are likely to in the future have say `fsck` take an external lock,
`checkout` grab a shared one, etc.
Closes: #1555
Approved by: jlebon
Ensure that the metadata object is built up with the signatures from all keys
passed to ostree_repo_add_gpg_signature_summary(). Previously only the signature
from the last key would end up in the metadata.
Closes: #1488
Closes: #1489
Approved by: jlebon
When a new object is added to the repository, create a
$PAYLOAD-SHA256.payload-link symlink file as well. The target of the
symlink is the checksum of the object that was added to the repository.
Whenever we add a new object file, in addition to checking whether a file
with the same checksum is already present, we also check whether an object
with the same payload is already in the repository.
If a file with the same payload is already present in the repository, we
copy it with `glnx_regfile_copy_bytes`, which internally attempts to
create a reflink (ioctl (..., FICLONE, ..)) to the target file if the
file system supports it. This makes it possible to have objects that share
the payload but have different inodes and xattrs.
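For illustration only (this is not the libglnx code itself), the reflink
attempt boils down to a single ioctl, with a fallback to a plain byte copy
when the filesystem doesn't support it:
```
#include <sys/ioctl.h>
#include <linux/fs.h>   /* FICLONE */
#include <errno.h>
#include <stdio.h>

/* Try to make dest_fd share payload extents with src_fd.  On filesystems
 * without reflink support this fails with EOPNOTSUPP and the caller falls
 * back to copying the bytes. */
static int
try_reflink (int src_fd, int dest_fd)
{
  if (ioctl (dest_fd, FICLONE, src_fd) < 0)
    {
      if (errno == EOPNOTSUPP || errno == EXDEV)
        return 0;   /* not supported; fall back to a regular copy */
      perror ("FICLONE");
      return -1;
    }
  return 1;         /* reflink created */
}
```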
By default the payload-link-threshold value is G_MAXUINT64, which disables
the feature.
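So enabling it means lowering the threshold in the repo config; a sketch
(key name as described above, assumed to live in `[core]`):
```
[core]
# create .payload-link files for objects of at least 100 bytes
payload-link-threshold=100
```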
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Closes: #1443
Approved by: cgwalters
It will be used by successive commits to keep track of the payload
checksum for objects stored in the repository.
The goal is that files having the same payload but different xattrs can
take advantage of reflinks where supported.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Closes: #1443
Approved by: cgwalters
The _ostree_repo_get_remote() and _ostree_repo_get_remote_inherited()
methods transfer ownership of the returned OstreeRemote to the caller,
so this commit fixes a few call sites that weren't properly freeing it.
Closes: #1478
Approved by: cgwalters
Having the `uncompressed-object-cache` directory in `archive` repos by default
is clutter; the functionality should be considered deprecated.
Now we only create the directory if we're doing a checkout with the cache
enabled.
Closes: #1446
Approved by: jlebon
The SPDX License List is a list of (common) open source
licenses that can be referred to by a “short identifier”.
It has several advantages compared to the common "license header texts"
usually found in source files.
Some of the advantages:
* It is precise; there is no ambiguity due to variations in license header
text
* It is language neutral
* It is easy to machine process
* It is concise
* It is simple and can be used without much cost in interpreted
environments like JavaScript, etc.
* An SPDX license identifier is immutable.
* It provides simple guidance for developers who want to make sure the
license for their code is respected
See http://spdx.org for further reading.
Signed-off-by: Marcus Folkesson <marcus.folkesson@gmail.com>
Closes: #1439
Approved by: cgwalters
Previously we were doing e.g. `ot_util_filename_validate()` specifically inline
in dirtree objects, but only *after* writing them into the staging directory (by
default). In (non-default) cases such as not using a transaction, such an object
could be written directly into the repo.
A notable gap here is that `pull-local --untrusted` was *not* doing
this verification, just checksums. We harden that (and also the
static delta writing path, really *everything* that calls
`ostree_repo_write_metadata()`) to also do "structure" validation,
which includes path traversal checks. Basically, let's try hard
to avoid having badly structured objects even in the repo.
One thing that sucks in this patch is that we need to allocate a "bounce buffer"
for metadata in the static delta path, because GVariant imposes alignment
requirements, which I screwed up and didn't fulfill when designing deltas. It
actually didn't matter before because we weren't parsing them, but now we are.
In theory we could check alignment, but... eh, not worth it, at least not until
we change the delta compiler to emit aligned metadata, which may actually be
quite tricky. (Big picture, I doubt this really matters much right now,
but I'm not going to pull out a profiler yet for this.)
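For context, the bounce buffer amounts to copying the unaligned bytes into a
fresh heap allocation (which is aligned for any GVariant type) before
constructing the variant; a rough sketch of the idea, not the literal code in
this patch:
```
#include <glib.h>
#include <string.h>

/* Copy possibly-unaligned metadata bytes into a new allocation so that
 * g_variant_new_from_data() sees suitably aligned memory. */
static GVariant *
variant_from_unaligned (const GVariantType *type,
                        const guint8       *data,
                        gsize               len)
{
  guint8 *copy = g_malloc (len);
  memcpy (copy, data, len);
  /* GVariant takes ownership of the copy and frees it with g_free(). */
  return g_variant_ref_sink (g_variant_new_from_data (type, copy, len,
                                                      TRUE, g_free, copy));
}
```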
The pull test was extended to check we didn't even write a dirtree
with path traversal into the staging directory.
There's a bit of code motion in extracting
`_ostree_validate_structureof_metadata()` from `fsck_metadata_object()`.
Then `_ostree_verify_metadata_object()` builds on that to do checksum
verification too.
Closes: #1412
Approved by: jlebon
I want some time to play with this more with different callers and work through
test scenarios. Let's disable the locking by default for now, but make it easy
to enable.
Closes: #1375
Approved by: jlebon
A while ago I did `truncate -s 0 /path/to/repo/00/123.commit`, and expected a
checksum error, but I actually got a validation error due to us loading the
commit into a variant and trying to parse out the parent checksum, etc.
I first started by changing the `load_and_fsck_one_object()` function to
checksum before loading, but the problem is that we do a traversal of all objects
first. Fixing this is going to require an `OSTREE_REPO_COMMIT_TRAVERSE_FLAG_FSCK`
or something.
In the meantime at least though, let's add a public API to fsck a single object
which *does* checksum cleanly before parsing the object, and change the `fsck`
command to use it.
We then change the fsck binary to do this while iterating over the refs
and finding the commit object. This way we'll at least do checksum verification
first for commit objects, even if not for dirtree/dirmeta objects.
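A sketch of what the new per-object entry point looks like from the fsck path
(assuming the public API is `ostree_repo_fsck_object()` with the usual
objtype/checksum arguments):
```
#include <ostree.h>

/* Verify a single commit object; the checksum is verified over the raw
 * object data before any parsing happens. */
static gboolean
fsck_one_commit (OstreeRepo   *repo,
                 const char   *checksum,
                 GCancellable *cancellable,
                 GError      **error)
{
  if (!ostree_repo_fsck_object (repo, OSTREE_OBJECT_TYPE_COMMIT,
                                checksum, cancellable, error))
    {
      g_prefix_error (error, "fsck commit %s: ", checksum);
      return FALSE;
    }
  return TRUE;
}
```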
Closes: #1364
Approved by: jlebon
This commit fixes an infinite loop that happens if you try to list the
remotes of a repo that has a parent repo set. It also adds a unit test
to ensure the right behavior, which is that both the child remotes and
parent remotes are listed.
Closes: #1366
Approved by: cgwalters
Define an auto cleanup handler for use with repo locking. This is based
on the existing auto transaction cleanup. A wrapper for
ostree_repo_lock_push() is added with it. The intended usage is like so:
  g_autoptr(OstreeRepoAutoLock) lock = NULL;
  lock = ostree_repo_auto_lock_push (repo, lock_type, cancellable, error);
  if (!lock)
    return FALSE;
The functions and type are marked to be skipped by introspection since I
can't see them being usable from bindings.
Closes: #1343
Approved by: cgwalters
Currently ostree has no method of guarding against concurrent pruning.
When there are multiple repo writers, it's possible to have a pull or
commit race against a prune and end up with missing objects.
This adds a file based repo locking mechanism. The intention is to take
a shared lock when writing objects and an exclusive lock when deleting
them. In order to make use of the locking throughout the library in a
fine grained fashion, the lock acts recursively with a stack of lock
states. If the lock becomes exclusive, it will stay in that state until
the stack is unwound past the initial exclusive push. The file locking
is similar to GLnxLockFile in that it uses open file descriptor locks
but falls back to flock when needed.
The lock also attempts to be thread safe by storing the lock state in
thread local storage with GPrivate. This means that each thread will
have an independent lock for each repository it opens. There are some
drawbacks to that, but it seemed impossible to manage the lock state
coherently in the face of multithreaded access.
The API is a push/pop interface in accordance with the recursive nature
of the locking. The push interface uses an enum that's translated to
LOCK_SH or LOCK_EX as needed. Both interfaces use an internal timeout
field to decide whether to manage the lock in a blocking or non-blocking
fashion. The intention is to allow ostree applications as well as
administrators to control this timeout. For now, the default is a 30
second timeout.
Note that the timeout is handled synchronously in the calling thread, since the
lock is maintained in thread local storage. I.e., the thread that acquires
the lock needs to be the same thread that runs the operation. There may
be a way to offer an asynchronous version, but it's not clear exactly
how that would work since it would likely involve a separate thread that
invokes a callback when the locking operation completes.
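A rough sketch of the push/pop pattern described here (the enum value names and
the exact signatures are assumptions based on the description; only the
push/pop shape and the shared/exclusive distinction come from this commit):
```
/* Sketch: take a shared lock around a write, then unwind one level of
 * the recursive lock stack. */
static gboolean
write_with_shared_lock (OstreeRepo   *repo,
                        GCancellable *cancellable,
                        GError      **error)
{
  if (!ostree_repo_lock_push (repo, OSTREE_REPO_LOCK_SHARED, cancellable, error))
    return FALSE;

  /* ... write objects here; a prune taking the exclusive lock will block
   * until every shared holder has popped ... */

  /* Pop pairs with the push above; the lock is only released once the
   * recursive stack is fully unwound. */
  return ostree_repo_lock_pop (repo, cancellable, error);
}
```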
https://bugzilla.gnome.org/show_bug.cgi?id=759442
Closes: #1343
Approved by: cgwalters
I was getting a bare `error: Creating temp file: No such file or directory` when
debugging `test-concurrency.py`; with this I get
`error: Writing content object: Creating temp file: No such file or directory`
which helps me pin it down.
Closes: #1343
Approved by: cgwalters
For rpm-ostree I'd like to do importing in parallel with threads; the code is
*almost* ready for that except today it calls
`ostree_repo_transaction_set_ref()`.
Looking at the code, there's really a "transaction" struct here,
not just stats. Let's lift that struct out, and move the refs
into it under the existing lock.
Clarify the documentation around multithreading for various functions.
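Roughly the shape of the change (a sketch, not the actual struct definition):
```
/* Sketch: per-transaction state grouped into one struct, guarded by the
 * existing transaction lock, instead of bare stats plus scattered fields. */
typedef struct {
  gboolean initialized;
  OstreeRepoTransactionStats stats;
  GHashTable *refs;   /* refspec (utf8) -> checksum (utf8) */
} OstreeRepoTxn;
```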
Closes: #1358
Approved by: jlebon
This squashes the last race condition I was actively hitting while running
`test-concurrency.py` in a loop. The race is when process A finds a tmpdir to
reuse, and goes to lock it. Meanwhile process B deletes it and unlocks the lock.
Process A then succeeds at grabbing a lock, but the tmpdir is deleted.
Closes: #1352
Approved by: dbnicholson
If a newly allocated tmpdir can't be locked, set initialized to FALSE so
that glnx_tmpdir_cleanup doesn't delete it when new_tmpdir goes out of
scope.
Closes: #1346
Approved by: cgwalters
Another tmpdir user may have deleted an existing tmpdir between the time
the current user called readdir and tried to open it.
Closes: #1346
Approved by: cgwalters
This makes the code nicer too. Properly unit testing this, though, really wants
a whole set of infrastructure around parent repos... but we do have coverage of the
non-parent path in the current pull tests.
Closes: https://github.com/ostreedev/ostree/issues/1306
Closes: #1308
Approved by: alexlarsson
This ends up a lot better IMO. This commit is *mostly* just
`s/glnx_close_fd/glnx_autofd/`, but there are also a number of hunks like:
```
- if (self->sysroot_fd != -1)
- {
- (void) close (self->sysroot_fd);
- self->sysroot_fd = -1;
- }
+ glnx_close_fd (&self->sysroot_fd);
```
Update submodule: libglnx
Closes: #1259
Approved by: jlebon
Buried in this large patch is a logical fix:
```
- if (!map)
- return glnx_throw_errno_prefix (error, "mmap");
+ if (map == (void*)-1)
+ return glnx_null_throw_errno_prefix (error, "mmap");
```
Which would have helped me debug another patch I was working
on. But it turns out that actually correctly checking for
errors from `mmap()` triggers lots of other bugs - basically
because we sometimes handle zero-length variants (in detached
metadata). When we start actually returning errors due to
this, things break. (It wasn't a problem in practice before
because most things looked at the zero size, not the data).
Anyways, there's a bigger picture issue here - a while ago
we made a fix to use `mmap()` for reading metadata from disk
only if it was large enough (i.e. `>16k`). But that didn't
help the various other paths in the pull code and elsewhere that were
directly doing the `mmap()`.
Fix this by having a proper low level fs helper that does "read all data from
fd+offset into GBytes", which handles the size check. Then the `GVariant` bits
are just a clean layer on top of this. (At the small cost of an additional
allocation)
Side note: I had to remind myself, but the reason we can't just use
`GMappedFile` here is it doesn't support passing an offset into `mmap()`.
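A minimal sketch of what such a helper looks like (the name and the small-object
path only; the real helper also has the `mmap()` branch for regions over the
size threshold, with the `MAP_FAILED` check fixed above):
```
#include <gio/gio.h>
#include <unistd.h>
#include <errno.h>

/* Hypothetical helper: read len bytes at offset from fd into a GBytes. */
static GBytes *
read_region_to_bytes (int fd, off_t offset, gsize len, GError **error)
{
  g_autofree guint8 *buf = g_malloc (len);
  gsize bytes_read = 0;

  while (bytes_read < len)
    {
      ssize_t n = pread (fd, buf + bytes_read, len - bytes_read,
                         offset + bytes_read);
      if (n < 0)
        {
          if (errno == EINTR)
            continue;
          g_set_error (error, G_IO_ERROR, g_io_error_from_errno (errno),
                       "pread: %s", g_strerror (errno));
          return NULL;
        }
      if (n == 0)
        {
          g_set_error (error, G_IO_ERROR, G_IO_ERROR_PARTIAL_INPUT,
                       "Unexpected EOF reading metadata");
          return NULL;
        }
      bytes_read += n;
    }

  return g_bytes_new_take (g_steal_pointer (&buf), len);
}
```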
Closes: #1251
Approved by: jlebon
This commit adds debug output whenever libostree reads GPG keys, which
can come from different locations in the file system. This is especially
helpful in debugging "GPG signatures found, but none are in trusted
keyring" errors, which in my case was caused by OSTree looking in
/usr/local/share/ostree/trusted.gpg.d/ rather than
/usr/share/ostree/trusted.gpg.d/.
Closes: #1241
Approved by: cgwalters
In particular I'd like to get the copy fix in, since it might affect users for
the keyring bits.
Update submodule: libglnx
Closes: #1225
Approved by: jlebon
For the old `OSTREE_REPO_MODE_ARCHIVE_Z2`. Use it mostly tree-wide,
except for the repo finder tests (to avoid conflicting with
some outstanding PRs).
I just noted another user of it coming in via some of those tests and wanted
to do a cleanup.
Closes: #1209
Approved by: jlebon
We added a `.dir-locals.el` in commit 9a77017d87.
There's no need to have it per-file; with that, people might think
to add other editors, which is the wrong direction.
Closes: #1206
Approved by: jlebon
Add a hash function for OstreeRepo instances, which relies on the repo
being open, and hence being able to hash the device and inode of its
root directory.
Add unit tests for this and ostree_repo_equal().
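For example, the pair can back a GHashTable keyed by repository, so that two
OstreeRepo objects opened on the same on-disk repo map to the same entry
(a minimal sketch; `repo` is assumed to be an already-open OstreeRepo):
```
GHashTable *repos = g_hash_table_new_full ((GHashFunc) ostree_repo_hash,
                                           (GEqualFunc) ostree_repo_equal,
                                           g_object_unref, NULL);
g_hash_table_add (repos, g_object_ref (repo));
```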
Signed-off-by: Philip Withnall <withnall@endlessm.com>
https://github.com/ostreedev/ostree/issues/1191
Closes: #1205
Approved by: cgwalters
Such an evil bug 🙈. I was just reading an strace trying to figure out what was
going on, and noticed we had the `XXXXXX` in the lockfile name. It was only
after that that I realized this might *be* the cause of the skopeo issue.
This is another case where we definitely need more test coverage of things that
actually use the API multiple times in process; might look at dusting off the
work for the rpm-ostree test.
Closes: https://github.com/ostreedev/ostree/issues/1196
Closes: #1204
Approved by: jlebon
Conceptually `ostree-repo-pull.c` should be written using
just public APIs; we theoretically support building without HTTP
for people who just want to use the object store portion and
do their own fetching.
We have some nontrivial behaviors in the pull layer though; one
of those is the "bareuseronly" verification. Make a new internal
API that accepts flags and move it into `commit.c`. This
is prep for further work in changing object import to support
reflinks.
Closes: #1193
Approved by: jlebon
I saw in a stack trace that the main thread was calling `exit()` even while
worker threads were alive and doing sha256/write/fsync etc. for objects.
The stack trace was a SEGV as the main thread was calling into library
`atexit()` handlers and we were in a liblz4 destructor:
```
#0 0x00007f2db790f8d4 _fini (liblz4.so.1)
#1 0x00007f2dbbae1c68 __run_exit_handlers (libc.so.6)
```
(Why that library has a destructor I don't know offhand; I can't find
it in the source from a quick look.)
Anyways, global library destructors and worker threads continuing simply don't
mix. Let's wait for our outstanding operations before we exit. This is also a
good idea for projects using libostree as a shared library, as we don't want
worker threads outliving operations.
Our existing pull corruption tests exercise coverage here.
I added a new `caught-error` status boolean to the progress API, and use it in the
command line to tell the user that we're waiting for outstanding ops.
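A sketch of how a progress handler might consume the new key (assuming the
usual varargs accessor `ostree_async_progress_get()`):
```
/* Connected to the progress object's "changed" signal. */
static void
on_progress_changed (OstreeAsyncProgress *progress, gpointer user_data)
{
  gboolean caught_error = FALSE;

  ostree_async_progress_get (progress,
                             "caught-error", "b", &caught_error,
                             NULL);
  if (caught_error)
    g_print ("Caught error, waiting for outstanding tasks\n");
}
```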
Closes: #1185
Approved by: jlebon
We have a lot of layers of abstraction here; let's fold the `trusted`
conditional into the call, since that's all that the public API we're using does
anyway.
Prep for a future patch around object copying during imports.
Closes: #1187
Approved by: jlebon
This will compare their root directory inodes to see if they are the
same repository on disk. A convenience method for the users of the
public API who can’t access OstreeRepo.inode.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1179
Approved by: cgwalters
Update libglnx; this is mostly porting the repo stagedir code
to the new tmpdir API. This turned out to require some
libglnx changes to support de-allocating the tmpdir ref while
still maintaining the on-disk dir.
Update submodule: libglnx
Closes: #1172
Approved by: jlebon
Doing this in prep for libglnx tmpdir porting, but I think we should also do
this because the partial fetch code IMO was never fully baked; among other
things it was never integrated into the scheme we came up with for "boot id
sync" that we use for complete/staged objects.
There's a lot of complexity here; while we do have some coverage for it, I think
we need to refocus on the core functionality. The libcurl backend doesn't have
an equivalent to this today.
In particular for small objects, this is simply overly complex. The downside is
clearly for large objects like FAH's 61MB initramfs; not being able to resume
fetches of those is unfortunate.
In practice though, I think most people should be using deltas, and we need to
make sure deltas work for large objects anyways.
Further ultimately the peer-to-peer work should help a lot for people
with truly unreliable connections.
Closes: #1176
Approved by: jlebon