Rather than having separate write_ref calls, make clients start a
transaction, add some refs, and then commit it. While this doesn't
make things 100% atomic, it makes it easier for us to move to an atomic
model, and it means we do less I/O updating the summary file and the
like.
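A sketch of the intended call pattern (illustrative only; the entry
points shown are the transaction API names as they exist in current
ostree and may not match this commit exactly):

    #include <ostree.h>

    static gboolean
    update_ref_in_transaction (OstreeRepo *repo, const char *ref,
                               const char *checksum,
                               GCancellable *cancellable, GError **error)
    {
      if (!ostree_repo_prepare_transaction (repo, NULL, cancellable, error))
        return FALSE;

      /* Stage any number of ref updates; nothing is published yet. */
      ostree_repo_transaction_set_ref (repo, NULL, ref, checksum);

      /* A single commit publishes the refs and updates things like the
       * summary data in one pass. */
      return ostree_repo_commit_transaction (repo, NULL, cancellable, error);
    }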
https://bugzilla.gnome.org/show_bug.cgi?id=707644
An earlier version of this API acted like git, in that some objects
would be staged in a temporary directory and then committed in one go
by moving files around. That API doesn't match most users'
expectations, though: while a staging area is nice as a high-level
concept, it isn't really suited for low-level APIs.
While the stage was removed, the APIs were never renamed. Rename
them now so that they match expectations.
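For illustration, the rename amounts to something like the following
(the stage-era name is only an assumption about the old spelling; the
write_* name is the current one):

    #include <ostree.h>

    /* Before the rename (hypothetical reconstruction of the old name):
     *   ostree_repo_stage_metadata (repo, OSTREE_OBJECT_TYPE_DIR_META, NULL,
     *                               variant, &csum, cancellable, error);
     * After the rename: */
    static gboolean
    write_dirmeta (OstreeRepo *repo, GVariant *variant, guchar **out_csum,
                   GCancellable *cancellable, GError **error)
    {
      return ostree_repo_write_metadata (repo, OSTREE_OBJECT_TYPE_DIR_META,
                                         NULL, variant, out_csum,
                                         cancellable, error);
    }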
https://bugzilla.gnome.org/show_bug.cgi?id=707644
Update libgsystem submodule for a bugfix.
This is both more efficient from the kernel's perspective and avoids
calling gs_file_get_path_cached() on tmp_dir constantly, which had
triggered another bug due to lack of locking.
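The pattern in question is roughly fd-relative access, along these
lines (a sketch under the assumption that the change switches to *at()
calls against a directory fd instead of building full path strings):

    #include <fcntl.h>
    #include <unistd.h>

    /* Hypothetical helper: keep one fd for the tmp directory and resolve
     * names relative to it, rather than constructing "<tmpdir>/<name>"
     * strings for every operation. */
    static int
    open_in_tmpdir (int tmp_dfd, const char *name)
    {
      return openat (tmp_dfd, name, O_WRONLY | O_CREAT | O_EXCL, 0644);
    }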
It's now isolated almost entirely to ostree-core.c, except that
ostree-repo.c needs to know how to create archive-z2 file headers. So
give it a private API for that.
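For reference, an archive-z2 file header is a GVariant carrying the
file's metadata ahead of the compressed payload. A sketch of building
one (the helper name is made up, and the format string and big-endian
byte order follow current ostree and are shown here as assumptions):

    #include <glib.h>

    /* Hypothetical constructor for an archive-z2 file header.  Fields:
     * uncompressed size, uid, gid, mode, rdev, symlink target, xattrs. */
    static GVariant *
    example_zlib_file_header_new (guint64 size, guint32 uid, guint32 gid,
                                  guint32 mode, const char *symlink_target,
                                  GVariant *xattrs /* non-NULL, type a(ayay) */)
    {
      return g_variant_ref_sink (
        g_variant_new ("(tuuuus@a(ayay))",
                       GUINT64_TO_BE (size),
                       GUINT32_TO_BE (uid),
                       GUINT32_TO_BE (gid),
                       GUINT32_TO_BE (mode),
                       (guint32) 0,  /* rdev */
                       symlink_target ? symlink_target : "",
                       xattrs));
    }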
See the new comment in the source; basically if we're fetching content
over http, then someone with the capability to MITM the network could
create a transient setuid binary on disk with arbitrary content. If
they also had a process running on the system (such as an application),
they could use it to escalate to root.
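A sketch of the kind of mitigation this implies (an assumption about
the approach, not the literal patch): treat downloaded content as
untrusted and mask the setuid/setgid bits from anything written to disk
until it has been fully verified.

    #include <sys/stat.h>

    /* Hypothetical helper: strip the dangerous mode bits from content
     * that has not yet been checksum-verified. */
    static mode_t
    untrusted_content_mode (mode_t mode)
    {
      return mode & ~(S_ISUID | S_ISGID);
    }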
https://bugzilla.gnome.org/show_bug.cgi?id=707139
Pull the cleanup code into a helper function, and ensure we also delete
leftover temporary files when aborting a transaction. Mainly this will
happen when a local 'ostree commit' fails.
While we're here, change it to use gs_shutil_rm_rf(), which also
handles directories, should we start using those.
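A sketch of what the shared helper might look like (names hypothetical;
only gs_shutil_rm_rf(), the libgsystem helper named above, is from the
text):

    #include <gio/gio.h>

    /* Hypothetical cleanup helper, shared between normal cleanup and
     * transaction abort: enumerate the tmp directory and recursively
     * delete every leftover entry. */
    static gboolean
    cleanup_tmpdir (GFile *tmp_dir, GCancellable *cancellable, GError **error)
    {
      g_autoptr(GFileEnumerator) dir_enum =
        g_file_enumerate_children (tmp_dir, "standard::name",
                                   G_FILE_QUERY_INFO_NOFOLLOW_SYMLINKS,
                                   cancellable, error);
      if (!dir_enum)
        return FALSE;

      while (TRUE)
        {
          GFileInfo *info = NULL;
          GFile *child = NULL;

          if (!g_file_enumerator_iterate (dir_enum, &info, &child,
                                          cancellable, error))
            return FALSE;
          if (info == NULL)
            break;

          /* gs_shutil_rm_rf() handles plain files and directories alike. */
          if (!gs_shutil_rm_rf (child, cancellable, error))
            return FALSE;
        }

      return TRUE;
    }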
Reviewed-by: Jeremy Whiting <jpwhiting@kde.org>
It turns out every builtin (with one special exception) that takes a
repo argument did the same thing; let's just centralize it. The
special exception was "ostree init --repo=foo" where foo is expected
to *not* actually be a repo. In that case, simply skip the
ostree_repo_check() invocation.
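The shape of the centralized helper might be roughly as follows
(everything here apart from ostree_repo_check(), mentioned above, is a
hypothetical name; the signature of ostree_repo_check() is assumed):

    #include <ostree.h>

    typedef enum {
      EXAMPLE_BUILTIN_FLAG_NONE = 0,
      /* For "ostree init", where the path is not yet a valid repo. */
      EXAMPLE_BUILTIN_FLAG_NO_REPO_CHECK = (1 << 0),
    } ExampleBuiltinFlags;

    static gboolean
    example_setup_repo_arg (const char *repo_path, ExampleBuiltinFlags flags,
                            OstreeRepo **out_repo, GError **error)
    {
      g_autoptr(GFile) repo_file =
        g_file_new_for_path (repo_path ? repo_path : ".");
      g_autoptr(OstreeRepo) repo = ostree_repo_new (repo_file);

      if ((flags & EXAMPLE_BUILTIN_FLAG_NO_REPO_CHECK) == 0)
        {
          if (!ostree_repo_check (repo, error))
            return FALSE;
        }

      *out_repo = g_steal_pointer (&repo);
      return TRUE;
    }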
https://bugzilla.gnome.org/show_bug.cgi?id=706762
We'll always have "bare" mode for keeping files-as-hardlinks as root.
But "archive" was my second attempt at a format for non-root file
storage, used by the gnome-ostree buildsystem which runs as non-root.
It was really handy to have a "tar"-like mode where I could create
tarballs as a user that contain files owned by root, for example.
The "archive" mode stored content files as two pieces in the
filesystem; ".file" contained metadata, and ".filecontent" was the
actual content, uncompressed. The nice thing about this was that to
check out a tree as non-root, you could just hardlink into the repo.
However, archive was fairly bad for serving via HTTP; it required
*two* HTTP requests per content object, greatly magnifying the already
inefficient fetch process. So "archive-z2" was introduced.
To allow gnome-ostree to still check out trees as a user, the
"uncompressed-object-cache" was introduced, and that's how things have
been working for a while.
So we should just be able to kill this code. Specifically note just
how much better the stage_object() function became.
https://bugzilla.gnome.org/show_bug.cgi?id=706057
We have APIs to load metadata as variants, and files as parsed
content/info/xattrs, but for some cases such as static deltas, all we
want is to operate on all objects in their canonical representation.
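The API this adds is, roughly, a stream-oriented loader; any object,
metadata or content, comes back as one canonical byte stream. A sketch
of using it (name and signature as in current ostree, shown here as an
illustration):

    #include <ostree.h>

    /* Example: copy one object, in its canonical serialized form, to an
     * arbitrary output stream; useful for things like static deltas. */
    static gboolean
    example_copy_object (OstreeRepo *repo, OstreeObjectType objtype,
                         const char *checksum, GOutputStream *dest,
                         GCancellable *cancellable, GError **error)
    {
      g_autoptr(GInputStream) input = NULL;
      guint64 size;

      if (!ostree_repo_load_object_stream (repo, objtype, checksum,
                                           &input, &size, cancellable, error))
        return FALSE;

      return g_output_stream_splice (dest, input,
                                     G_OUTPUT_STREAM_SPLICE_CLOSE_SOURCE,
                                     cancellable, error) >= 0;
    }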
https://bugzilla.gnome.org/show_bug.cgi?id=706031
While the actual commit object format is presently the same, for a
number of reasons we'd like to change it fairly radically. Among
other things, we need to drop our a{sv} types in objects, to protect
against GVariant changing format.
Since gnome-ostree no longer uses related objects, and nothing ever
used metadata, just drop them both.
If pull is interrupted, we may have downloaded an arbitrary subset of
the requested objects. Previously, we handled this by scanning for
all objects each time.
However, there's an easy optimization: this patch creates a lock file
in the repo. If we don't see that file when starting a pull, we know we
don't need to stat() every file; the presence of a dirtree object, for
example, implies the existence of everything it references.
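A hedged sketch of the idea (the marker file name is made up, and the
real patch may differ in the details):

    #include <fcntl.h>
    #include <unistd.h>
    #include <glib.h>

    /* Hypothetical check: the marker is created when a pull starts and
     * removed when it completes.  If it is absent at startup, no pull
     * was interrupted, so an object we already have can be trusted to
     * imply the presence of everything it references, and the full
     * object scan can be skipped. */
    static gboolean
    example_pull_was_interrupted (int repo_dfd)
    {
      return faccessat (repo_dfd, "transaction-incomplete", F_OK, 0) == 0;
    }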
When multiple threads need to uncompress an object, there was a race
condition: thread A could get EEXIST and unlink the destination, then
thread B could call linkat(), and then thread A's own link() would fail
again.
We can just loop in this case.
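A sketch of the fix (names hypothetical): keep retrying, unlinking the
destination on EEXIST, which is safe because every thread is installing
the same content-addressed object.

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Hypothetical retry loop around linkat().  If another thread
     * re-creates the destination between our unlinkat() and the next
     * linkat(), we simply go around the loop again. */
    static int
    example_link_uncompressed_object (int dfd, const char *tmpname,
                                      const char *destname)
    {
      while (linkat (dfd, tmpname, dfd, destname, 0) != 0)
        {
          if (errno != EEXIST)
            return -1;
          if (unlinkat (dfd, destname, 0) != 0 && errno != ENOENT)
            return -1;
        }
      return 0;
    }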