Do not write directly to objects/; instead, keep pulled files under
tmp/ with a "tmpobject-$CHECKSUM.$OBJTYPE" name until they are
syncfs()'d to disk, then move them under objects/ at
ostree_repo_commit_transaction() cleanup time.
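A minimal sketch of the staging pattern, assuming hypothetical tmp_dfd
and objects_dfd directory file descriptors (not the actual repo
internals):

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Stage a fetched object under tmp/, then publish it into objects/
   * at transaction-commit time.  tmp_dfd and objects_dfd are
   * hypothetical directory fds for repo/tmp and repo/objects. */
  static int
  stage_and_publish (int tmp_dfd, int objects_dfd,
                     const char *checksum, const char *objtype)
  {
    char tmpname[256], prefix[3], objpath[256];

    snprintf (tmpname, sizeof tmpname, "tmpobject-%s.%s", checksum, objtype);
    /* ...the fetcher writes the object content to tmp_dfd/tmpname here... */

    /* One syncfs() for the whole transaction instead of per-object fsync(). */
    if (syncfs (tmp_dfd) < 0)
      return -1;

    /* objects/ uses the first two checksum characters as a subdirectory. */
    prefix[0] = checksum[0]; prefix[1] = checksum[1]; prefix[2] = '\0';
    (void) mkdirat (objects_dfd, prefix, 0755);  /* may already exist */
    snprintf (objpath, sizeof objpath, "%.2s/%s.%s",
              checksum, checksum + 2, objtype);
    return renameat (tmp_dfd, tmpname, objects_dfd, objpath);
  }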
Before (test done on a local network):
$ LANG=C sudo time ./ostree --repo=repo pull origin master
0 metadata, 3 content objects fetched; 83820 KiB; 4 delta parts
fetched, transferred in 417 seconds
16.42user 6.73system 6:57.19elapsed 5%CPU (0avgtext+0avgdata
248428maxresident)k
24inputs+794472outputs (0major+233968minor)pagefaults 0swaps
After:
$ LANG=C sudo time ./ostree --repo=repo pull origin master
0 metadata, 3 content objects fetched; 83820 KiB; 4 delta parts
fetched, transferred in 9 seconds
14.70user 2.87system 0:09.99elapsed 175%CPU (0avgtext+0avgdata
256168maxresident)k
0inputs+794472outputs (0major+164333minor)pagefaults 0swaps
https://bugzilla.gnome.org/show_bug.cgi?id=728065
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
subscription-manager has a daemon that runs in a confined domain,
and it doesn't have permission to write to files labeled usr_t, which
is the default label of /ostree/deploy/$osname/deploy.
A better long term fix is probably to move the origin file into the
deployment root as /etc/ostree/origin.conf or so.
In the meantime, let's ensure the .origin files are labeled as
configuration.
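A rough sketch of the labeling step with plain libselinux, assuming a
hypothetical origin_path and using /etc/ostree/origin.conf (the
possible future location mentioned above) as the lookup key:

  #include <selinux/label.h>
  #include <selinux/selinux.h>
  #include <sys/stat.h>

  /* Look up the context that policy assigns to a config file and apply
   * it to the .origin file so it is not left as usr_t.  origin_path
   * and the /etc/ostree/origin.conf lookup key are illustrative only. */
  static int
  relabel_origin_as_config (const char *origin_path)
  {
    struct selabel_handle *hnd = selabel_open (SELABEL_CTX_FILE, NULL, 0);
    char *con = NULL;
    int r = -1;

    if (hnd != NULL
        && selabel_lookup (hnd, &con, "/etc/ostree/origin.conf",
                           S_IFREG | 0644) == 0)
      {
        r = setfilecon (origin_path, con);
        freecon (con);
      }
    if (hnd != NULL)
      selabel_close (hnd);
    return r;
  }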
If an object already existed and we somehow tried to pull it, the
caller would still expect a returned checksum.
This appears to happen with static deltas for some reason; we might be
including duplicate metadata objects. Regardless, this is a bug that
should be fixed.
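A sketch of the shape of the fix, using the public repo API with
hypothetical helper and out-parameter names:

  #include <ostree.h>

  /* If the object is already present, still report its checksum back
   * to the caller, as the pull code expects. */
  static gboolean
  checksum_for_existing (OstreeRepo *repo, OstreeObjectType objtype,
                         const char *expected_checksum, guchar **out_csum,
                         GCancellable *cancellable, GError **error)
  {
    gboolean have_object = FALSE;

    if (!ostree_repo_has_object (repo, objtype, expected_checksum,
                                 &have_object, cancellable, error))
      return FALSE;
    if (have_object && out_csum != NULL)
      *out_csum = ostree_checksum_to_bytes (expected_checksum);
    return TRUE;
  }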
We have a chain of checksums from the root all the way down to here.
While checksumming each object individually would be a good redundancy
check for test cases and the like, there's no good reason to burn
cycles on SHA256 during a pull.
This caused deadlocks and/or EMFILE due to the interaction between
threads and fds. What we really want here is a better pull-based
model for parsing content objects.
Another idea would be to change static deltas so that content objects
have a special opcode that includes their metadata first, and then do
rollsums etc. only over actual content.
This avoids having too many files open at the same time, which could
cause an EMFILE error.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
(cherry picked from commit bc092b06f0e34e93f7d6102957bf55fd7ffd1b9e)
See projectatomic/rpm-ostree#42 for the rationale. There are two
high-level use cases:
- If the OS comes unconfigured, this is a way to point it at a repository of your choice.
- It makes it easy to switch between repositories while keeping the same branch.
At some point, we might want to expose a uniform way to refer
to deployments by an index. At the moment, undeploy is the only
command that does so.
I plan to introduce another command which optionally takes an index,
so prepare a helper function for this.
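A hypothetical shape for that helper (names and error text are
illustrative, not the actual patch):

  #include <ostree.h>

  /* Resolve a deployment by index so undeploy and the planned command
   * can share the lookup and bounds checking. */
  static OstreeDeployment *
  deployment_for_index (OstreeSysroot *sysroot, int index, GError **error)
  {
    GPtrArray *deployments = ostree_sysroot_get_deployments (sysroot);
    OstreeDeployment *ret = NULL;

    if (index < 0 || (guint) index >= deployments->len)
      g_set_error (error, G_IO_ERROR, G_IO_ERROR_NOT_FOUND,
                   "Invalid deployment index %d", index);
    else
      ret = g_object_ref (deployments->pdata[index]);

    g_ptr_array_unref (deployments);
    return ret;
  }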
This is a noticeable cleanup, and fixes another big user of GFile* in
performance/security-sensitive codepaths.
I'm specifically making this change because the static deltas code was
leaking temporary files, and cleaning that up nicely is easiest if we
are fd-relative.
This will be used by a later change to use openat() for the fetching
code. Note that we drop the mmap() code - it was an attempt to
avoid keeping an fd open, but we close it correctly anyway.
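A minimal sketch of the fd-relative pattern (hypothetical names),
keeping the fd open for the object's lifetime rather than mmap()ing
and dropping it:

  #include <fcntl.h>
  #include <unistd.h>

  /* Open a fetched object relative to a directory fd instead of
   * building an absolute-path GFile; dfd is a hypothetical dirfd for
   * the directory holding the fetched object. */
  static int
  open_fetched_object (int dfd, const char *name)
  {
    return openat (dfd, name, O_RDONLY | O_CLOEXEC | O_NOFOLLOW);
  }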
You create these with something like:
ostree static-delta generate --empty --to=master
These will be automatically used during pull if no previous revision
exists in the target repo.
These work very much like normal static deltas, except that they are
named by the "to" revision alone, e.g.:
deltas/94/f7d2dc23759dd21f9bd01e6705a8fdf98f90cad3e0109ba3f6c091c1a3774d
for a from-scratch delta to 94f7d2dc23759dd21f9bd01e6705a8fdf98f90cad3e0109ba3f6c091c1a3774d.
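A sketch of how that path is derived from the "to" revision
(hypothetical helper, matching the layout shown above):

  #include <glib.h>

  /* From-scratch deltas are keyed by the "to" checksum alone:
   * deltas/<first two chars>/<remaining chars>. */
  static char *
  scratch_delta_relpath (const char *to_checksum)
  {
    return g_strdup_printf ("deltas/%.2s/%s", to_checksum, to_checksum + 2);
  }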
https://bugzilla.gnome.org/show_bug.cgi?id=721799
The current layout uses a two-byte prefix as the initial directory,
with a second directory inside it containing the superblock. This
updates the list code to handle that layout.
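A rough sketch of the two-level walk the list code now has to do
(plain dirent iteration rather than the real repo helpers):

  #include <dirent.h>
  #include <stdio.h>
  #include <string.h>

  /* Enumerate delta names by joining the two path components back
   * together: deltas/<2 chars>/<rest>. */
  static void
  list_delta_names (const char *deltas_path)
  {
    DIR *top = opendir (deltas_path);
    struct dirent *d1, *d2;

    if (top == NULL)
      return;
    while ((d1 = readdir (top)) != NULL)
      {
        char subpath[4096];
        DIR *sub;

        if (d1->d_name[0] == '.' || strlen (d1->d_name) != 2)
          continue;
        snprintf (subpath, sizeof subpath, "%s/%s", deltas_path, d1->d_name);
        if ((sub = opendir (subpath)) == NULL)
          continue;
        while ((d2 = readdir (sub)) != NULL)
          if (d2->d_name[0] != '.')
            printf ("%s%s\n", d1->d_name, d2->d_name);
        closedir (sub);
      }
    closedir (top);
  }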
https://bugzilla.gnome.org/show_bug.cgi?id=721799
Regression from 86764dbf00
This function is kind of fiendish now that we have 3 cases, each of
which wants to be optimized somewhat to only load what's necessary
(e.g. don't open the file if no output stream was requested).
Clean things up so that BARE_USER and BARE are separate conditionals
that share as much code as possible, and fix the bug where we asserted
we were in BARE mode.
I tested this by running test-basic-user.sh by hand.
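As a sketch of the "only load what's necessary" point above, in the
spirit of the change (hypothetical names, not the actual function):

  #include <fcntl.h>
  #include <unistd.h>

  /* Skip opening the object entirely when the caller did not ask for a
   * content stream; only the file info / xattrs are needed then. */
  static int
  maybe_open_content (int dfd, const char *path, int want_content, int *out_fd)
  {
    *out_fd = -1;
    if (!want_content)
      return 0;
    *out_fd = openat (dfd, path, O_RDONLY | O_CLOEXEC | O_NOFOLLOW);
    return (*out_fd >= 0) ? 0 : -1;
  }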