If pull is interrupted, we may have downloaded an arbitrary subset of
the requested objects. Previously, we handled this by scanning for
all objects each time.
However, there's an easy optimization: this patch creates a lock file
in the repo while a pull is in progress. If we don't see that file
when starting a pull, the previous pull completed cleanly, so we don't
need to stat() every file; the presence of a dirtree object, for
example, implies the existence of everything it references.
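Roughly, the check at the start of pull looks like this (a sketch;
the lock file name here is illustrative, not necessarily what the
patch uses):

  #include <stdbool.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Return true if the previous pull may have been interrupted,
   * meaning we must stat() every object. */
  static bool
  pull_needs_full_scan (const char *repo_path)
  {
    char lock_path[4096];
    snprintf (lock_path, sizeof (lock_path), "%s/transaction-lock",
              repo_path);
    /* Lock file absent => the last pull completed cleanly, so the
     * presence of a dirtree object implies its children exist too. */
    return access (lock_path, F_OK) == 0;
  }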
When multiple threads need to uncompress an object, there was
a race condition: thread A could get EEXIST and unlink() the target,
then thread B could call linkat(), and thread A's subsequent link()
would fail.
We can just loop in this case.
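One plausible shape of that loop (a sketch; tmppath/destpath are
illustrative names):

  #include <errno.h>
  #include <unistd.h>

  static int
  link_object_retrying (const char *tmppath, const char *destpath)
  {
    while (link (tmppath, destpath) != 0)
      {
        if (errno != EEXIST)
          return -1;        /* Real error */
        /* Another thread owns the target right now; replace it and
         * retry.  If we lose the unlink()/link() race again, we just
         * go around the loop once more. */
        (void) unlink (destpath);
      }
    return 0;
  }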
This seems to work around a likely Linux kernel VFS bug, where I
randomly see ENOENT on link() when we *definitely* called mkdir() at
an earlier point in time.
This is an incompatible change to archive-z, thus it is now renamed to
archive-z2 and ostree will no longer parse archive-z.
I noticed in perf that we were spending some time zlib-decompressing
file headers, which is just inefficient. Rather than do this, keep
the headers uncompressed, and just zlib-compress content.
This gives us something closer to the combined advantages of archive
and archive-z when using the latter. Concretely, we get deduplication
among multiple checkouts, along with the "devino" hash table trick
during commits to avoid checksumming content again.
This is enabled by default.
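In GLib terms, the write side is roughly this (a sketch, not the
actual function; names are illustrative):

  #include <gio/gio.h>

  static gboolean
  write_filez_object (GOutputStream *out,
                      const guint8 *header, gsize header_len,
                      GInputStream *content,
                      GCancellable *cancellable, GError **error)
  {
    g_autoptr(GZlibCompressor) compressor =
      g_zlib_compressor_new (G_ZLIB_COMPRESSOR_FORMAT_ZLIB, -1);
    g_autoptr(GOutputStream) zout =
      g_converter_output_stream_new (out, G_CONVERTER (compressor));

    /* Header stays uncompressed so readers can parse it cheaply. */
    if (!g_output_stream_write_all (out, header, header_len, NULL,
                                    cancellable, error))
      return FALSE;

    /* Only the file content goes through zlib. */
    g_filter_output_stream_set_close_base_stream
      (G_FILTER_OUTPUT_STREAM (zout), FALSE);
    return g_output_stream_splice (zout, content,
                                   G_OUTPUT_STREAM_SPLICE_CLOSE_SOURCE |
                                   G_OUTPUT_STREAM_SPLICE_CLOSE_TARGET,
                                   cancellable, error) >= 0;
  }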
For the same reasons as with metadata, this avoids having the main
thread blocked in fdatasync(); even better, we can achieve much higher
parallelism if we have multiple threads blocked in fdatasync().
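With GTask, pushing the sync into a worker thread looks roughly like
this (a sketch; GLib's thread pool does the fan-out for us):

  #include <errno.h>
  #include <gio/gio.h>
  #include <unistd.h>

  static void
  fsync_in_thread (GTask *task, gpointer source, gpointer task_data,
                   GCancellable *cancellable)
  {
    int fd = GPOINTER_TO_INT (task_data);
    if (fdatasync (fd) != 0)
      g_task_return_new_error (task, G_IO_ERROR,
                               g_io_error_from_errno (errno),
                               "fdatasync: %s", g_strerror (errno));
    else
      g_task_return_boolean (task, TRUE);
  }

  static void
  fsync_async (int fd, GCancellable *cancellable,
               GAsyncReadyCallback callback, gpointer user_data)
  {
    GTask *task = g_task_new (NULL, cancellable, callback, user_data);
    g_task_set_task_data (task, GINT_TO_POINTER (fd), NULL);
    g_task_run_in_thread (task, fsync_in_thread);
    g_object_unref (task);
  }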
This is where loose content objects are stored as one compressed file,
instead of the two separate ones for regular archive mode. This mode
would be suitable for HTTP servers, because only one HTTP request is
necessary, and the result would be compressed.
Cleanly separate metadata/content APIs, rather than defaulting to
raw streams. This helps most use cases.
Also, drop support for staging content without knowing the total
length. This complicated the code, and for things like streaming
HTTP, we should be able to figure this out from Content-Length.
They're not a large efficiency win at the moment, because we don't
do any delta compression.
At the moment, they simply serve to compress data, but we will change
the archive mode to do that by default.
I run builds on my laptop, but it also crashes about 1/4 of the time
while suspending. It's definitely undesirable to get e.g. empty
.dirtree objects because they corrupt builds. Concretely, I was
getting empty contents committed for xorg-util-macros.
Now, we used to write out temporary files using g_file_replace(),
which does an fsync() during close, but then switched to a more
"manual" g_file_append_to().
We could switch back to g_file_replace(), but the problem is, we don't
want to call fsync() on temporary files in the case where we already
have the object. Attempting to add an object we already have is a
*very* common case.
This is both the old and new code sequence for the case where an
object is already stored:
  open(temp, O_WRONLY)
  write() write() write()
  close()
  lstat(objects/3a/9fe332...) = 0
  unlink(temp)
In the *new* code, here's the case where an object *isn't* stored:
  open(temp, O_WRONLY)
  write() write() write()
  close()
  lstat(objects/3a/9fe332...) = -1
  open(temp, O_RDONLY)
  fdatasync()
  close()
  rename(temp, objects/3a/9fe332)
Compare with the *old* code path for when an object isn't stored:
  open(temp, O_WRONLY)
  write() write() write()
  close()
  lstat(objects/3a/9fe332...) = -1
  link(temp, objects/3a/9fe332)
  unlink(temp)
The problem with this is that we really need to fdatasync(). Also,
doing just rename() instead of the weird link()/unlink() dance helps
us express to the filesystem that we want atomic-replace semantics.
For example, BTRFS has special handling for rename().
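In code, the new not-already-stored path is essentially (a sketch):

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  static int
  commit_tempfile (const char *tmppath, const char *objpath)
  {
    int fd = open (tmppath, O_RDONLY);
    if (fd < 0)
      return -1;
    /* Pay for the sync only when we know we're keeping the object. */
    if (fdatasync (fd) != 0)
      {
        close (fd);
        return -1;
      }
    close (fd);
    /* Atomic replace; filesystems like BTRFS special-case this. */
    return rename (tmppath, objpath);
  }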
We really don't have a sane story for private files. This is a
defensive step ensuring that with old versions of gnome-ostree,
components that mistakenly have un-world-readable files don't break
pulls.
As the manual page doesn't say, but the kernel's in-source
documentation shows, hardlinking can fail for normal users for a
variety of reasons (including very common situations such as a
non-regular or non-writable file) if the owner of the file does not
match the user creating the link (e.g. when checking out a shadow
repo with a root-owned master).
If that happens, fall back silently to copying instead of aborting
the whole operation.
https://bugzilla.gnome.org/show_bug.cgi?id=682298
This can be a large performance win in certain circumstances:
* Cold buffer cache (we don't block the whole process)
* Requiring a copy instead of hardlink
The previous fix, to just ignore symbolic links for hard linking,
isn't really good enough, since EMLINK can happen for empty files too.
Since this is an optimization, when we get EMLINK let's instead just
fall back to copying, as sketched below. This also applies to EXDEV.
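A sketch of the fallback, covering this case and the EPERM one above
(copy_file() is a hypothetical helper):

  #include <errno.h>
  #include <unistd.h>

  static int copy_file (const char *src, const char *dest); /* hypothetical */

  static int
  checkout_one_file (const char *src, const char *dest)
  {
    if (link (src, dest) == 0)
      return 0;
    /* Hardlinking is only an optimization; on permission problems,
     * link count limits, or cross-device links, silently copy. */
    if (errno == EPERM || errno == EMLINK || errno == EXDEV)
      return copy_file (src, dest);
    return -1;
  }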
Rather than always doing:
1) make temporary link
2) unlink() target
3) rename()
Just try making the link, and only fall back to steps 2) and 3) if the
target already exists. This reduces system call traffic a lot.
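Sketched out (tempfile naming is illustrative):

  #include <errno.h>
  #include <stdio.h>
  #include <unistd.h>

  static int
  install_link (const char *src, const char *dest)
  {
    char tmp[4096];

    if (link (src, dest) == 0)
      return 0;               /* Common case: one syscall */
    if (errno != EEXIST)
      return -1;

    /* Target exists: make a temporary link, then atomically
     * rename() it over the target. */
    snprintf (tmp, sizeof (tmp), "%s.tmp", dest);
    (void) unlink (tmp);
    if (link (src, tmp) != 0)
      return -1;
    return rename (tmp, dest);
  }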
By default, when doing a commit, scan all of our loose objects and
build up a (device,inode) -> checksum hash. Then when we're doing a
commit, if we see a file with that (device,inode) pair, we can avoid
checksumming it.
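A sketch of the cache (keyed on a string here for simplicity; the
real code may pack the pair differently):

  #include <glib.h>
  #include <sys/stat.h>

  /* Table created with:
   * g_hash_table_new_full (g_str_hash, g_str_equal, g_free, g_free) */

  static char *
  devino_key (const struct stat *stbuf)
  {
    return g_strdup_printf ("%" G_GUINT64_FORMAT "/%" G_GUINT64_FORMAT,
                            (guint64) stbuf->st_dev,
                            (guint64) stbuf->st_ino);
  }

  static const char *
  lookup_cached_checksum (GHashTable *devino_cache,
                          const struct stat *stbuf)
  {
    g_autofree char *key = devino_key (stbuf);
    /* NULL means we haven't seen this inode: checksum it normally. */
    return g_hash_table_lookup (devino_cache, key);
  }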
This will allow us to use hard links again for user-mode checkouts,
rather than the hackish link cache. It was pretty silly anyways to
have file objects be stored with just a small metadata header
prepended, but uncompressed.
Either they should be hardlinkable, or compressed (in pack files).
Rather than passing xattr/file_info for all objects, change the API to
assume we're passing the defined object stream for each type. Namely,
for OSTREE_OBJECT_TYPE_FILE, we're now giving the "archive file" data.
This significantly cleans up the code for committing to archive mode
repositories, at the cost of having to (at present) create an
intermediate temporary file when committing to raw repositories.
This will be useful for ostbuild; a user can create their own archive
mode repository which transparently inherits objects from the
root-owned one in /ostree.
This is a convenient way to have a lookaside directory of hard links,
which can greatly speed up checkouts. In the future we probably want
to push this down into the repository.
Having the archived vs not distinction in the object system wasn't
useful in light of pack files. In fact, we should probably move
towards generating a pack file per commit by default.
Don't expose GChecksum in APIs. Add a new stream class which allows
us to pass an input stream somewhere, but gather a checksum as it's
read.
Move some bits of the internals towards binary csums.
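The idea behind the stream class, minus the GInputStream subclass
boilerplate, is roughly this (a sketch):

  #include <gio/gio.h>

  /* Copy in -> out while gathering a SHA256; callers get a checksum
   * string and never touch GChecksum themselves. */
  static char *
  copy_and_checksum (GInputStream *in, GOutputStream *out,
                     GCancellable *cancellable, GError **error)
  {
    g_autoptr(GChecksum) checksum = g_checksum_new (G_CHECKSUM_SHA256);
    guint8 buf[8192];
    gssize n;

    while ((n = g_input_stream_read (in, buf, sizeof buf,
                                     cancellable, error)) > 0)
      {
        g_checksum_update (checksum, buf, n);
        if (!g_output_stream_write_all (out, buf, n, NULL,
                                        cancellable, error))
          return NULL;
      }
    if (n < 0)
      return NULL;
    return g_strdup (g_checksum_get_string (checksum));
  }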
Previously we had the "staged" state to ensure we didn't add a commit
object without the associated dirtree, etc. However it's
easier/better to just ensure in the pull command that we have all
referenced objects.
Also change pull to download metadata first. This will allow adding
a progress bar later.
Expose the lower-level functionality in libostree, change checkout
builtin to be a higher level driver. This will allow us to more
easily improve the "checkout" builtin.
We want to support both "bare" lookups, where "foo" can be local or in
any remote, and prefixed ones for a specific remote.
This fixes ostree-pull noticing that nothing has changed.
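Parsing the prefixed form is straightforward (a sketch; assumes ':'
as the separator):

  #include <string.h>
  #include <glib.h>

  static void
  parse_refspec (const char *refspec, char **out_remote, char **out_ref)
  {
    const char *colon = strchr (refspec, ':');
    if (colon != NULL)
      {
        *out_remote = g_strndup (refspec, colon - refspec);
        *out_ref = g_strdup (colon + 1);
      }
    else
      {
        /* Bare ref: caller tries local refs, then each remote. */
        *out_remote = NULL;
        *out_ref = g_strdup (refspec);
      }
  }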
Before we were creating randomly-named temporary files in repo/tmp
when downloading via pull, but that means if the download process is
interrupted, we have to redownload everything.
Let's still keep the concept of a "transaction" where files are
stored in the repository as atomically as possible (i.e. we
do a bunch of rename() calls), but now we also have an explicit
"tmp/pending/objects" directory that contains named objects.
This allows us to then skip redownloading things that are pending.
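A sketch of the naming scheme (exact layout is illustrative):

  #include <glib.h>

  static char *
  get_pending_object_path (const char *repo_path,
                           const char *checksum,
                           const char *typestr) /* e.g. "commit" */
  {
    return g_strdup_printf ("%s/tmp/pending/objects/%s.%s",
                            repo_path, checksum, typestr);
  }

  /* Before fetching an object: if this path already exists, skip the
   * download and just rename() it into the repository when the
   * transaction commits. */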
The builder wants the ability to mark a given file as e.g. setuid. To
implement this, the repo now has a callback-based API when importing a
directory to modify or remove items.
The commit tool accepts a "statoverride" file as input which looks like:
  +mode /path/to/file
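A sketch of parsing one such line (whether the mode field is octal,
and whether it's absolute or bits to add, is an assumption here):

  #include <stdlib.h>
  #include <glib.h>

  static gboolean
  parse_statoverride_line (const char *line, guint *out_mode,
                           char **out_path)
  {
    char *end = NULL;
    if (line[0] != '+')
      return FALSE;
    *out_mode = (guint) strtoul (line + 1, &end, 8);
    if (end == NULL || *end != ' ')
      return FALSE;
    *out_path = g_strstrip (g_strdup (end));
    return TRUE;
  }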
If multiple files have the same hash, we need to ensure we're not
overwriting other tempfiles in the same transaction. Instead
just delete them, since we know they're in the repo.
The tar files we're making of artifacts don't include parent
directories. Now we could change the builder to make them, but we can
also just autocreate them on import. Mode 0755 with no xattrs seems
OK here.
It's pretty trivial to map a previously existing commit tree into a
mutable tree too. While we're here change the command line arguments
for commit so that we can now properly overlay any combination of
directory, commit, or tarfile.
Rather than offering a high-level "commit directory" operation,
instead perform operations on an mtree. Commits are treated more like
regular objects.
Change the commit builtin to drive this all at a lower level.
The tar import code forced the resuscitation of a hackish "FileTree"
data type for representing an in-memory tree. Split this out
into an OstreeMutableTree class for future use by any other in-memory
tree construction.
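The shape of the data type is roughly this (a sketch; the real
OstreeMutableTree fields may differ):

  #include <glib.h>

  typedef struct _MutableTree MutableTree;
  struct _MutableTree
  {
    char *contents_checksum;  /* Set once the tree has been written */
    char *metadata_checksum;
    GHashTable *files;        /* filename -> content checksum */
    GHashTable *subdirs;      /* dirname  -> MutableTree*     */
  };

  static MutableTree *
  mutable_tree_new (void)
  {
    MutableTree *tree = g_new0 (MutableTree, 1);
    tree->files = g_hash_table_new_full (g_str_hash, g_str_equal,
                                         g_free, g_free);
    /* Subtrees are freed recursively by a corresponding _free() */
    tree->subdirs = g_hash_table_new_full (g_str_hash, g_str_equal,
                                           g_free, NULL);
    return tree;
  }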
ostbuild will generate two artifacts: foo-runtime.tar.gz and
foo-devel.tar.gz in the general case. When committing to the devel
tree, it'd be lame (i.e. slower and not atomic) to have to commit
twice.
This will allow us to have hardlink checkouts of archives. A key use
case here is an archive repo of an OS (with root-owned files etc.)
where we want to do builds in a user tree.
A positive side effect of doing things this way is that now the SHA256
checksums for a given file should be identical regardless of whether
it's stored in an archive or bare repository.
It's too confusing that we call the mode "archive" but the actual
files ".packfile". Also, git already has a "packfile" that serves a
totally different purpose.
Note this change means we no longer call link() from an imported
filesystem tree into the repository. This is a Good Thing really; it
makes local FS commits slower, but also less prone to corruption.
We really want the ability to take a .tar.gz and directly import
it into a repository, without creating a temporary filesystem tree.
First, doing it this way is significantly faster. Also, this allows
us to handle importing tar files with e.g. uid 0 files into packed
repositories as non-root, which is very useful for tests and builds.
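With libarchive, the import loop looks roughly like this (a sketch;
write_entry_to_repo() is a hypothetical hook into the repo code):

  #include <archive.h>
  #include <archive_entry.h>

  static int write_entry_to_repo (struct archive *a,
                                  struct archive_entry *entry);

  static int
  import_tarball (const char *path)
  {
    struct archive *a = archive_read_new ();
    struct archive_entry *entry;
    int r = ARCHIVE_FATAL;

    archive_read_support_format_tar (a);
    archive_read_support_filter_all (a); /* gzip etc. */

    if (archive_read_open_filename (a, path, 8192) != ARCHIVE_OK)
      goto out;

    while ((r = archive_read_next_header (a, &entry)) == ARCHIVE_OK)
      {
        /* Headers carry uid/gid/mode even when we run as non-root,
         * so uid 0 files import faithfully into a packed repo. */
        if (write_entry_to_repo (a, entry) != 0)
          goto out;
      }

  out:
    archive_read_free (a);
    return (r == ARCHIVE_EOF) ? 0 : -1;
  }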
We never actually reached the code for writing metadata as packfiles,
because no such thing exists.
Also add a GInputStream-based API for writing packfiles.
The default is always ignore_exists. Also port the internals here
to use more GIO code, and stop using the *at() syscall variants, since
they only pay off if used consistently.
This commit originally was to port ostree_stat_and_checksum_file() to
GFile*, but I noticed that the checksum code was reading data in host
endianness. Fix that while we're here.
This invalidates all existing repositories.
This necessitated a large set of changes.
We now support an "archive" mode for repositories. In this mode,
files are stored "packed" rather than hard linked. This allows one to
e.g. store an OSTree repository with root-owned files as non-root. It
is also used as the basis for serving repositories via HTTP.
While doing this I realized that GVariant is endianness-dependent; I
decided to just store all data in big endian.
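The usual pattern for this, sketched: build variants natively, then
byteswap to big endian just before hashing or writing.

  #include <glib.h>

  static GVariant *
  variant_to_disk_form (GVariant *native)
  {
    g_variant_ref_sink (native);
  #if G_BYTE_ORDER == G_BIG_ENDIAN
    return native;
  #else
    {
      GVariant *swapped = g_variant_byteswap (native);
      g_variant_unref (native);
      return swapped;
    }
  #endif
  }

The reader does the converse: on little-endian hosts, byteswap after
loading, so in-memory access always sees native values.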