1 is a better choice than 0 because some programs use 0
as a special value; for example, GNU Tar warns of an
"implausibly old timestamp" with 0.
Closes: #330
Approved by: cgwalters
We have a lot of "allow_noent" type wrapper functions since
a common pattern is to allow files to not exist, but still
throw cleanly on other issues.
This is another instance of that, and cleans up duplicated error
handling code.
Part of this is prep for moving away from `GFile` consumers.
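As a rough sketch of the pattern (not the exact helper added here; the names are illustrative):

```c
#include <errno.h>
#include <fcntl.h>
#include <gio/gio.h>

/* Hypothetical sketch of the "allow_noent" pattern: treat ENOENT as
 * success (returning -1 in *out_fd), but turn any other errno into a
 * real GError. */
static gboolean
openat_allow_noent (int          dfd,
                    const char  *path,
                    int         *out_fd,
                    GError     **error)
{
  int fd = openat (dfd, path, O_RDONLY | O_CLOEXEC);
  if (fd < 0)
    {
      if (errno == ENOENT)
        {
          *out_fd = -1;  /* Not an error: the file simply doesn't exist */
          return TRUE;
        }
      g_set_error (error, G_IO_ERROR, g_io_error_from_errno (errno),
                   "openat(%s): %s", path, g_strerror (errno));
      return FALSE;
    }
  *out_fd = fd;
  return TRUE;
}
```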
Closes: #319
Approved by: jlebon
In addition to generic fd-relative porting,
this is a necessary preparatory step for libglnx porting, because
when I tried to use `g_mapped_file_new` I hit an issue with
it using a different error domain from GIO.
Thankfully libglnx consistently uses the GIO error domain, and here
we're now using it for the `open()` call.
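For illustration, a minimal sketch of the idiom, assuming we do the `open()` ourselves (reporting failures in the GIO error domain) and then hand the fd to `g_mapped_file_new_from_fd()`:

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <gio/gio.h>

/* Sketch (not the exact code in this commit): open() the file ourselves so
 * the failure is reported as G_IO_ERROR, matching libglnx, rather than the
 * G_FILE_ERROR domain that g_mapped_file_new() would use. */
static GMappedFile *
map_file_gio_error (const char *path, GError **error)
{
  int fd = open (path, O_RDONLY | O_CLOEXEC);
  if (fd < 0)
    {
      g_set_error (error, G_IO_ERROR, g_io_error_from_errno (errno),
                   "open(%s): %s", path, g_strerror (errno));
      return NULL;
    }
  GMappedFile *mfile = g_mapped_file_new_from_fd (fd, FALSE, error);
  (void) close (fd);  /* The mapping survives closing the fd */
  return mfile;
}
```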
Closes: #317
Approved by: jlebon
This tries to avoid leaking GVariantBuilders and GVariants in some
situations. The leaks were usually happening when some error occurred
or because of an unclear variant ownership situation.
The former is mostly about making sure that g_variant_builder_clear is
called on builders that didn't finish their variant building process.
The latter is surely more work - sometimes the result of
g_variant_builder_end() should not be passed directly to a function,
but rather stored in a g_autoptr(GVariant), sunk and then passed to a
function. IMO, with the advent of g_autoptr, GVariants should always be
sunk instead of relying on the receiving function to sink them. This
would make for an easy-to-follow policy of always sinking your
variants. Functions could then assume that the passed variant is
already sunk. These leaks are still happening in commands, but they
are less harmful, since that code will not be used by some daemon as a
library routine.
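A minimal sketch of that policy:

```c
#include <glib.h>

/* Sketch of the "always sink your variants" policy: store the result of
 * g_variant_builder_end() in a g_autoptr and sink it immediately, instead
 * of relying on the receiving function to sink the floating reference. */
static void
build_and_use_metadata (void)
{
  GVariantBuilder builder;
  g_variant_builder_init (&builder, G_VARIANT_TYPE ("a{sv}"));
  g_variant_builder_add (&builder, "{sv}", "version",
                         g_variant_new_string ("1.0"));

  g_autoptr(GVariant) metadata =
    g_variant_ref_sink (g_variant_builder_end (&builder));

  /* metadata now holds a full (non-floating) reference; it can be passed
   * anywhere without worrying about who sinks it.  Had an error occurred
   * before g_variant_builder_end(), g_variant_builder_clear (&builder)
   * would be the cleanup path instead. */
  g_autofree char *printed = g_variant_print (metadata, TRUE);
  g_print ("%s\n", printed);
}
```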
Closes: #291
Approved by: cgwalters
- Make hardlink handling more generic. The previous strategy worked for
tar archives, but not for cpio. It now works for both.
- Add support for SELinux labeling (through the OstreeRepoCommitModifier)
- Add support for xattr_callback (through the OstreeRepoCommitModifier)
- Add support for filter (through the OstreeRepoCommitModifier); see the sketch after this list
- Add a use_ostree_convention option
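A rough sketch of how a filter plugs into the commit modifier (the import call itself is elided, and the filter logic is just an example):

```c
#include <ostree.h>

/* Example filter: drop documentation from the imported tree. */
static OstreeRepoCommitFilterResult
skip_docs_filter (OstreeRepo *repo,
                  const char *path,
                  GFileInfo  *file_info,
                  gpointer    user_data)
{
  if (g_str_has_prefix (path, "/usr/share/doc/"))
    return OSTREE_REPO_COMMIT_FILTER_SKIP;
  return OSTREE_REPO_COMMIT_FILTER_ALLOW;
}

static OstreeRepoCommitModifier *
make_import_modifier (void)
{
  /* The same modifier can also carry an xattr callback and SELinux
   * labeling policy via the corresponding setters. */
  return ostree_repo_commit_modifier_new (OSTREE_REPO_COMMIT_MODIFIER_FLAGS_NONE,
                                          skip_docs_filter, NULL, NULL);
}
```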
Closes: #275
Approved by: cgwalters
The filesystem commit code will never give us potentially hostile
filenames, and when importing from archives, we do some validation.
However, we should be extra paranoid and also add error messages in
the mtree in case someone tries to import a hostile
libarchive-supported format.
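For illustration only (not necessarily the exact checks added), the kind of validation this implies rejects path components an archive should never contain:

```c
#include <string.h>
#include <gio/gio.h>

/* Sketch of the defensive check: reject filename components that could
 * escape or alias directories when replayed into an mtree. */
static gboolean
validate_component (const char *name, GError **error)
{
  if (name[0] == '\0' ||
      strcmp (name, ".") == 0 ||
      strcmp (name, "..") == 0 ||
      strchr (name, '/') != NULL)
    {
      g_set_error (error, G_IO_ERROR, G_IO_ERROR_INVALID_FILENAME,
                   "Invalid filename component '%s'", name);
      return FALSE;
    }
  return TRUE;
}
```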
Closes: #283
Approved by: jlebon
We were arbitrarily only deleting content after exactly one day. Some
use cases may want something else; make it configurable.
Closes: #170
Approved by: jlebon
We had a policy of cleaning up all files in `$repo/tmp` older
than one day, but we should really clean up previous bootid staging
directories too, as they can potentially take up a lot of disk space.
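Roughly, the cleanup policy looks like this (a sketch; the real code is fd-relative and more careful, and the staging directory naming is abridged here):

```c
#include <string.h>
#include <glib.h>

/* Sketch of the cleanup policy for $repo/tmp: delete entries older than the
 * (now configurable) expiry, plus any staging-* directory whose embedded
 * boot id is not the current one, i.e. left over from a previous boot. */
static gboolean
should_prune_tmp_entry (const char *name,
                        guint64     mtime,
                        guint64     now,
                        guint64     expiry_seconds,
                        const char *current_bootid)
{
  if (g_str_has_prefix (name, "staging-") &&
      strstr (name, current_bootid) == NULL)
    return TRUE;  /* Staging dir from a previous boot */
  return now > mtime && (now - mtime) > expiry_seconds;
}
```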
https://bugzilla.gnome.org/show_bug.cgi?id=760531
Closes: #170
Approved by: jlebon
Setting this causes commit to error out. There are other ways we
could do this in a more sophisticated fashion, such as via SystemTap
etc. But this has low-tech applicability and works as non-root.
The reason I'm adding this is so that we can add test cases for
cleanup of the `tmp/staging-` directory.
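A sketch of such a hook; the environment variable name and value are written from memory here and should be treated as illustrative:

```c
#include <string.h>
#include <gio/gio.h>

/* Sketch of a low-tech failure-injection hook: if a test sets this
 * environment variable (name illustrative), commit bails out before
 * finishing, leaving the staging directory behind for the cleanup
 * tests to exercise. */
static gboolean
maybe_inject_test_error (GError **error)
{
  const char *test_error = g_getenv ("OSTREE_REPO_TEST_ERROR");
  if (test_error != NULL && strcmp (test_error, "pre-commit") == 0)
    {
      g_set_error (error, G_IO_ERROR, G_IO_ERROR_FAILED,
                   "OSTREE_REPO_TEST_ERROR=pre-commit specified");
      return FALSE;
    }
  return TRUE;
}
```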
Closes: #170
Approved by: jlebon
There was some leftover intermediate cruft here I noticed
while reviewing another patch:
- We had an output `GFile*` that was never used
- We required the caller to allocate the loose pathbuf, but
none of them ever reused it
- We had an extra intermediate function
Also while looking at this, I'm now uncertain whether some of the
callers of `_ostree_repo_has_loose_object` should really be invoking
`ostree_repo_has_object()`, but let's leave that aside for now.
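For context, a sketch of what the "has loose object" check boils down to:

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <gio/gio.h>

/* Sketch: a loose object lives at objects/<first 2 hex chars>/<rest>.<type>,
 * so "has loose object" is essentially an fstatat() of that path relative
 * to the repo fd. */
static gboolean
has_loose_object (int          repo_dfd,
                  const char  *checksum,   /* 64 hex chars */
                  const char  *typestr,    /* e.g. "commit", "file" */
                  gboolean    *out_exists,
                  GError     **error)
{
  g_autofree char *path =
    g_strdup_printf ("objects/%.2s/%s.%s", checksum, checksum + 2, typestr);
  struct stat stbuf;
  if (fstatat (repo_dfd, path, &stbuf, AT_SYMLINK_NOFOLLOW) < 0)
    {
      if (errno == ENOENT)
        {
          *out_exists = FALSE;
          return TRUE;
        }
      g_set_error (error, G_IO_ERROR, g_io_error_from_errno (errno),
                   "fstatat(%s): %s", path, g_strerror (errno));
      return FALSE;
    }
  *out_exists = TRUE;
  return TRUE;
}
```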
Closes: #272
Approved by: alexlarsson
In write_directory_content_to_mtree_internal, dfd_iter can be NULL,
for instance if committing from --tree=ref=FOO. Don't blindly
dereference it; doing so would crash.
Closes: #256
Approved by: cgwalters
In some cases (such as `ostree-sysroot-cleanup.c`), the surrounding
code would be substantially cleaner if it was also ported to
fd-relative, but I'm going to do that in a separate patch.
That way these patches are easier to review for mechanical
correctness. I used an Emacs keyboard macro as the poor man's
[Coccinelle](http://coccinelle.lip6.fr/).
⚠️ There is a notable spiked pit trap here around
`posix_fallocate()` and `errno`. This has bitten other projects,
see e.g.
7bb87460e6
Otherwise the port was straightforward.
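To spell out the trap: `posix_fallocate()` returns the error code directly rather than setting `errno`, so the usual `ret < 0` / `errno` pattern silently misreports failures. A sketch of correct handling:

```c
#include <fcntl.h>
#include <gio/gio.h>

/* Sketch: unlike most POSIX calls, posix_fallocate() does NOT set errno;
 * it returns 0 on success or the error number itself on failure. */
static gboolean
allocate_file_space (int fd, goffset size, GError **error)
{
  int r = posix_fallocate (fd, 0, size);
  if (r != 0)
    {
      /* Use the return value, not errno */
      g_set_error (error, G_IO_ERROR, g_io_error_from_errno (r),
                   "posix_fallocate: %s", g_strerror (r));
      return FALSE;
    }
  return TRUE;
}
```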
A fast way to generate new OSTree content using an existing
tree is to checkout (as hard links), add/replace files, then
call `ostree_repo_scan_hardlinks()`, then commit.
But `ostree_repo_scan_hardlinks()` scans the entire repo, which
can be slow if you have a lot of content.
All we really need is a mapping of (device,inode) -> checksum
just for the objects we checked out, then use that mapping
for commits.
This patch adds API so that callers can create a mapping via
`ostree_repo_devino_cache_new()`, then pass it to
`ostree_repo_checkout_tree_at()` which will populate it, and then
`ostree_repo_write_directory_to_mtree()` can consume it.
I plan to use this in rpm-ostree for package layering work.
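A rough sketch of the intended flow; the checkout-options field name and the commit-modifier setter below are written from memory and should be checked against the headers:

```c
#include <ostree.h>

/* Sketch of the workflow this API enables (error handling and cleanup of
 * the cache/modifier are elided). */
static gboolean
commit_from_hardlink_checkout (OstreeRepo    *repo,
                               const char    *base_commit,
                               int            dest_dfd,
                               GCancellable  *cancellable,
                               GError       **error)
{
  /* 1. Create the cache and have the checkout populate it */
  OstreeRepoDevInoCache *cache = ostree_repo_devino_cache_new ();
  OstreeRepoCheckoutOptions options = { 0, };
  options.devino_to_csum_cache = cache;  /* assumed field name */

  if (!ostree_repo_checkout_tree_at (repo, &options, dest_dfd, "checkout",
                                     base_commit, cancellable, error))
    return FALSE;

  /* ... add/replace files in the checkout here ... */

  /* 2. Hand the populated cache to the commit path so unchanged files are
   *    recognized by (device, inode) instead of being re-checksummed. */
  OstreeRepoCommitModifier *modifier =
    ostree_repo_commit_modifier_new (OSTREE_REPO_COMMIT_MODIFIER_FLAGS_NONE,
                                     NULL, NULL, NULL);
  ostree_repo_commit_modifier_set_devino_cache (modifier, cache);  /* assumed setter */

  /* 3. ostree_repo_write_directory_to_mtree() then consults the cache
   *    during commit (call elided here). */
  return TRUE;
}
```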
Notes:
- The old `ostree_repo_scan_hardlinks()` API still works.
- I tweaked the cache to be a set with the checksum colocated with
the key, to avoid a separate malloc block per entry.
https://github.com/GNOME/ostree/pull/167
Concurrent pulls break since we're sharing the staging directory for
all transactions in the repo. This makes us use a per-transaction directory.
However, in order for resumes to work we first look for existing
staging directories and try to acquire an exclusive lock for them. If
we can't find any staging directory or they are all already locked,
then we create a new one.
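The resume logic amounts to: scan for existing staging directories, try a non-blocking exclusive lock on each, and fall back to creating a fresh one. A sketch of the locking step:

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>
#include <gio/gio.h>

/* Sketch: try to take a non-blocking exclusive lock on an existing staging
 * directory; if another concurrent pull holds it, report that (so the
 * caller moves on and eventually creates a new staging dir). */
static gboolean
try_lock_staging_dir (int          tmp_dfd,
                      const char  *dirname,
                      int         *out_dfd,   /* -1 if already locked */
                      GError     **error)
{
  int dfd = openat (tmp_dfd, dirname, O_RDONLY | O_DIRECTORY | O_CLOEXEC);
  if (dfd < 0)
    {
      g_set_error (error, G_IO_ERROR, g_io_error_from_errno (errno),
                   "openat(%s): %s", dirname, g_strerror (errno));
      return FALSE;
    }
  if (flock (dfd, LOCK_EX | LOCK_NB) < 0)
    {
      int saved_errno = errno;
      (void) close (dfd);
      if (saved_errno == EWOULDBLOCK)
        {
          *out_dfd = -1;  /* Another transaction owns this staging dir */
          return TRUE;
        }
      g_set_error (error, G_IO_ERROR, g_io_error_from_errno (saved_errno),
                   "flock(%s): %s", dirname, g_strerror (saved_errno));
      return FALSE;
    }
  *out_dfd = dfd;
  return TRUE;
}
```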
https://bugzilla.gnome.org/show_bug.cgi?id=757611
ostree_repo_write_commit_with_time() converts the timestamp to
big-endian byte order.
ostree_repo_write_commit() was also doing this when calling
ostree_repo_write_commit_with_time(), resulting in a corrupted
commit object (timestamp bytes were backwards).
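In other words, the conversion to big-endian must happen exactly once, at serialization time. A sketch:

```c
#include <glib.h>

/* Sketch of the fix: only the innermost serialization step applies
 * GUINT64_TO_BE().  On a little-endian machine, converting in both places
 * cancels out, leaving native-order bytes on disk where big-endian is
 * expected, so the timestamp reads back byte-reversed. */
static guint64
serialize_timestamp (guint64 timestamp_native)
{
  return GUINT64_TO_BE (timestamp_native);  /* the only byteswap */
}

static guint64
write_commit_default_time (void)
{
  guint64 now = (guint64) g_get_real_time () / G_USEC_PER_SEC;
  /* Wrong (the regression): serialize_timestamp (GUINT64_TO_BE (now)); */
  return serialize_timestamp (now);
}
```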
Recent regression in 14ffd7022a
Do not delete a .commitmeta file after removing the last metadata entry.
This way a client will pull the empty .commitmeta file and overwrite old
metadata as expected.
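A sketch of the behavioral difference (names illustrative): write an empty metadata variant rather than unlinking the file.

```c
#include <gio/gio.h>

/* Sketch: when the last detached-metadata entry is removed, keep the
 * .commitmeta file with an empty a{sv} payload instead of unlinking it,
 * so a pulling client overwrites its stale copy. */
static gboolean
write_empty_commitmeta (const char *commitmeta_path, GError **error)
{
  g_autoptr(GVariant) empty =
    g_variant_ref_sink (g_variant_new_array (G_VARIANT_TYPE ("{sv}"), NULL, 0));
  return g_file_set_contents (commitmeta_path,
                              (const char *) g_variant_get_data (empty),
                              g_variant_get_size (empty),
                              error);
}
```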
https://bugzilla.gnome.org/750459
Also renames OstreeRepoTrustedContentBareCommit to
OstreeRepoContentBareCommit so that it can be used by both.
This will be needed when we introduce checksum verification of objects
in static deltas.
When a commit is deleted and the repo is configured to use tombstone
commits, create one. Delete the tombstone file only if the commit is
pulled again.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
This was originally a way to detect the case where a pull was
interrupted. Later, we added `.commitpartial` files, which also cover
this case.
See also https://github.com/GNOME/ostree/pull/85
We still want to honor their existence (and unlink them) in case an
old version of ostree was in use, but I believe it's safe to stop
creating them now.
The only case where this would break is if you have a version of
ostree that predates commitpartial in your rollback history, but such
old versions are no longer in use by the operating systems I support,
at least.
Closes: https://github.com/GNOME/ostree/pull/100
I was hitting a bug in libguestfs/guestmount/FUSE where it blew up
with EINVAL on directories containing lots of files (more than
32000?). We really want to use prefixed subdirs just like the real
objects/ directory does.
This allows us to share more code between the paths, is more
efficient, etc.
gnome-continuous uses the ostree_repo_scan_hardlinks() mode to
avoid re-checksumming everything. However, when I ported the commit
code to use openat() and friends, this optimization was lost.
Re-add it. The difference is about 15s versus 5 minutes.
When doing a pull --mirror from an archive-z2 repository into another
archive-z2 repository, currently we gunzip/checksum/gzip each content
object. The re-gzip process in particular is fairly expensive.
This does assume that the upstream content is trusted and correct.
It'd be nice in the future to do at least a CRC check, if not the full
checksum. (Could we append CRC data to the end of filez objects?)
We could also choose to only do this optimization if fetching over
TLS.
before: 1626 metadata, 20320 content objects fetched; 299634 KiB transferred in 62 seconds
after : 1626 metadata, 20320 content objects fetched; 299634 KiB transferred in 11 seconds