By default, unless it’s const, an (out) GHashTable will be assumed to be
(transfer full). That means the binding needs to free all the items in
the hash table, plus the table itself.
However, all the GHashTables we use have free functions set already, so
freeing the hash table will free its items. This results in a
double-free.
Fix that by ensuring we annotate such (out) hash tables as (transfer
container). Also annotate some other hash tables as (transfer none)
where appropriate, for clarity.
This fixes OSTree.Repo.list_collection_refs() in the Python bindings.
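As a sketch of the annotation form involved (the function and parameter
names here are illustrative, not the actual libostree API):

/**
 * example_list_refs:
 * @out_refs: (out) (transfer container) (element-type utf8 utf8): hash
 *    table mapping ref name to checksum; the table already owns its keys
 *    and values via its free functions, so the caller only unrefs the
 *    table itself
 */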
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1341
Approved by: dbnicholson
This reverts commit 519b30b7e1. Now that
the experimental GIR is being built correctly and OstreeRemote is a real
boxed type, this can be exposed again.
Closes: #1337
Approved by: pwithnall
Now that g-ir-scanner is being told about ENABLE_EXPERIMENTAL_API, it
can include these types correctly. Drop the __GI_SCANNER__ guards in the
header files so that all the declarations are found.
After this, you can actually construct the types normally:
>>> OSTree.CollectionRef.new('com.example.Foo', 'bar')
<OSTree.CollectionRef object at 0x7f2bba4c7528 (OstreeCollectionRef at 0x55c033ff2f30)>
Closes: #1337
Approved by: pwithnall
Without this, you can't really use OstreeRemote as a GObject, which is a
requirement for bindings.
This was found when attempting to include OstreeRemote in the GIR, and
g-ir-scanner wasn't able to link its temporary object due to an
"undefined reference to `ostree_remote_get_type'" error.
Closes: #1337
Approved by: pwithnall
I wanted to inspect a summary file the other day and was saddened to
find it was broken:
$ ostree summary --raw
error: No option specified; use -u to update summary
Fix the test to do the normal thing of passing just --raw without
--view. It's legal to pass --raw and --view, but it shouldn't be a
requirement.
Closes: #1336
Approved by: cgwalters
Fedora switched to 'xz'-compressed kernel modules, and recently
[RHEL7 did too](https://bugzilla.redhat.com/show_bug.cgi?id=1367496).
This compression defeats bsdiff.
While we have a "rollsum-able" test, we don't have a "bsdiff-able" test as it'd
be very expensive (we'd have to bsdiff, then apply it and compare the result).
Let's do the tactical quick fix here and just not try to delta files ending in
`.xz`. This avoids pointlessly running bsdiff on over 4000 files, which is
quite a notable speed increase for generating deltas.
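A minimal sketch of the kind of filename check described above (the
function name is illustrative; the real delta compiler wires this in
elsewhere):

#include <glib.h>

/* Skip bsdiff for xz-compressed payloads, which it can't usefully diff. */
static gboolean
bsdiff_worthwhile (const char *filename)
{
  return !g_str_has_suffix (filename, ".xz");
}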
Closes: #1333
Approved by: jlebon
This allows more explicit handling of commit state in code using
libostree, rather than hard-coding a commit state of 0 for ‘normal’.
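A hedged sketch of how a caller might consume the state, assuming the new
'normal' value is named OSTREE_REPO_COMMIT_STATE_NORMAL (check the actual
header; the helper below is illustrative):

#include <ostree.h>

static gboolean
print_commit_state (OstreeRepo *repo, const char *checksum, GError **error)
{
  g_autoptr(GVariant) commit = NULL;
  OstreeRepoCommitState state;

  if (!ostree_repo_load_commit (repo, checksum, &commit, &state, error))
    return FALSE;

  /* Assumes OSTREE_REPO_COMMIT_STATE_NORMAL is the new zero value. */
  g_print ("%s: %s\n", checksum,
           state == OSTREE_REPO_COMMIT_STATE_NORMAL ? "normal" : "partial");
  return TRUE;
}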
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1335
Approved by: cgwalters
Reorder cleanup functions so that curl_multi_cleanup() runs before
self->sockets is destroyed. This avoids an assertion failure and invalid
memory access in sock_cb, where self->sockets is dereferenced during
curl_multi_cleanup().
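A sketch of the corrected teardown order, with illustrative type and field
names rather than the real ostree-fetcher ones:

#include <curl/curl.h>
#include <glib.h>

typedef struct {
  CURLM      *multi;    /* curl multi handle */
  GHashTable *sockets;  /* looked up by the socket callback */
} Fetcher;

static void
fetcher_finalize (Fetcher *self)
{
  /* curl_multi_cleanup() can still invoke the socket callback, which
   * dereferences self->sockets, so tear down the multi handle first... */
  curl_multi_cleanup (self->multi);
  /* ...and only destroy the table afterwards. */
  g_hash_table_unref (self->sockets);
}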
Closes: https://github.com/ostreedev/ostree/issues/1331
Closes: #1332
Approved by: cgwalters
A tricky thing here that let this slip past a lot of our tests
is that the code was mostly OK if there was an available delta from
an older commit. But this case broke if we e.g. had a new OS
deployment and did a `--require-static-deltas` pull, i.e. the initial
state.
I cleaned up our "find static delta state" function to return an enumeration,
and extended it with an "already have the commit" state. A problem
I then hit is that we've historically fetched detached metadata for
non-delta pulls, even if the commit hasn't changed. I decided not to
do that for `--require-static-deltas` pulls for now; otherwise the
code gets notably more complex.
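A hypothetical shape of that enumeration (names here are illustrative,
not the ones actually used in the pull code):

typedef enum {
  DELTA_SEARCH_RESULT_NO_MATCH,            /* fall back to an object pull */
  DELTA_SEARCH_RESULT_FROM_SCRATCH,        /* use an "empty → commit" delta */
  DELTA_SEARCH_RESULT_FROM_OLDER_COMMIT,   /* delta from a previous commit */
  DELTA_SEARCH_RESULT_ALREADY_HAVE_COMMIT, /* the new state added here */
} DeltaSearchResult;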
Closes: https://github.com/ostreedev/ostree/issues/1321
Closes: #1323
Approved by: jlebon
The main thing here is that a ton of stuff has happened in gnulib since we
imported `parse-datetime.y`. I cherry-picked a little bit of it, but upstream
doesn't seem to build with `-Wundef`, so I just deleted some hunks.
(Note I reindented the warnings consistently.)
Update submodule: libglnx
Closes: #1320
Approved by: jlebon
Creating the symlink will cause make to try to build rofiles-fuse, which
will fail if it's disabled. Normally I wouldn't disable rofiles-fuse,
but it's triggering a hang in splice() in our ARM Xenial builder's kernel.
I'm sure that's fuse's fault, but for now I just need to disable
rofiles-fuse there and found --disable-rofiles-fuse didn't actually
work.
Closes: #1325
Approved by: cgwalters
Since ostree_remote_get_type is not made available to g-ir-scanner, it
treats OstreeRemote as a bare struct. That's not kosher for bindings and
it issues the following warning:
src/libostree/ostree-repo-pull.c:5560: Warning: OSTree:
ostree_repo_resolve_keyring_for_collection: return value: Invalid
non-constant return of bare structure or union; register as boxed type
or (skip)
For now, just skip this API for bindings.
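The skip is the standard introspection annotation on the doc comment;
roughly this form (a sketch, not the exact comment):

/**
 * ostree_repo_resolve_keyring_for_collection: (skip)
 *
 * Hidden from introspection for now, until OstreeRemote is registered as
 * a boxed type.
 */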
Closes: #1322
Approved by: pwithnall
g-ir-scanner was spitting this warning:
src/libostree/ostree-core.c:281: Warning: OSTree:
ostree_validate_collection_id: unknown parameter 'rev' in
documentation comment, should be 'collection_id'
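The fix is just to name the parameter in the doc comment as it appears in
the declaration; roughly (a sketch, not the exact comment):

/**
 * ostree_validate_collection_id:
 * @collection_id: a collection ID to validate
 * @error: return location for a #GError, or %NULL
 */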
Closes: #1322
Approved by: pwithnall
When compiling libostree, OSTREE_ENABLE_EXPERIMENTAL_API is managed via
config.h. However, g-ir-scanner can't use that since it gets confused
about the namespace of all the random macros. It won't include the
experimental APIs unless the macro is defined by some other means.
Without this, none of the experimental APIs were being included in the
gir data.
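A sketch of the guard pattern at play (the declaration is illustrative):
unless the macro is defined on the scanner's own compile flags, everything
inside the guard is invisible to it.

#ifdef OSTREE_ENABLE_EXPERIMENTAL_API
/* Only seen by g-ir-scanner if OSTREE_ENABLE_EXPERIMENTAL_API is passed
 * to it explicitly, since it does not read config.h. */
_OSTREE_PUBLIC
gboolean ostree_example_experimental_api (void);
#endif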
Closes: #1322
Approved by: pwithnall
ostree-enumtypes.c includes ostree-enumtypes.h, so make needs to be told
about the dependency. Without it, parallel make could try to build
ostree-enumtypes.c before the header file exists.
I hit this when running `make -j OSTree-1.0.gir`.
Closes: #1322
Approved by: pwithnall
It's the slowest part, so let's show admins something. This "update every 10%" code
was copied from the fsck command; obviously a better approach would be "progress
every N seconds" but doing that somewhat accurately requires making things
async; not worth it here yet.
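A minimal sketch of that "every 10%" throttling (illustrative names, not
the actual command code):

#include <glib.h>

/* Print progress only when we cross a 10% boundary. */
static void
maybe_report_progress (guint i, guint total, guint *last_percent)
{
  guint percent = (total > 0) ? (i * 100 / total) : 100;

  if (percent / 10 != *last_percent / 10)
    {
      g_print ("%u%% complete\n", percent);
      *last_percent = percent;
    }
}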
Closes: #1314
Approved by: jlebon
We never ended up using this, and I'm going to revisit this in another patch
with a different approach that has useful *content* and not just a lot of files.
Closes: #1314
Approved by: jlebon
I didn't fully spelunk this, but from what `static-delta-generate-crosscheck.sh`
had, we appeared to be doing this before, and it's clearly useful for local
testing rather than needing to spin up an HTTP server.
Closes: #1313
Approved by: jlebon
First, the manual crosscheck script bitrotted; it got caught up
in the "use libtest repo creation wrapper" bit, and also it
seems like at some point `pull --require-static-deltas` changed
meaning when dealing with `file:///` repos. I have more work to
unwind that.
Next, I'm seeing a delta failure which looks like a static delta
miscompilation with rollsums; change the compiler to print out
the source object too, which helped me debug this.
And finally in the processing code, fix incorrect error prefixing, which was
misleading.
Closes: #1311
Approved by: ashcrow
Fixes: 93457071cb "lib/deltas: Use pread() instead of lseek()+read()"
Caught this when trying to test Alex's patch locally. I am going to review our
static delta pulls and try to get something more comprehensive locally. But in
the meantime this patch is clearly right.
Closes: #1312
Approved by: jlebon
As soon as we allocate a new part, we finish the old one, writing the
compressed data to a temporary file and generating the delta header for it.
When all the parts are done, we loop over them, collect the headers and
sizes, and either copy the tempfile data into the inlined superblock or
link the tempfiles into place on disk with the proper names.
Closes: #1309
Approved by: cgwalters
This allows us to create the final delta descriptor directly on disk
rather than having it all in memory. This is nice because it can
become quite large if inlined parts are used.
Note, however, that we currently generate all the delta parts in
memory before adding them to the delta, so we still keep all individual
parts in memory. Fixing that is the next step.
Closes: #1309
Approved by: cgwalters
This is similar to GVariantBuilder in that it constructs variant
containers, but it writes the result directly to a file descriptor rather
than keeping the entire thing in memory. This is useful for creating large
variants without using a lot of memory.
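For contrast, the stock in-memory pattern with GVariantBuilder (plain
GLib, not the new writer) keeps the whole serialized result around until
it is written out:

#include <glib.h>

static GVariant *
build_metadata_in_memory (guint64 size)
{
  g_autoptr(GVariantBuilder) builder =
      g_variant_builder_new (G_VARIANT_TYPE ("a{sv}"));

  g_variant_builder_add (builder, "{sv}", "size",
                         g_variant_new_uint64 (size));

  /* The finished variant lives entirely in memory; writing it to a file
   * only happens afterwards, e.g. via g_variant_get_data(). */
  return g_variant_ref_sink (g_variant_builder_end (builder));
}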
Closes: #1309
Approved by: cgwalters