As detailed in
https://gitlab.gnome.org/GNOME/glib/-/issues/600#note_877282, volatile
isn't actually needed in these contexts because the atomic operations
already give us strong enough guarantees. In GCC 11, this triggers a
diagnostic due to the volatile qualifier getting dropped anyway.
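As a purely illustrative sketch (not an exact hunk from this change), the pattern being applied looks like this:

```c
#include <glib.h>

/* The guard no longer needs "volatile": g_once_init_enter() and
 * g_once_init_leave() already provide the required atomic ordering,
 * and GCC 11 warns when the qualifier gets dropped anyway. */
static gsize initialized = 0;  /* previously: static volatile gsize initialized = 0; */

static void
ensure_initialized (void)
{
  if (g_once_init_enter (&initialized))
    {
      /* ... one-time setup ... */
      g_once_init_leave (&initialized, 1);
    }
}
```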
There is a WIP to do the same in glib:
https://gitlab.gnome.org/GNOME/glib/-/merge_requests/1719
This obsoletes this downstream patch:
https://src.fedoraproject.org/rpms/ostree/c/b8c5a6fb
Add support in the soup and curl fetchers to send the `If-None-Match`
and `If-Modified-Since` request headers, and pass on the `ETag` and
`Last-Modified` response headers.
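As a rough sketch of the conditional-request pattern on the libcurl side (the function and ETag value below are illustrative, not the actual fetcher code):

```c
#include <curl/curl.h>

/* Resend the validators cached from a previous response; HTTP 304
 * means "not modified", so the cached copy can be reused. */
static int
fetch_if_changed (CURL *handle, const char *url)
{
  struct curl_slist *headers = NULL;
  long status = 0;

  headers = curl_slist_append (headers, "If-None-Match: \"abc123\"");
  headers = curl_slist_append (headers, "If-Modified-Since: Tue, 02 Feb 2021 10:00:00 GMT");

  curl_easy_setopt (handle, CURLOPT_URL, url);
  curl_easy_setopt (handle, CURLOPT_HTTPHEADER, headers);

  if (curl_easy_perform (handle) == CURLE_OK)
    curl_easy_getinfo (handle, CURLINFO_RESPONSE_CODE, &status);

  curl_slist_free_all (headers);
  return status == 304 ? 0 /* reuse cached copy */ : 1 /* refetch */;
}
```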
This currently introduces no functional changes, but once call sites
provide the appropriate integration, this will allow HTTP caching to
happen for requests (typically metadata requests, where the data is
not immutable because it is not content-addressed). That should reduce
bandwidth requirements.
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
On at least one user's computer, g_getenv("http_proxy") returns the
empty string, so check for that and treat it as no proxy rather than
printing a warning.
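Roughly, the check amounts to this (a minimal sketch, not the actual fetcher-construction code):

```c
#include <glib.h>

/* Treat an unset *or* empty http_proxy as "no proxy configured",
 * instead of warning about it. */
static const char *
get_http_proxy (void)
{
  const char *http_proxy = g_getenv ("http_proxy");
  if (http_proxy == NULL || *http_proxy == '\0')
    return NULL;
  return http_proxy;
}
```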
See https://github.com/flatpak/flatpak/issues/2790
Closes: #1835
Approved by: cgwalters
Use the same G_IO_ERROR_* values for HTTP status codes in both fetchers.
The libsoup fetcher still handles a few more internal error codes than
the libcurl one; this could be built on in future.
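Purely as an illustration of the shared mapping idea (the exact set of status codes each fetcher translates may differ from this):

```c
#include <gio/gio.h>

/* Map a handful of HTTP status codes onto GIOErrorEnum values so both
 * fetchers report the same GError codes to callers. */
static GIOErrorEnum
io_error_for_http_status (guint status)
{
  switch (status)
    {
    case 403:
    case 404:
    case 410:
      return G_IO_ERROR_NOT_FOUND;
    case 408:
      return G_IO_ERROR_TIMED_OUT;
    default:
      return G_IO_ERROR_FAILED;
    }
}
```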
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1594
Approved by: jlebon
This allows the retry code in ostree-repo-pull.c to recover from (for
example) timeouts at the libsoup layer of the stack, as well as from
the GSocket layer.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1594
Approved by: jlebon
We already have `http-headers`, which could potentially be used to let
clients completely override the field, but it seems like the more
common use case is simply to append.
Closes: #1496
Approved by: cgwalters
Since f4d1334e19 the primary pull code maintains a queue with a maximum
size. In that commit message I said `Note that I kept an assertion.`
But I think that was wrong: while the queue covers a lot of the normal
cases, if one is e.g. trying to fetch a ton of refs, the primary pull
code doesn't yet queue those. While it'd be nice to queue those too, it
isn't worth carrying extra assertions in the backends that can still
trigger.
Closes: https://github.com/ostreedev/ostree/issues/1451
Closes: #1453
Approved by: dbnicholson
The SPDX License List is a list of (common) open source licenses that
can be referred to by a “short identifier”.
It has several advantages compared to the common "license header texts"
usually found in source files.
Some of the advantages:
* It is precise; there is no ambiguity due to variations in license header
text
* It is language neutral
* It is easy to machine process
* It is concise
* It is simple and can be used without much cost in interpreted
environments like JavaScript, etc.
* An SPDX license identifier is immutable.
* It provides simple guidance for developers who want to make sure the
license for their code is respected
See http://spdx.org for further reading.
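In practice a source file then carries a single machine-readable comment near the top, for example (the identifier shown here is only illustrative):

```c
/* SPDX-License-Identifier: LGPL-2.0+ */
```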
Signed-off-by: Marcus Folkesson <marcus.folkesson@gmail.com>
Closes: #1439
Approved by: cgwalters
A lot of the libostree code is honestly too complex for its own good
(this is mostly my fault), and the way we do HTTP writes is still one
of those places: the fetcher writes tempfiles, then reads them back in.
Now that we've dropped the "partial object" bits in:
https://github.com/ostreedev/ostree/pull/1176 i.e. commit
0488b4870e
we can simplify things a lot more by having the fetcher
return an `O_TMPFILE` rather than a filename.
For trusted archive mirroring, we need to enable linking
in the tmpfiles directly.
Otherwise, content objects at least are compressed, so we couldn't
link them in. For metadata, we need logic similar to what we have
around `mmap()`, only grabbing a tmpfile if the size is large enough.
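A rough sketch of the O_TMPFILE pattern, with illustrative paths and names rather than what the fetcher actually uses:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Create an anonymous file in tmpdir, write the fetched data into it,
 * and only give it a name via linkat() if we decide to keep it
 * (e.g. when mirroring from a trusted archive repo). */
static int
fetch_to_tmpfile (const char *tmpdir, const char *final_name)
{
  int fd = open (tmpdir, O_TMPFILE | O_RDWR | O_CLOEXEC, 0644);
  if (fd < 0)
    return -1;

  /* ... write the HTTP response body into fd ... */

  char proc_path[64];
  snprintf (proc_path, sizeof proc_path, "/proc/self/fd/%d", fd);
  if (linkat (AT_FDCWD, proc_path, AT_FDCWD, final_name, AT_SYMLINK_FOLLOW) < 0)
    {
      close (fd);
      return -1;
    }
  return fd;
}
```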
Closes: #1252
Approved by: jlebon
We added a `.dir-locals.el` in commit 9a77017d87. There's no need to
have this configuration per-file; keeping per-file settings might also
lead people to add other editors' configuration, which is the wrong
direction.
Closes: #1206
Approved by: jlebon
Doing this in prep for libglnx tmpdir porting, but I think we should also do
this because the partial fetch code IMO was never fully baked; among other
things it was never integrated into the scheme we came up with for "boot id
sync" that we use for complete/staged objects.
There's a lot of complexity here that, while we do have some test
coverage for it, distracts from the core functionality we need to
refocus on. The libcurl backend doesn't have an equivalent to this
today.
In particular for small objects, this is simply overly complex. The downside is
clearly for large objects like FAH's 61MB initramfs; not being able to resume
fetches of those is unfortunate.
In practice though, I think most people should be using deltas, and we need to
make sure deltas work for large objects anyways.
Further, the peer-to-peer work should ultimately help a lot for people
with truly unreliable connections.
Closes: #1176
Approved by: jlebon
Part of cleaning up our usage of libglnx; we want to use what's in GLib where we
can.
Had to change a few .c files to `#include ostree.h` early on to pick up
autoptrs for the core types.
Closes: #1040
Approved by: jlebon
Using error prefixing in the delta processing allows us to use the new
code style. Also strip trailing whitespace.
Use error prefixing in a few other random places. I didn't
hunt for all of them, just testing out the new API.
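For reference, the style is roughly this (a sketch using GLib's `g_prefix_error()`; `process_part()` is a hypothetical callee):

```c
#include <glib.h>

static gboolean process_part (GBytes *part, GError **error);  /* hypothetical */

/* On failure, prepend context to the propagated GError instead of
 * constructing a new message by hand at every call site. */
static gboolean
apply_delta_part (GBytes *part, GError **error)
{
  if (!process_part (part, error))
    {
      g_prefix_error (error, "Processing delta part: ");
      return FALSE;
    }
  return TRUE;
}
```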
Use `glnx_fchmod()`. Also note I dropped one `fchmod (tmpf, 0600)`
which is no longer necessary.
Update submodule: libglnx
Closes: #1011
Approved by: jlebon
Currently in Fedora we don't sign summaries, and every use of
`rpm-ostree` would emit an error to the journal when we failed to
fetch one.
Fix this by having `OSTREE_FETCHER_REQUEST_OPTIONAL_CONTENT` tell the fetcher
not to journal 404 errors. While fixing this, we found a mix of two
booleans versus the flags; make the fetcher and pull code consistently
use the flags.
Closes: #1004
Approved by: mbarnes
Use the new macros introduced recently in libglnx to make iterating over
hash tables cleaner. This is just a start, it does not migrate the whole
tree.
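For comparison, the plain GLib pattern these macros wrap looks like this (sketch only; see libglnx's glnx-macros.h for the actual macro names and arguments):

```c
#include <glib.h>

static void
walk_table (GHashTable *table)
{
  GHashTableIter iter;
  gpointer key, value;

  g_hash_table_iter_init (&iter, table);
  while (g_hash_table_iter_next (&iter, &key, &value))
    {
      const char *checksum = key;   /* whatever the table's keys are */
      (void) checksum;
      /* ... use key/value ... */
    }
}
```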
Update submodule: libglnx
Closes: #971
Approved by: cgwalters
The summary file can get large, but it compresses well (unlike most
other files in the ostree repo, which are already compressed). By
sending Accept-Encoding: gzip (and handling the compressed results) we
transfer a lot less data.
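Sketch of how the libcurl side can opt in (not the fetcher code itself; the libsoup fetcher relies on its built-in content decoding instead):

```c
#include <curl/curl.h>

/* CURLOPT_ACCEPT_ENCODING both sends "Accept-Encoding: gzip" and
 * transparently decompresses the response body. */
static void
fetch_summary_compressed (const char *url)
{
  CURL *handle = curl_easy_init ();
  curl_easy_setopt (handle, CURLOPT_URL, url);
  curl_easy_setopt (handle, CURLOPT_ACCEPT_ENCODING, "gzip");
  curl_easy_perform (handle);
  curl_easy_cleanup (handle);
}
```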
I set up the flathub repo (http://flathub.org/repo) to enable
gzip for the summary file (only), and the result is that the
331514-byte summary was transferred in 122889 bytes.
On my (fast) network this decreased the time it took to do
"flatpak remote-ls flathub" by about 100 ms.
This fixes https://github.com/ostreedev/ostree/issues/802
Closes: #882
Approved by: cgwalters
It was reported that in the range request handling, we called `remove_pending()`
twice (once in processing it, and once potentially in the local_error cleanup),
and this could be viewed as a use-after-free. However, right now the
range cleanup and `local_error` being set are mutually exclusive.
Further, the task object already holds a strong reference, so I observed the
refcount was 2. For both of these reasons, there is no use-after-free in
practice.
Reported-By: "Siddharth Sharma" <siddharth@redhat.com>
Closes: #774
Approved by: jlebon
Particularly when HTTP requests fail, I really want a lot more information.
We could theoretically stuff it into the `GError` message field, but
that gets ugly *fast*.
Using the systemd journal allows us to log things in a structured fashion.
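For example, a structured entry for a failed fetch could look roughly like this (the field names here are made up for illustration, not the ones ostree emits):

```c
#include <systemd/sd-journal.h>
#include <syslog.h>

/* Each KEY=value pair becomes a separate journal field, queryable with
 * journalctl, instead of being crammed into one error string. */
static void
log_fetch_failure (const char *url, unsigned status)
{
  sd_journal_send ("MESSAGE=Failed to fetch %s", url,
                   "PRIORITY=%d", LOG_ERR,
                   "HTTP_URL=%s", url,
                   "HTTP_STATUS=%u", status,
                   NULL);
}
```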
Right now e.g. rpm-ostree won't be aware of this additional information,
but I think we could teach it to be down the line.
In the short term, users can learn to find it from `systemctl status rpm-ostreed`
or `journalctl -b -r -u rpm-ostreed`, etc.
One thing I'd like to do next is also log successful fetches of e.g.
commit objects, with more information about the originating server
(things like the final URL if we were redirected, whether we used TLS
pinning, the negotiated TLS version and cipher, etc.).
Closes: #708
Approved by: jlebon
For rpm-ostree, we already link to libcurl indirectly via librepo, and
only having one HTTP library in process makes sense.
Further, libcurl is (I think) more popular in the embedded space. It
also supports HTTP/2 today, which is a *very* nice thing to have for
OSTree.
This seems to be working fairly well for me in my local testing, but it's
obviously brand new nontrivial code, so it's going to need some soak time.
The ugliest part of this is having to vendor in the soup-url code. With
Oxidation we could follow the path of Firefox and use the
[Servo URL parser](https://github.com/servo/rust-url). Having to redo
cookie parsing also sucked, and that would also be a good oxidation target.
But that's for the future.
Closes: #641
Approved by: jlebon