The timeout timer should always be one-shot, so let's just always
destroy it in the callback. The main context has its own ref on it, so
it won't be freed behind its back.
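Concretely, the shape is something like this (a sketch; `Request` and
its `timeout_source` field are illustrative, not the real fetcher
names):
```
#include <glib.h>

typedef struct {
  GSource *timeout_source;  /* illustrative per-request state */
} Request;

static gboolean
timeout_cb (gpointer user_data)
{
  Request *req = user_data;

  /* ...handle the timeout... */

  /* One-shot: always tear the source down here, rather than relying
   * on every other code path to remember to.  The GMainContext holds
   * its own reference, so destroying ours is safe. */
  g_source_destroy (req->timeout_source);
  g_source_unref (req->timeout_source);
  req->timeout_source = NULL;

  return G_SOURCE_REMOVE;
}
```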
This *should* fix a leak that was brought up in
https://bugzilla.redhat.com/show_bug.cgi?id=1891761.
Reported-by: Milan Crha <mcrha@redhat.com>
This makes it testable, and increases its test coverage to 100% of
lines, as measured by `make coverage`.
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
Add support in the soup and curl fetchers to send the `If-None-Match`
and `If-Modified-Since` request headers, and pass on the `ETag` and
`Last-Modified` response headers.
This currently introduces no functional changes, but once call sites
provide the appropriate integration, this will allow HTTP caching to
happen with requests (typically metadata requests, where the data is
mutable because it is not content-addressed). That should reduce
bandwidth requirements.
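For illustration, the request side would look something like this (a
sketch assuming libsoup 2, with `etag` and `last_modified` cached from
an earlier response):
```
#include <libsoup/soup.h>

static void
add_conditional_headers (SoupMessage *msg,
                         const char  *etag,
                         const char  *last_modified)
{
  /* If the server still has the same entity, it can answer
   * 304 Not Modified with an empty body. */
  if (etag != NULL)
    soup_message_headers_append (msg->request_headers,
                                 "If-None-Match", etag);
  if (last_modified != NULL)
    soup_message_headers_append (msg->request_headers,
                                 "If-Modified-Since", last_modified);
}
```
On the response side, the new `ETag` and `Last-Modified` values get
saved for the next request.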
Signed-off-by: Philip Withnall <pwithnall@endlessos.org>
We have a `http2=[0|1]` remote config option; let's have the
`--disable-http2` build option define the default for that. This way
it's still easy to enable HTTP/2 for testing even if it's disabled by
default.
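The mechanism is roughly this (a sketch; `BUILDOPT_HTTP2` is a
hypothetical define standing in for whatever the build system emits,
and the remote's `http2` key still wins when present):
```
#include <glib.h>

#ifdef BUILDOPT_HTTP2
#define HTTP2_DEFAULT TRUE
#else
#define HTTP2_DEFAULT FALSE
#endif

static gboolean
remote_http2_enabled (GKeyFile *config, const char *remote_group)
{
  g_autoptr(GError) local_error = NULL;
  gboolean v = g_key_file_get_boolean (config, remote_group, "http2",
                                       &local_error);
  /* Key absent (or malformed): fall back to the build-time default. */
  if (local_error != NULL)
    return HTTP2_DEFAULT;
  return v;
}
```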
Closes: #1798
Approved by: jlebon
Just include the whole URL when libcurl fails with something
elementary like CURLE_COULDNT_CONNECT or CURLE_COULDNT_RESOLVE_HOST.
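Something like this (a sketch of the error construction, not the exact
fetcher code):
```
#include <curl/curl.h>
#include <gio/gio.h>

static void
set_fetch_error (GError **error, CURLcode res, const char *url)
{
  /* Only connection-level failures get the URL included; other
   * errors already tend to carry enough context. */
  if (res == CURLE_COULDNT_CONNECT || res == CURLE_COULDNT_RESOLVE_HOST)
    g_set_error (error, G_IO_ERROR, G_IO_ERROR_FAILED,
                 "While fetching %s: [%d] %s", url, (int) res,
                 curl_easy_strerror (res));
  else
    g_set_error (error, G_IO_ERROR, G_IO_ERROR_FAILED,
                 "[%d] %s", (int) res, curl_easy_strerror (res));
}
```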
Closes: #1731
Closes: #1732
Approved by: cgwalters
Use the same G_IO_ERROR_* values for HTTP status codes in both fetchers.
The libsoup fetcher still handles a few more internal error codes than
the libcurl one; this could be built on in future.
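The shared mapping looks roughly like this (illustrative; the real
tables cover more codes):
```
#include <gio/gio.h>

static GIOErrorEnum
io_error_for_http_status (guint status)
{
  switch (status)
    {
    /* Anything meaning "this object isn't there" maps to NOT_FOUND,
     * so callers can treat both fetchers identically. */
    case 403:
    case 404:
    case 410:
      return G_IO_ERROR_NOT_FOUND;
    case 408:
      return G_IO_ERROR_TIMED_OUT;
    default:
      return G_IO_ERROR_FAILED;
    }
}
```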
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1594
Approved by: jlebon
We do already have `http-headers`, which potentially could be used to
allow clients to completely override the field, but it seems like the
more common use case is simply to append.
Closes: #1496
Approved by: cgwalters
Since f4d1334e19 the primary pull code maintains a
maximum queue. In that commit message I said `Note that I kept an assertion.`
But I think this was wrong: while the queue covers a lot of the normal cases,
if one is e.g. trying to fetch a ton of refs, the primary pull code doesn't
yet queue those. While it'd be nice to queue those, it isn't worth carrying
extra assertions in the backends that can still trigger.
Closes: https://github.com/ostreedev/ostree/issues/1451
Closes: #1453
Approved by: dbnicholson
The SPDX License List is a list of (common) open source
licenses that can be referred to by a “short identifier”.
It has several advantages compared to the common "license header texts"
usually found in source files.
Some of the advantages:
* It is precise; there is no ambiguity due to variations in license header
text
* It is language neutral
* It is easy to machine process
* It is concise
* It is simple and can be used without much cost in interpreted
environments like JavaScript, etc.
* An SPDX license identifier is immutable.
* It provides simple guidance for developers who want to make sure the
license for their code is respected
See http://spdx.org for further reading.
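For instance, in place of a dozen lines of license boilerplate, a C
source file carries a single machine-readable comment (using ostree's
license as the example):
```
/* SPDX-License-Identifier: LGPL-2.0+ */
```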
Signed-off-by: Marcus Folkesson <marcus.folkesson@gmail.com>
Closes: #1439
Approved by: cgwalters
This seems to work around
https://github.com/ostreedev/ostree/issues/1362
though I'm not entirely sure why yet. But at least with this it'll be
easier for people to work around things locally.
Closes: #1368
Approved by: jlebon
They currently don't play nicely with HTTP/2, where we may
have lots of requests queued.
https://github.com/ostreedev/ostree/issues/878#issuecomment-347228854
In practice anyway, I think issues here are better solved at a higher level;
e.g. apps today can use an overall timeout on pulls and, if they exceed the
limit, set the cancellable.
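That higher-level approach looks roughly like this (a sketch;
`app_do_pull()` is a hypothetical wrapper around the pull API):
```
#include <gio/gio.h>

/* Hypothetical application-level pull wrapper. */
extern gboolean app_do_pull (GCancellable *cancellable, GError **error);

static gboolean
cancel_cb (gpointer user_data)
{
  g_cancellable_cancel (G_CANCELLABLE (user_data));
  return G_SOURCE_REMOVE;
}

static gboolean
pull_with_deadline (guint deadline_sec, GError **error)
{
  g_autoptr(GCancellable) cancellable = g_cancellable_new ();
  guint id = g_timeout_add_seconds (deadline_sec, cancel_cb, cancellable);
  gboolean ret = app_do_pull (cancellable, error);
  /* If the deadline fired, the source already removed itself. */
  if (!g_cancellable_is_cancelled (cancellable))
    g_source_remove (id);
  return ret;
}
```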
Closes: #1349
Approved by: jlebon
Reorder cleanup functions so that curl_multi_cleanup() runs before
self->sockets is destroyed. This avoids an assert and invalid memory
access in sock_cb where self->sockets is dereferenced during
curl_multi_cleanup().
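Sketch of the corrected teardown order (field names here are
illustrative):
```
#include <curl/curl.h>
#include <glib.h>

typedef struct {
  CURLM      *multi;
  GHashTable *sockets;  /* fd -> per-socket state */
} Fetcher;

static void
fetcher_finalize (Fetcher *self)
{
  /* curl_multi_cleanup() can still fire the socket callback, which
   * dereferences self->sockets, so destroy the table only afterwards. */
  curl_multi_cleanup (self->multi);
  g_clear_pointer (&self->sockets, g_hash_table_unref);
}
```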
Closes: https://github.com/ostreedev/ostree/issues/1331
Closes: #1332
Approved by: cgwalters
A lot of the libostree code is honestly too complex for its
own good (this is mostly my fault), and the way we do HTTP writes
is still one of those areas. The way the fetcher writes tempfiles,
then reads them back in, is a prime example.
Now that we've dropped the "partial object" bits in:
https://github.com/ostreedev/ostree/pull/1176 i.e. commit
0488b4870e
we can simplify things a lot more by having the fetcher
return an `O_TMPFILE` rather than a filename.
For trusted archive mirroring, we need to enable linking
in the tmpfiles directly.
Otherwise, at least content objects are compressed, so we couldn't link
them in. For metadata, we need logic similar to what we have around
`mmap()`, only grabbing a tmpfile if the size is large enough.
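The fetcher side then looks roughly like this (a sketch assuming
libglnx's `GLnxTmpfile` helpers):
```
#include <fcntl.h>
#include <libglnx.h>

static gboolean
fetch_to_tmpfile (int dfd, GLnxTmpfile *out_tmpf, GError **error)
{
  /* Opens an O_TMPFILE-backed (or unlinked-fallback) anonymous file
   * that can later be linked into place. */
  if (!glnx_open_tmpfile_linkable_at (dfd, ".", O_WRONLY | O_CLOEXEC,
                                      out_tmpf, error))
    return FALSE;
  /* ...write the HTTP response body to out_tmpf->fd... */
  return TRUE;
}
```
For the trusted-mirroring case, `glnx_link_tmpfile_at()` can then
materialize the file directly into the target repo.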
Closes: #1252
Approved by: jlebon
We added a `.dir-locals.el` in commit: 9a77017d87
There's no need to have it per-file; with per-file settings, people might
think to add other editors' settings, which is the wrong direction.
Closes: #1206
Approved by: jlebon
Doing this in prep for libglnx tmpdir porting, but I think we should also do
this because the partial fetch code IMO was never fully baked; among other
things it was never integrated into the scheme we came up with for "boot id
sync" that we use for complete/staged objects.
There's a lot of complexity here; while we have some test coverage for it,
I think we need to refocus on the core functionality. The libcurl backend
doesn't have an equivalent to this today.
In particular for small objects, this is simply overly complex. The downside is
clearly for large objects like FAH's 61MB initramfs; not being able to resume
fetches of those is unfortunate.
In practice though, I think most people should be using deltas, and we need to
make sure deltas work for large objects anyways.
Further, the peer-to-peer work should ultimately help a lot for people
with truly unreliable connections.
Closes: #1176
Approved by: jlebon
It looks like `curl_multi_socket_action()` will return an error
if *one* of the requests has an error, but we already check
for that explicitly by iterating over each handle.
In libcurl, the "easy" layer doesn't really make use of this
return value. I did a bit of looking elsewhere; systemd
does check it as a runtime error, not an assertion. librepo
doesn't use the multi interface.
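The per-handle check we rely on looks roughly like this
(`finish_request()` is hypothetical):
```
#include <curl/curl.h>

/* Hypothetical per-request completion handler. */
static void finish_request (CURL *easy, CURLcode res);

static void
check_multi_info (CURLM *multi)
{
  CURLMsg *msg;
  int msgs_left;

  /* Completion and per-transfer errors are read off the multi
   * handle's message queue, not from curl_multi_socket_action(). */
  while ((msg = curl_multi_info_read (multi, &msgs_left)) != NULL)
    {
      if (msg->msg == CURLMSG_DONE)
        finish_request (msg->easy_handle, msg->data.result);
    }
}
```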
Closes: https://github.com/ostreedev/ostree/issues/1035
Closes: #1038
Approved by: jlebon
Currently in Fedora we don't sign summaries, and every use of
`rpm-ostree` would emit an error to the journal when we failed
to fetch it.
Fix this by having `OSTREE_FETCHER_REQUEST_OPTIONAL_CONTENT` tell the fetcher
not to journal 404 errors. While fixing this, we also noticed a mix of two
booleans vs. the flags; make things consistent by using the flags in both the
fetcher and pull code.
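The flag's effect is roughly this (a sketch; `journal_fetch_failure()`
is a hypothetical journal helper):
```
#include <glib.h>

static void
maybe_journal_failure (guint request_flags, const char *url, guint status)
{
  /* A 404 on optional content (e.g. an unsigned repo's summary) is
   * expected, so skip the journal entry; the request still fails. */
  if (status == 404 &&
      (request_flags & OSTREE_FETCHER_REQUEST_OPTIONAL_CONTENT) != 0)
    return;
  journal_fetch_failure (url, status); /* hypothetical */
}
```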
Closes: #1004
Approved by: mbarnes
Use the new macros introduced recently in libglnx to make iterating over
hash tables cleaner. This is just a start; it does not migrate the whole
tree.
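For a feel of the change, assuming a refname → checksum table:
```
#include <libglnx.h>

static void
print_refs (GHashTable *refs)
{
  /* One statement replaces the usual GHashTableIter boilerplate. */
  GLNX_HASH_TABLE_FOREACH_KV (refs, const char *, refname,
                              const char *, checksum)
    g_print ("%s -> %s\n", refname, checksum);
}
```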
Update submodule: libglnx
Closes: #971
Approved by: cgwalters
There's a lot of mechanical replacement of `OtTmpFile` with `GLnxTmpfile`;
the biggest changes are in the commit path. Symlink commits are now
very clearly separated from regular files. Symlinks are `OtCleanupUnlinkat`,
and regular files are `GLnxTmpfile`.
The commit codepath separates those as `_ostree_repo_commit_path_final()` and
`_ostree_repo_commit_tmpf_final()`. A nice aspect of all of this is that they
both *consume* the temporary on success. This avoids an extra spurious
`unlink()` call.
One of the biggest bits of code motion is in `commit_loose_regfile_object()`,
which no longer needs to care about symlinks. For the most part though it's
just removing conditionals.
Update submodule: libglnx
Closes: #958
Approved by: jlebon
The summary file can get large, but it compresses well (something
which is not true of other files in the ostree repo, which are
already compressed). By sending Accept-Encoding: gzip (and
handling the compressed results) we send a lot less data.
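A sketch of the client side, using libsoup 2's content decoder, which
both advertises gzip and transparently inflates the response:
```
#include <libsoup/soup.h>

static SoupSession *
new_session_with_gzip (void)
{
  SoupSession *session = soup_session_new ();
  /* SoupContentDecoder sends Accept-Encoding: gzip and decodes
   * Content-Encoding: gzip responses before handing us the body. */
  soup_session_add_feature_by_type (session, SOUP_TYPE_CONTENT_DECODER);
  return session;
}
```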
I set up the flathub repo (http://flathub.org/repo) to enable
gzip for the summary file (only), and the result is that the
331514-byte summary was transferred in 122889 bytes (a ~63% reduction).
On my (fast) network this decreased the time it took to do
"flatpak remote-ls flathub" by about 100msec.
This fixes https://github.com/ostreedev/ostree/issues/802
Closes: #882
Approved by: cgwalters
It's hard right now to do a full port to the new libglnx tmpfile
API since there are complex cases in the commit path which deal
with symlinks as well.
Let's make things more gradual by introducing the important part (struct with
autocleanup) here in libotutil, and port what we can. This will make a future
complete port easier.
Closes: #871
Approved by: jlebon
I noticed an instance of this while working on
https://github.com/ostreedev/ostree/pull/861, which apparently I
cargo-culted into the new system generator bits.
Let's break this out as a small, concise change.
Closes: #866
Approved by: jlebon
Testing a fetch of `fedora-atomic/.../docker-host` from
an nginx instance over `https://127.0.0.1` using Fedora 25
versions. Average over 3 runs:
Before: ~24.6 seconds
After: ~19 seconds
Speedup: ~30%
Closes: https://github.com/ostreedev/ostree/issues/778
Closes: #780
Approved by: jlebon
I had to rebuild `glib` with `-fsanitize=address` in order to get a stack trace
and finally track this one down. However, *installing* that glib "system wide"
in my container breaks everything (including `rpm-ostree`, `dnf`, `pkg-config` etc.)
that wasn't built with ASAN.
So my test scenario right now is to extract the libs and do e.g.:
```
make && env LD_LIBRARY_PATH=$HOME/src/distgit/fedora/glib2/asan-libs make check TESTS=tests/test-basic.sh
```
Closes: #719
Approved by: jlebon
Particularly when HTTP requests fail, I really want a lot more information.
We could theoretically stuff it into the `GError` message field, but
that gets ugly *fast*.
Using the systemd journal allows us to log things in a structured fashion.
Right now e.g. rpm-ostree won't be aware of this additional information,
but I think we could teach it to be down the line.
In the short term, users can learn to find it from `systemctl status rpm-ostreed`
or `journalctl -b -r -u rpm-ostreed`, etc.
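A sketch of what the structured logging looks like; the field names
here (`FETCH_URL`, `HTTP_STATUS`) are illustrative, not the fetcher's
actual ones:
```
#include <syslog.h>
#include <systemd/sd-journal.h>

static void
journal_fetch_failure (const char *url, unsigned status)
{
  /* Each FIELD=value pair becomes a queryable journal field. */
  sd_journal_send ("MESSAGE=Failed to fetch %s: HTTP %u", url, status,
                   "PRIORITY=%i", LOG_ERR,
                   "FETCH_URL=%s", url,
                   "HTTP_STATUS=%u", status,
                   NULL);
}
```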
One thing I'd like to do next is log successful fetches of e.g. commit objects
as well with more information about the originating server (things like the
final URL if we were redirected, did we use TLS pinning, what was the negotiated
TLS version+cipher, etc).
Closes: #708
Approved by: jlebon
For rpm-ostree, we already link to libcurl indirectly via librepo, and
only having one HTTP library in process makes sense.
Further, libcurl is (I think) more popular in the embedded space. It
also supports HTTP/2.0 today, which is a *very* nice-to-have for OSTree.
This seems to be working fairly well for me in my local testing, but it's
obviously brand new nontrivial code, so it's going to need some soak time.
The ugliest part of this is having to vendor in the soup-url code. With
Oxidation we could follow the path of Firefox and use the
[Servo URL parser](https://github.com/servo/rust-url). Having to redo
cookie parsing also sucked, and that would also be a good oxidation target.
But that's for the future.
Closes: #641
Approved by: jlebon