This cleans up the verification code; it was weird how
we'd get the list of known names and then try to create
an instance from it (and throw an error if that failed, which
couldn't happen).
`ostree-repo-pull.c` is rather monstrous; I plan to split it
up a bit. There's actually already a `pull-private.h` but
that's just for the binding verification API. I think that one
isn't really pull specific. Let's move it into the "catchall"
`repo.c`.
Previously we would pass the `verification-key` and `verification-file`
to all backends, ignoring errors from loading keys until we
found one that worked.
Instead, change the options to be `verification-<engine>-key`
and `verification-<engine>-file`, and then
rework this to use standard error handling; for example, barf
explicitly if we can't load the public keys. Preserve
the semantics of accepting the first valid signature. The
first signature error is captured; the others are currently
compressed into a `(and %d more)` prefix.
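For illustration, with ed25519 as the engine the remote stanza would look
something like this (the key value is a placeholder, not a real key):

```
[remote "origin"]
url=https://example.com/repo
# base64-encoded public key (placeholder value)
verification-ed25519-key=mDM5Zm...
```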
And now that I look at this more closely there's a lot of
duplication between the two code paths in pull.c for verifying;
will dedup this next.
I'm mainly doing this to sanity check the CI state right now.
However, I also want to more cleanly/clearly distinguish
the "sign" code from the "gpg" code.
Rename one function to include `gpg`.
For the other...I think what it's really doing is using the remote
config, so change it to include `remote` in its name.
If GPG support is disabled at build time, we should check whether either of
the options "gpg_verify" or "gpg_verify_summary" is set to TRUE, instead
of checking whether they were passed via options while pulling from a remote.
This fixes the assertion failure when `ostree find-remotes --pull --mirror`
is called (`tests/test-pull-collections.sh`) and libostree has been compiled
without GPG support.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
This code wasn't written with idiomatic GError usage; it's not standard
to construct an error up front and continually append to its
message. The exit from a function is usually `return TRUE`,
with error conditions before that.
Updating it to match style reveals what I think is a bug;
we were silently ignoring failure to parse key files.
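A minimal sketch of the idiomatic shape meant here (the helper name and
key-file handling are illustrative, not the actual ostree code):

```c
static gboolean
load_key_file (const char *path, GError **error)
{
  g_autoptr(GKeyFile) keyfile = g_key_file_new ();

  /* Fail early and propagate the parse error instead of silently ignoring it */
  if (!g_key_file_load_from_file (keyfile, path, G_KEY_FILE_NONE, error))
    return FALSE;

  /* ... use the key file ... */

  /* The success exit is the plain `return TRUE` at the end */
  return TRUE;
}
```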
When we're only pulling a subset of the refs available in the remote, it
doesn't make sense to copy the remote's summary (which may not be valid
for the local repo). This makes the check here match the one done
several lines above when we decide whether to error out if there's no
remote summary available.
This extends the fix in https://github.com/ostreedev/ostree/pull/935 for
the case of collection-refs.
Also, add a unit test for this issue, based on the existing one in
pull-test.sh.
Improve error handling for signature checks -- pass through the real
reasons from the signature engines instead of using generic messages.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Return the collected errors from the signing engines in case verification
failed for the commit.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Change the API of the supporting functions `_load_public_keys()` and
`_ostree_repo_sign_verify()` -- pass the repo object and remote name
instead of the OtPullData object. This allows these functions to be used
outside of pull-related code.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
We no longer need stubs for the remote verification options
when ostree is built without GPG support.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Using 'g_warning()' inside the key loading function led to a spurious
failure: a key loading attempt with the wrong engine would break the
pull process instead of letting the key be tried with the correct engine.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Add a `_load_public_keys()` function to pre-load public keys according to
the remote's configuration. If no keys are configured for the remote,
fall back to the system-wide configuration.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Allow adding the public and secret keys for the ed25519 module as base64 strings.
This allows the pull code and builtins to use a common API without knowledge
of the signature algorithm in use.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Remove `ostree_sign_detached_metadata_append` from the public API.
Rename `metadata_verify` to `data_verify` to match its real
functionality.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Return `const char *` instead of a copy of the string -- this avoids
unneeded copying and memory leaks in some constructs.
Also some minor code cleanup and optimisations.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
If `verification-key` is set for a remote, it is used as the public key for
checking commits pulled from that remote.
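As an illustrative sketch (the key value is a placeholder), such a remote
could be configured with the existing `--set` option:

```
ostree remote add --set=verification-key=mDM5Zm... origin https://example.com/repo
```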
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
API changes:
- added function `ostree_sign_add_pk()` for using multiple public keys.
- `ostree_sign_set_pk()` now replaces all previously added keys.
- added function `ostree_sign_load_pk()`, which allows loading keys from a file.
- `ostree_sign_ed25519_load_pk()` is able to load a raw key list from a file.
- use base64-encoded public and private ed25519 keys for the CLI and key files.
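A hedged usage sketch of the new calls; the engine lookup and the GVariant
key encoding shown here are assumptions, not copied from the headers:

```c
g_autoptr(GError) error = NULL;
g_autoptr(OstreeSign) sign = ostree_sign_get_by_name ("ed25519", &error);
if (sign == NULL)
  return FALSE;

/* Assumed: for ed25519 the public key is passed as a base64-encoded string */
g_autoptr(GVariant) pk = g_variant_ref_sink (g_variant_new_string (base64_public_key));

if (!ostree_sign_set_pk (sign, pk, &error))   /* replaces previously added keys */
  return FALSE;
if (!ostree_sign_add_pk (sign, pk, &error))   /* appends an additional trusted key */
  return FALSE;
```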
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
For repo structure directories like `objects`, `refs`, etc., we should
be more permissive and let the system's `umask` narrow down the
permission bits as wanted.
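A minimal sketch of the idea, assuming the usual libglnx error helpers and a
`repo_dfd`/`error` pair from the surrounding function:

```c
/* Create with permissive mode bits and let the process umask narrow them down */
if (mkdirat (repo_dfd, "objects", 0775) < 0 && errno != EEXIST)
  return glnx_throw_errno_prefix (error, "mkdirat(objects)");
```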
This came up in a context where we want to be able to have read/write
access on an OSTree repo on NFS from two separate OpenShift apps by
using supplemental groups[1] so we don't require SCCs for running as the
same UID (supplemental groups are part of the default restricted SCC).
[1] https://docs.openshift.com/container-platform/3.11/install_config/persistent_storage/persistent_storage_nfs.html#nfs-supplemental-groups
I was hitting `SIGSEGV` when running `cosa build` and narrowed it down
to #1954. What's happening here is that because we're using the default
context, when we unref it in the out path, it may not actually destroy
the `GSource` if it (the context) is still ref'ed elsewhere. So then,
we'd still get events from it if subsequent operations iterated the
context.
This patch is mostly a revert of #1954, except that we still keep a ref
on the `GSource`. That way it is always safe to destroy it afterwards.
(And I've also added a comment to explain this better.)
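A sketch of the ownership pattern described above; the callback and field
names (`update_progress`, `pull_data`) are illustrative:

```c
GSource *update_timeout = g_timeout_source_new_seconds (1);
g_source_set_callback (update_timeout, update_progress, pull_data, NULL);
g_source_attach (update_timeout, pull_data->main_context);
/* We deliberately keep our own reference in addition to the context's... */

/* ...so that in the exit path this is always safe, even if the context
 * is still ref'ed (and iterated) elsewhere: */
g_source_destroy (update_timeout);
g_source_unref (update_timeout);
```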
We're creating the timer source and then passing ownership to the
context, but because we didn't free the pointer, we would still call
`g_source_destroy` in the exit path. We'd do this right after doing
`unref` on the context too, which would have already destroyed and
unref'ed the source.
Drop that and just restrict the scope of that variable down to make
things more obvious.
Just noticed this after reviewing #1953.
Some gpg-named functions/variables should be used for any signature
system, so remove the "gpg_" prefix from them to avoid confusion.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Closes: #1889
Approved by: cgwalters
Do not build the code related to GPG signing and verification if
GPGME support is disabled.
Public functions return the error 'G_IO_ERROR_NOT_SUPPORTED' if a
GPG-related check is requested.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Closes: #1889
Approved by: cgwalters
There's a valid use case for enabling the timestamp downgrade check
while still also using override commits.
We'll make use of this in Fedora CoreOS, where the agent specifies the
exact commit to upgrade to, while still enforcing that it be newer.
Closes: #1891
Approved by: cgwalters
Rather than wrapping each instance of `sd_journal_*` with
`HAVE_SYSTEMD`, let's just add some convenience macros that are
no-ops if we're not compiling with systemd.
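A sketch of the kind of wrapper meant here (the macro name is illustrative):

```c
#ifdef HAVE_SYSTEMD
#include <systemd/sd-journal.h>
#define ot_journal_send(...) sd_journal_send (__VA_ARGS__)
#else
/* No-op when systemd support is compiled out */
#define ot_journal_send(...) do {} while (0)
#endif
```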
Closes: #1841
Approved by: cgwalters
On at least one user's computer, g_getenv("http_proxy") returns the
empty string, so check for that and treat it as no proxy rather than
printing a warning.
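A minimal sketch of the check (the fetcher setter is an assumed name):

```c
const char *http_proxy = g_getenv ("http_proxy");
/* Treat both "unset" and "set but empty" as no proxy configured */
if (http_proxy != NULL && *http_proxy != '\0')
  _ostree_fetcher_set_proxy (fetcher, http_proxy);
```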
See https://github.com/flatpak/flatpak/issues/2790
Closes: #1835
Approved by: cgwalters
Currently the P2P code requires you to trust every remote you have
configured to the same extent, because a remote controlled by a
malicious actor can serve updates to refs (such as Flatpak apps)
installed from other remotes.[1] The way this attack would play out is
that the malicious remote would deploy the same collection ID as the
victim remote, and would then be able to serve updates for it.
One possible remedy would be to make it an error to configure remotes
such that two have the same collection ID but differing GPG keys. I
attempted to do that in Flatpak[2] but it proved difficult because it is
valid to configure two remotes with the same collection ID, and they may
then each want to update their keyrings which wouldn't happen
atomically.
Another potential solution I've considered is to add a `trusted-remotes`
option to ostree_repo_find_remotes_async() which would dictate which
keyring to use when pulling each ref. However the
ostree_repo_finder_resolve_async() API would still remain vulnerable,
and changing that would require rewriting a large chunk of libostree's
P2P support.
So this commit represents a third attempt at mitigating this security
hole, namely to have the client specify which remote to use for GPG
verification at pull time. This way the pull will fail if the commits
are signed with anything other than the keys we actually trust to serve
updates.
This is implemented as an option "ref-keyring-map" for
ostree_repo_pull_from_remotes_async() and
ostree_repo_pull_with_options() which dictates the remote to be used for
GPG verification of each collection-ref. I think specifying a keyring
remote for each ref is better than specifying a remote for each
OstreeRepoFinderResult, because there are some edge cases where a result
could serve updates to refs which were installed from more than one
remote.
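A hedged sketch of passing the option; the exact tuple layout
(collection ID, ref, keyring remote name) is an assumption here:

```c
GVariantBuilder builder;
g_variant_builder_init (&builder, G_VARIANT_TYPE ("a{sv}"));
g_variant_builder_add (&builder, "{s@v}", "ref-keyring-map",
                       g_variant_new_variant (g_variant_new_parsed (
                         "[('org.example.Apps', 'app/org.example.Hello/x86_64/stable', 'example-apps')]")));
g_autoptr(GVariant) options = g_variant_ref_sink (g_variant_builder_end (&builder));
```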
The PR to make Flatpak use this new option is here[3].
[1] https://github.com/flatpak/flatpak/issues/1447
[2] https://github.com/flatpak/flatpak/pull/2601
[3] https://github.com/flatpak/flatpak/pull/2705
Closes: #1810
Approved by: cgwalters
We have a `http2=[0|1]` remote config option; let's have the
`--disable-http2` build option define the default for that. This way
it's easy to still enable http2 for testing even if
we have it disabled by default.
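For illustration, a remote stanza overriding the build-time default might
look like:

```
[remote "origin"]
url=https://example.com/repo
http2=0
```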
Closes: #1798
Approved by: jlebon
Currently libostree essentially has two modes when it's pulling refs:
the "legacy" code paths pull only from the Internet, and the code paths
that are aware of collection IDs try to pull from the Internet, the
local network, and mounted filesystems (such as USB drives). The problem
is that while we eventually want to migrate everyone to using collection
IDs, we don't want to force checking LAN and USB sources if the user
just wants to pull from the Internet, since the LAN/USB code paths can
have privacy[1], security[2], and performance[3] implications.
So this commit implements a new repo config option called "repo-finders"
which can be configured to, for example, "config;lan;mount;" to check
all three sources or "config;mount;" to disable searching the LAN. The
set of values mirror those used for the --finders option of the
find-remotes command. This configuration affects pulls in three places:
1. the ostree_repo_find_remotes_async() API, regardless of whether or
not the user of the API provided a list of OstreeRepoFinders
2. the ostree_repo_finder_resolve_async() /
ostree_repo_finder_resolve_all_async() API
3. the find-remotes command
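For illustration, assuming the option lives in the repo config's `[core]`
section (the section name is an assumption), disabling LAN lookups would be:

```
[core]
repo-finders=config;mount;
```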
This feature is especially important right now since we soon want to
have Flathub publish a metadata key which will have Flatpak clients
update the remote config to add a collection ID.[4]
This effectively fixes https://github.com/flatpak/flatpak/issues/1863
but I'll patch Flatpak too, so it doesn't pass finders to libostree only
to then have them be removed.
[1] https://github.com/flatpak/flatpak/issues/1863#issuecomment-404128824
[2] https://github.com/ostreedev/ostree/issues/1527
[3] Based on how long the "ostree find-remotes" command takes to
complete, having the LAN finder enabled slows down that step of the
pull process by about 40%. See also
https://github.com/flatpak/flatpak/issues/1862
[4] https://github.com/flathub/flathub/issues/676
Closes: #1758
Approved by: cgwalters
There are use cases for libostree as a local content store
for content derived or delivered via other mechanisms (e.g. OCI
images, RPMs, etc.). rpm-ostree today imports RPMs into OSTree
branches, and puts the RPM header value as commit metadata.
Some of these can be quite large because the header includes
permissions for each file. Similarly, some OCI metadata is large.
Since there's no security issues with this, support committing
such content.
We still by default limit the size of metadata fetches, although
for good measure we make this configurable too via a new
`max-metadata-size` value.
Closes: https://github.com/ostreedev/ostree/issues/1721
Closes: #1744
Approved by: jlebon
If a ref already exists, we are likely only a few commits behind the
current head of the ref, so it is probably better for bandwidth
consumption to pull the individual objects rather than the from-scratch
delta.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1709
Approved by: cgwalters
Add an invalid-cache test error flag to ensure that the code that checks
for and recovers from a corrupted summary cache is hit. This helps make
sure that the recovery path is actually used without resorting to
G_MESSAGES_DEBUG.
Closes: #1698
Approved by: cgwalters
If for some reason the cached summary doesn't match the cached signature
then fetch the remote summary and verify again. Since commit c4c2b5eb
this is unlikely to happen since the summary will only be cached if it
matches the signature. However, if the summary cache has been corrupted
for any other reason then it's best to be safe and fetch the remote
summary again.
This is essentially the corollary to c4c2b5eb. Where that commit helps
you from getting into the corrupted summary cache in the first place,
this helps you get out of it. Without this the client can get wedged
until a prune or the remote server republishes the summary.
Closes: #1698
Approved by: cgwalters
copy_option() unnecessarily passed ownership of the value
to g_variant_dict_insert_value, but that already refs, so it was leaked.
Closes: #1702
Approved by: cgwalters
Normally, a configured remote will only serve refs with one associated
collection ID, but temporary remotes such as USB drives or LAN peers can
serve refs from multiple collection IDs which may use different GPG
keyrings. So the OstreeRepoFinderMount and OstreeRepoFinderAvahi classes
create dynamic OstreeRemote objects for each (uri, keyring) pair. So if
for example the USB mounted at /mnt/usb serves content from the
configured remotes "eos-apps" and "eos-sdk", the OstreeRepoFinderResult
array returned by ostree_repo_find_remotes_async() will have one result
with a remote called something like
file_mnt_usb_eos-apps.trustedkeys.gpg and the list of refs on the USB
that came from eos-apps, and another result with a remote
file_mnt_usb_eos-sdk.trustedkeys.gpg and the list of refs from eos-sdk.
Unfortunately while OstreeRepoFinderMount and OstreeRepoFinderAvahi
correctly only include refs in a result if the ref uses the associated
keyring, the find_remotes_cb() function used to clean up the set of
results looks at the remote summary file and includes every ref that's
in the intersection with the requested refs, regardless of whether it
uses a different remote's keyring. This leads to an error when you try
to pull from a USB containing refs from different collection IDs: the
pull using the wrong collection ID will error out with "Refspec not
found" and the result with the correct keyring will then be ignored "as
it has no relevant refs or they have already been pulled." So the pull
ultimately fails.
This commit fixes the issue by filtering refs coming from a dynamic
remote, so that only ones with the collection ID associated with the
keyring remote are examined. This only needs to be done for dynamic
remotes because you should be able to pull any ref from a configured
remote using its keyring. It's also only done when looking at the
collection map in the summary file, because LAN/USB remotes won't have a
"main" collection ID set (OSTREE_SUMMARY_COLLECTION_ID).
Closes: #1695
Approved by: pwithnall
I initially was going to add a `G_DEFINE_AUTOPTR_CLEANUP_FUNC` for
`FetchStaticDeltaData`, but it honestly didn't seem worth mucking around
with ownership everywhere and potentially getting it wrong.
Discovered by Coverity.
Closes: #1692
Approved by: cgwalters
Currently the API that allows P2P operations (e.g. pulling an ostree ref
from a LAN or USB source) is hidden behind the configure flag
--enable-experimental-api. This commit makes the API public and makes
that flag essentially a no-op (leaving it in place in case we want to
use it again in the future). The P2P API has been tested over the last
several months and proven to work.
This means that since we're no longer using the "experimental" feature
flag, P2P builds of Flatpak will fail when using versions of OSTree from
this commit onwards, until Flatpak is patched in the near future. If you
want to build Flatpak < 0.11.8 with P2P enabled and link against OSTree
2018.6, you'll have to patch Flatpak. However, since Flatpak won't yet
have a hard dependency on OSTree 2018.6, it needs a new way to determine
if the P2P API in OSTree is available, so this commit adds a "p2p"
feature flag. This way the feature set is more semantically correct than
if we had continued to use the "experimental" feature flag.
In addition to making the P2P API public, this commit makes the P2P unit
tests run by default, removes the f27-experimental CI instance that's no
longer needed, changes a few man pages to reflect the changes, and
updates the bash completion script to accept the new commands and
options.
Closes: #1596
Approved by: cgwalters
Use the recently introduced architecture for retrying network requests
on transient failure to do the same for delta superblock requests, now
that they’re queued.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1600
Approved by: jlebon
Just like all the other requests made for delta parts and objects by the
pull code, use a queue for delta superblocks. Currently this doesn’t do
any prioritisation or retries after transient failures, but it could do
in future.
This means that delta superblocks are now subject to the parallel
request limit in the fetcher, which was a problem highlighted here:
https://github.com/ostreedev/ostree/pull/1453#discussion_r168321706.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1600
Approved by: jlebon
Various of the counters already have assertions like this; add some more
for total paranoia.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1594
Approved by: jlebon
Allow network requests to be re-queued if they failed with a transient
error, such as a socket timeout. Retry each request up to a limit
(default: 5), and only then fail the entire pull and propagate the error
to the caller.
Add a new ostree_repo_pull_with_options() option, n-network-retries, to
control the number of retries (including setting it back to the old
default of 0, if the caller wants).
Currently, retries are not supported for FetchDeltaSuperData requests,
as they are not queued. Once they are queued, adding support for retries
should be trivial. A FIXME comment has been left for this.
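A hedged sketch of setting the new option via ostree_repo_pull_with_options()
(the value type is assumed to be a uint32):

```c
GVariantBuilder builder;
g_variant_builder_init (&builder, G_VARIANT_TYPE ("a{sv}"));
g_variant_builder_add (&builder, "{s@v}", "n-network-retries",
                       g_variant_new_variant (g_variant_new_uint32 (3)));

g_autoptr(GVariant) options = g_variant_ref_sink (g_variant_builder_end (&builder));
if (!ostree_repo_pull_with_options (repo, "origin", options, NULL, cancellable, &error))
  return FALSE;
```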
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1594
Approved by: jlebon
This commit rearranges a few things in ostree-repo-pull.c so that OSTree
will successfully compile with experimental API enabled and without
libsoup, libcurl, or avahi:
./autogen.sh --enable-experimental-api --without-soup --without-curl
--without-avahi
This is accomplished with two sets of changes:
1. Move ostree_repo_resolve_keyring_for_collection() so it can be used
even without libsoup or libcurl.
2. Add stub functions for ostree_repo_find_remotes_async() and
ostree_repo_pull_from_remotes_async(), and their _finish() counterparts,
so they return an error when libsoup or libcurl isn't available.
Closes: #1605
Approved by: cgwalters
This introduces no functional changes, but will make upcoming support
for retrying downloads easier to add.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
This introduces no functional changes, but will make upcoming support
for retrying downloads easier to add.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
This introduces no functional changes, but will make upcoming support
for retrying downloads easier to add.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
Rename from `fdata` to `fetch_data` to clarify things and make it
consistent with other similar functionality in the file.
This introduces no functional changes.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
This introduces no functional changes, but does make the code a little
cleaner.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
This introduces no functional changes; just makes the code a bit shorter
in a few places.
https://gcc.gnu.org/onlinedocs/gcc/Conditionals.html
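The extension in question, for reference (variable names are illustrative):

```c
/* `a ?: b` is GCC shorthand for `a ? a : b`, without evaluating `a` twice */
const char *ref = requested_ref ?: default_ref;
```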
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
This introduces no functional changes, but will make some upcoming
refactoring a little easier.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1599
Approved by: jlebon
This is a normal case when running unit tests in client code
on continuous integration infrastructure. When those tests are
running they will set G_DEBUG=fatal-warnings which will cause
the program to abort if a warning is emitted. Instead, emit
a debug message if the problem was that we couldn't connect to
the daemon.
Closes: #1542
Approved by: jlebon
Currently OstreeRepoFinderResult, a data structure used by pull code
that supports P2P operations, has a hash table mapping refs to checksums
but doesn't include timestamp information. This means that clients have
no way of knowing just from the OstreeRepoFinderResult information if a
commit being offered by a peer remote is an update or downgrade until
they start pulling it. The client could check the summary or the commit
metadata for the timestamps, but this requires adding the temporary
remotes to the repo config, and ostree is already checking timestamps
before returning the results, so I think it makes more sense for them to
be returned rather than leaving it to the client. This limitation is
especially important for offline computers, because for online computers
the latest commit available from any remote is the latest commit,
period.
This commit adds a "ref_to_timestamp" hash table to
OstreeRepoFinderResult that is symmetric to "ref_to_checksum" in that it
shares the same keys. This is an API break, but it's part of the
experimental API, and none of the current users of that (flatpak,
eos-updater, and gnome-software) are affected. See the documentation for
more details on "ref_to_timestamp". One thing to note is the data
structure currently gets initialized in find_remotes_cb(), so only users
of ostree_repo_find_remotes_async() will get them, not users of, say,
ostree_repo_finder_resolve_all_async(). This is because the individual
OstreeRepoFinder implementations don't currently access the timestamps
(but I think this could be changed in the future if there's a need).
This commit will allow P2P support to be added to
flatpak_installation_list_installed_refs_for_update, which will allow
GNOME Software to update apps from USB drives while offline (it's
already possible online).
Closes: #1518
Approved by: cgwalters
In case of some kind of race or other weirdness we might be getting
non-matching versions of summary.sig and summary, where summary.sig
is the latest version. Currently we're saving them to the cache
directly after downloading them successfully, but they will then fail
to gpg validate. Then on the next run we'll keep using the cached files
even if they are incorrect, until summary.sig changes upstream.
This changes the order so that we verify the signatures before saving
to the cache, thus ensuring that we don't end up in a stuck state.
Fixes https://github.com/ostreedev/ostree/issues/1523
Closes: #1529
Approved by: cgwalters
In ostree_repo_remote_fetch_summary_with_options(), if no summary is
found on the server and summary verification is enabled, the error
message implies that it's the summary signature that's missing, which is
misleading. This commit adds a more specific error message for the case
of a missing summary, which has the side effect of explicitly checking
for the case that signatures != NULL && summary == NULL after
repo_remote_fetch_summary(), even though that should never happen.
One effect of this is that if you run "flatpak remote-add" with an
incorrect URL you get a more helpful error message, and similarly for
other flatpak operations and other users of ostree.
Closes: #1522
Approved by: cgwalters
In libostree, the phrase "commit metadata" has two meanings-- one is the
first dictionary in a commit GVariant that stores metadata such as ref
bindings, and the other is the commit metadata in the summary file,
which stores the commit size, checksum, and timestamp. In
find_remotes_process_refs(), the entire commit GVariant was being
referred to as commit metadata, so this commit changes the variable
name and a comment to make things more consistent.
Closes: #1528
Approved by: cgwalters
ostree_repo_pull_from_remotes_async() passes along some options to
ostree_repo_pull_with_options(), so document them.
Closes: #1519
Approved by: cgwalters
We do already have `http-headers`, which potentially could be used to
allow clients to completely override the field, but it seems like the
more common use case is simply to append.
Closes: #1496
Approved by: cgwalters
The _ostree_repo_get_remote() and _ostree_repo_get_remote_inherited()
methods transfer ownership of the returned OstreeRemote to the caller,
so this commit fixes a few call sites that weren't properly freeing it.
Closes: #1478
Approved by: cgwalters
The "ref_original_commits" hash table uses string values, not variants,
so fix the free function passed to g_hash_table_new_full (). Since
g_variant_unref isn't NULL safe, this prevents an assertion failure when
a NULL value is inserted.
Dan Nicholson suggested this patch; I'm just submitting it because he's
busy.
Fixes https://github.com/ostreedev/ostree/issues/1433
Closes: #1474
Approved by: cgwalters
For P2P pulls ostree adds temporary remotes and removes them in
find_remotes_cb(). However, if an OstreeRepoFinderResult gets freed
during the course of that function, the OstreeRemote in the result is
freed but a pointer to it remains in the remotes_to_remove array. This
means that when _ostree_repo_remove_remote() gets called on it at the
end of the function it will fail. In my case the resulting error was
"OSTree-CRITICAL **: _ostree_repo_remove_remote: assertion 'remote->name
!= NULL' failed" but I think it could also seg fault.
This commit adds a reference to the remote so it can be properly removed
when we're finished with it.
Closes: #1450
Approved by: giuseppe
SPDX License List is a list of (common) open source
licenses that can be referred to by a “short identifier”.
It has several advantages compared to the common "license header texts"
usually found in source files.
Some of the advantages:
* It is precise; there is no ambiguity due to variations in license header
text
* It is language neutral
* It is easy to machine process
* It is concise
* It is simple and can be used without much cost in interpreted
environments like JavaScript, etc.
* An SPDX license identifier is immutable.
* It provides simple guidance for developers who want to make sure the
license for their code is respected
See http://spdx.org for further reading.
Signed-off-by: Marcus Folkesson <marcus.folkesson@gmail.com>
Closes: #1439
Approved by: cgwalters
Currently users of the find_remotes_async()/pull_from_remotes_async()
functions have no way to specify a commit hash to use instead of the
latest one available. This commit implements an "override-commit-ids"
option analogous to the one used by ostree_repo_pull_with_options().
It's accomplished by returning OstreeRepoFinderResult objects pointing
to the given commit checksum(s) regardless of which ones were available
from the remotes, but in the future this implementation could be
improved to take into account the commits advertised by the remotes.
One effect of this is that flatpak will have the ability to downgrade
apps that use collection IDs
(https://github.com/flatpak/flatpak/issues/1309).
Closes: #1425
Approved by: pwithnall
Prep for further work here. This diff is a bit noisy for the delta bits because
the indentation was off originally as well.
Closes: #1424
Approved by: jlebon
Previously we were doing e.g. `ot_util_filename_validate()` specifically inline
in dirtree objects, but only *after* writing them into the staging directory (by
default). In (non-default) cases such as not using a transaction, such an object
could be written directly into the repo.
A notable gap here is that `pull-local --untrusted` was *not* doing
this verification, just checksums. We harden that (and also the
static delta writing path, really *everything* that calls
`ostree_repo_write_metadata()`) to also do "structure" validation,
which includes path traversal checks. Basically, let's try hard
to avoid having badly structured objects even in the repo.
One thing that sucks in this patch is that we need to allocate a "bounce buffer"
for metadata in the static delta path, because GVariant imposes alignment
requirements, which I screwed up and didn't fulfill when designing deltas. It
actually didn't matter before because we weren't parsing them, but now we are.
In theory we could check alignment but ...eh, not worth it, at least not until
we change the delta compiler to emit aligned metadata which actually may be
quite tricky. (Big picture I doubt this really matters much right now
but I'm not going to pull out a profiler yet for this)
The pull test was extended to check we didn't even write a dirtree
with path traversal into the staging directory.
There's a bit of code motion in extracting
`_ostree_validate_structureof_metadata()` from `fsck_metadata_object()`.
Then `_ostree_verify_metadata_object()` builds on that to do checksum
verification too.
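As a standalone illustration of what the path-traversal part of "structure"
validation implies (not the actual `_ostree_validate_structureof_metadata()`
body; `glnx_throw` from libglnx is assumed available):

```c
static gboolean
validate_dirtree_name (const char *name, GError **error)
{
  /* Reject the special directory entries and anything containing a separator */
  if (strcmp (name, ".") == 0 || strcmp (name, "..") == 0 || strchr (name, '/') != NULL)
    return glnx_throw (error, "Invalid object filename '%s'", name);
  return TRUE;
}
```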
Closes: #1412
Approved by: jlebon
Always include ostree-repo-pull-private.h to get rid of the following
build error when HAVE_LIBCURL_OR_LIBSOUP is not defined:
src/libostree/ostree-repo-pull.c:1493:1: error: no previous prototype
for '_ostree_repo_verify_bindings' [-Werror=missing-prototypes]
Signed-off-by: Marcus Folkesson <marcus.folkesson@gmail.com>
Closes: #1389
Approved by: cgwalters
For the [rpm-ostree jigdo ♲📦](https://github.com/projectatomic/rpm-ostree/issues/1081) work.
We're basically doing "pull" via a non-libostree mechanism, and this
should be fully supported. As I mentioned earlier we should try to
have `ostree-repo-pull.c` only use public APIs; this gets us closer
to that.
Closes: #1376
Approved by: jlebon
It will be used by the fsck utility in future. We could expose it
publicly in future too, if needed.
Signed-off-by: Philip Withnall <withnall@endlessm.com>
Closes: #1347
Approved by: cgwalters
This seems to work around
https://github.com/ostreedev/ostree/issues/1362
Though I'm not entirely sure why yet. But at least with this it'll be easier for
people to work around things locally.
Closes: #1368
Approved by: jlebon
This reverts commit 519b30b7e1. Now that
the experimental GIR is being built correctly and OstreeRemote is a real
boxed type, this can be exposed again.
Closes: #1337
Approved by: pwithnall
A tricky thing here that caused this to go past a lot of our tests
is that the code was mostly OK if there was an available delta from
an older commit. But this case broke if we e.g. had a new OS
deployment and did a `--require-static-deltas` pull, i.e. the initial
state.
I cleaned up our "find static delta state" function to return an enumeration,
and extended it with an "already have the commit" state. A problem
I then hit is that we've historically fetched detached metadata for
non-delta pulls, even if the commit hasn't changed. I decided not to
do that for `--require-static-deltas` pulls for now; otherwise the
code gets notably more complex.
Closes: https://github.com/ostreedev/ostree/issues/1321
Closes: #1323
Approved by: jlebon
Since ostree_remote_get_type is not made available to g-ir-scanner, it
treats OstreeRemote as a bare struct. That's not kosher for bindings and
it issues the following warning:
src/libostree/ostree-repo-pull.c:5560: Warning: OSTree:
ostree_repo_resolve_keyring_for_collection: return value: Invalid
non-constant return of bare structure or union; register as boxed type
or (skip)
For now, just skip this API for bindings.
Closes: #1322
Approved by: pwithnall
I didn't fully spelunk this, but from what `static-delta-generate-crosscheck.sh`
had, we appeared to be doing this before, and it's clearly useful for local
testing rather than needing to spin up a HTTP server.
Closes: #1313
Approved by: jlebon
Change the regexp for validating refs to require at least one letter or digit
before allowing the other special chars in the set `[.-_]`. Names that start
with `.` are traditionally Unix hidden files; let's ignore them under the
assumption they're metadata for some other tool, and we don't want to
potentially conflict with the special `.` and `..` Unix directory entries.
Further, names starting with `-` are problematic for Unix cmdline option
processing; there's no good reason to support that. Finally, disallow `_` just
on general principle - it's simpler to say that ref identifiers must start with
a letter or digit.
We also ignore any existing files (that might be previously created refs) that
start with `.` in the `refs/` directory - there's a Red Hat tool for content
management that injects `.rsync` files, which is why this patch was first
written.
V1: Update to ban all refs starting with a non-letter/digit, and
also add another call to `ostree_validate_rev` in the pull
code.
Closes: https://github.com/ostreedev/ostree/issues/1285
Closes: #1286
Approved by: jlebon
This is another case where making an input stream out of a memory buffer is a
bit silly; just hash the `GBytes` directly.
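Illustrative only (ostree has its own checksum helpers); with plain GLib the
equivalent is:

```c
g_autofree char *checksum =
  g_compute_checksum_for_bytes (G_CHECKSUM_SHA256, metadata_bytes);
```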
Closes: #1287
Approved by: jlebon
I was working on a patch building on the work done to
import content objects async, to do the same for metadata, but right
now we basically rely on writing them first to do the GPG verification
when scanning.
Things will be cleaner for that if we can pass the commit object directly into
`scan_commit_object()` and consistently use `gpg_verify_unwritten_commit()`.
We're careful here to continue to do it both ways (but at most one time), to
account for the case where a bad commit has been pulled and written - we need to
keep failing GPG verification there.
Closes: #1269
Approved by: jlebon