These allow the `summary` and `summary.sig` files to be cached at a
higher layer (for example, flatpak) between related pull operations (for
example, within a single flatpak transaction). This avoids
re-downloading `summary.sig` multiple times throughout a transaction,
which increases the transaction's latency and introduces the
possibility of inconsistency between parts of the transaction if the
server changes its `summary` file part-way through.
In particular, this should speed up flatpak transactions on machines
with high latency network connections, where network round trips have a
high impact on the latency of an overall operation.
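For illustration, here is a minimal sketch of how a caller might hand
previously fetched summary data back to a pull; the option names
"summary-bytes" and "summary-sig-bytes" are assumptions used for the
example, not necessarily the exact names in the patch.
```c
#include <ostree.h>

/* Hedged sketch: reuse cached summary data across related pulls.
 * The "summary-bytes"/"summary-sig-bytes" option names are assumed. */
static gboolean
pull_with_cached_summary (OstreeRepo   *repo,
                          const char   *remote,
                          GBytes       *summary,      /* cached earlier */
                          GBytes       *summary_sig,  /* cached earlier */
                          GCancellable *cancellable,
                          GError      **error)
{
  g_autoptr(GVariantBuilder) builder =
    g_variant_builder_new (G_VARIANT_TYPE ("a{sv}"));
  g_variant_builder_add (builder, "{sv}", "summary-bytes",
                         g_variant_new_from_bytes (G_VARIANT_TYPE ("ay"), summary, TRUE));
  g_variant_builder_add (builder, "{sv}", "summary-sig-bytes",
                         g_variant_new_from_bytes (G_VARIANT_TYPE ("ay"), summary_sig, TRUE));
  g_autoptr(GVariant) options = g_variant_ref_sink (g_variant_builder_end (builder));
  return ostree_repo_pull_with_options (repo, remote, options, NULL,
                                        cancellable, error);
}
```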
Signed-off-by: Philip Withnall <withnall@endlessm.com>
This is the dual of 1f3c8c5b3d
where we output more detail when signapi fails to validate.
Extend the API to return a string for success, which we output
to stdout.
This will help the test suite *and* end users validate that the expected
thing is happening.
In order to make this cleaner, split the "verified commit" set
in the pull code into GPG and signapi verified sets, and have
the signapi verified set contain the verification string.
We're not doing anything with the verification string in the
pull code *yet* but I plan to add something like
`ostree pull --verbose` which would finally print this.
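As a hedged sketch of the intended usage (the out-parameter name
out_success_message is an assumption for illustration):
```c
#include <ostree.h>

/* Hedged sketch: print the signapi verification result on success.
 * Assumes the verify API returns the human-readable success string
 * via an out-parameter (called out_success_message here). */
static gboolean
verify_and_report (OstreeSign *sign,
                   GBytes     *data,
                   GVariant   *signatures,
                   GError    **error)
{
  g_autofree char *success_msg = NULL;
  if (!ostree_sign_data_verify (sign, data, signatures, &success_msg, error))
    return FALSE;
  g_print ("%s\n", success_msg);  /* visible to the test suite and end users */
  return TRUE;
}
```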
One OpenShift user saw this from rpm-ostree:
```
client(id:cli dbus:1.583 unit:machine-config-daemon-host.service uid:0) added; new total=1
Initiated txn UpdateDeployment for client(id:cli dbus:1.583 unit:machine-config-daemon-host.service uid:0): /org/projectatomic/rpmostree1/rhcos
Txn UpdateDeployment on /org/projectatomic/rpmostree1/rhcos failed: File header size 4294967295 exceeds size 0
```
which isn't very helpful. Let's add some error
prefixing here which would at least tell us which
object was corrupted.
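The pattern is the standard GLib one; a sketch (the helper
parse_object_payload() is hypothetical, only the prefixing is the point):
```c
#include <gio/gio.h>

static gboolean parse_object_payload (const char *checksum, GError **error);

/* Illustrative only: g_prefix_error() attaches the object being
 * processed to an otherwise opaque failure such as
 * "File header size 4294967295 exceeds size 0". */
static gboolean
import_one_object (const char *checksum, GError **error)
{
  if (!parse_object_payload (checksum, error))
    {
      g_prefix_error (error, "Importing object %s: ", checksum);
      return FALSE;
    }
  return TRUE;
}
```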
The goal here is to move the code towards a model
where the *client* can explicitly specify which signature types
are acceptable.
We retain support for `sign-verify=true` for backwards compatibility.
But in that configuration, a missing public key is just "no signatures found".
With `sign-verify=ed25519` and no key configured, we can
explicitly say `No keys found for required signapi type ed25519`
which is much, much clearer.
Implementation side, rather than maintaining `gboolean sign_verify` *and*
`GPtrArray sign_verifiers`, just have the array. If it's `NULL` that means
not to verify.
Note that currently, an explicit list is an OR of signatures, not AND.
In practice...I think most people are going to be using a single entry
anyways.
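A minimal sketch of the "array or NULL" model and the OR semantics
described above; the exact verify call signature is an assumption here.
```c
#include <ostree.h>

/* Hedged sketch: a NULL verifier array means signapi verification is
 * disabled; otherwise any single verifier accepting a signature is
 * enough (OR, not AND). */
static gboolean
verify_with_any (GPtrArray *verifiers,   /* NULL => do not verify */
                 GBytes    *data,
                 GVariant  *signatures,
                 GError   **error)
{
  if (verifiers == NULL)
    return TRUE;

  for (guint i = 0; i < verifiers->len; i++)
    {
      OstreeSign *sign = verifiers->pdata[i];
      g_autofree char *success_msg = NULL;
      if (ostree_sign_data_verify (sign, data, signatures, &success_msg, NULL))
        return TRUE;   /* first valid signature wins */
    }

  g_set_error (error, G_IO_ERROR, G_IO_ERROR_FAILED,
               "No valid signatures found");
  return FALSE;
}
```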
There's a lot of historical baggage associated with GPG verification
and `ostree pull` versus `ostree pull-local`. In particular nowadays,
if you use a `file://` remote things are transparently optimized
to e.g. use reflinks if available.
So for anyone who doesn't trust the "remote" repository, you should
really go through the regular
`ostree remote add --sign-verify=X file://`
path for example.
Having a mechanism to say "turn on signapi verification" *without*
providing keys goes back into the "global state" debate I brought
up in https://github.com/ostreedev/ostree/issues/2080
It's just much cleaner architecturally if there is exactly one
path to find keys: from a remote config.
So here in contrast to the GPG code, for `pull-local` we explicitly
disable signapi validation, and the `ostree_repo_pull()` API just
surfaces flags to disable it, not enable it.
The way `timestamp-check` works might be too restrictive in some
situations. Essentially, we need to support the case where users want to
pull an older commit than the current tip, while still guaranteeing
that it is newer than some even older commit.
This will be used in Fedora CoreOS. For more information see:
https://github.com/coreos/rpm-ostree/pull/2094
https://github.com/coreos/fedora-coreos-tracker/issues/481
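For illustration, a hedged sketch of how a client might request this;
the pull option name "timestamp-check-from-rev" is an assumption here.
```c
#include <ostree.h>

/* Hedged sketch: pull an older commit than the current tip while still
 * requiring it to be newer than a known baseline commit. */
static gboolean
pull_with_timestamp_floor (OstreeRepo   *repo,
                           const char   *remote,
                           const char   *baseline_rev,
                           GCancellable *cancellable,
                           GError      **error)
{
  g_autoptr(GVariantBuilder) builder =
    g_variant_builder_new (G_VARIANT_TYPE ("a{sv}"));
  g_variant_builder_add (builder, "{sv}", "timestamp-check-from-rev",
                         g_variant_new_string (baseline_rev));
  g_autoptr(GVariant) options =
    g_variant_ref_sink (g_variant_builder_end (builder));
  return ostree_repo_pull_with_options (repo, remote, options, NULL,
                                        cancellable, error);
}
```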
Previously in the pull code, every time we went to verify
a commit we would re-initialize an `OstreeSign` instance,
re-parse the remote configuration,
and re-load its public keys, etc.
In most cases this doesn't really matter because we're
pulling one commit, but e.g. pulling a commit with
history would get a bit silly.
This changes things so that the pull code initializes the
verifiers once, and reuses them thereafter.
This is continuing towards changing the code to support
explicitly configured verifiers, xref
https://github.com/ostreedev/ostree/issues/2080
This cleans up the verification code; it was weird how
we'd get the list of known names and then try to create
an instance from it (and throw an error if that failed, which
couldn't happen).
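A thin sketch of the "initialize once, reuse" shape; the use of
ostree_sign_get_all() and the struct layout are assumptions for
illustration, not the actual pull code.
```c
#include <ostree.h>

/* Hedged sketch: build the verifier list up front instead of
 * re-creating OstreeSign instances for every commit verified. */
typedef struct {
  GPtrArray *signapi_verifiers;   /* (element-type OstreeSign) */
} PullState;

static void
pull_state_init_verifiers (PullState *state)
{
  if (state->signapi_verifiers == NULL)
    state->signapi_verifiers = ostree_sign_get_all ();  /* one per engine */
}
```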
`ostree-repo-pull.c` is rather monstrous; I plan to split it
up a bit. There's actually already a `pull-private.h` but
that's just for the binding verification API. I think that one
isn't really pull specific. Let's move it into the "catchall"
`repo.c`.
Previously we would pass the `verification-key` and `verification-file`
to all backends, ignoring errors from loading keys until we
found one that worked.
Instead, change the options to be `verification-<engine>-key`
and `verification-<engine>-file`, and then
rework this to use standard error handling; barf explicitly if
we can't load the public keys for example. Preserve
the semantics of accepting the first valid signature. The
first signature error is captured, the others are currently
compressed into a `(and %d more)` prefix.
And now that I look at this more closely there's a lot of
duplication between the two code paths in pull.c for verifying;
will dedup this next.
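For illustration, a hedged sketch of the first-error-plus-count
aggregation described above (the helper name is hypothetical):
```c
#include <gio/gio.h>

/* Hedged sketch: report the first signature error verbatim and note how
 * many additional errors were collapsed. Takes ownership of first_error. */
static gboolean
throw_first_signature_error (GError **error,
                             GError  *first_error,
                             guint    n_errors)
{
  if (n_errors > 1)
    {
      g_set_error (error, first_error->domain, first_error->code,
                   "(and %u more) %s", n_errors - 1, first_error->message);
      g_error_free (first_error);
    }
  else
    {
      g_propagate_error (error, first_error);
    }
  return FALSE;
}
```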
I'm mainly doing this to sanity check the CI state right now.
However, I also want to more cleanly/clearly distinguish
the "sign" code from the "gpg" code.
Rename one function to include `gpg`.
For the other...I think what it's really doing is using the remote
config, so change it to include `remote` in its name.
If GPG support is disabled at build time, we should check whether either
of the options "gpg_verify" or "gpg_verify_summary" is set to TRUE,
instead of only checking whether they are passed via options while
pulling from a remote.
This fixes the assertion failure when calling
`ostree find-remotes --pull --mirror` (`tests/test-pull-collections.sh`)
if libostree has been compiled without GPG support.
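A hedged sketch of the distinction: the option name, message, and the
build-time guard macro below are illustrative, not the exact patch.
```c
#include <ostree.h>

/* Hedged sketch: in a build without GPG support, only reject the pull
 * if the option is actually TRUE, not merely present. */
static gboolean
check_gpg_options (GVariant *options, GError **error)
{
  gboolean gpg_verify = FALSE;
  (void) g_variant_lookup (options, "gpg-verify", "b", &gpg_verify);
#ifndef HAVE_GPGME   /* illustrative build-time guard */
  if (gpg_verify)
    {
      g_set_error_literal (error, G_IO_ERROR, G_IO_ERROR_NOT_SUPPORTED,
                           "GPG verification requested, but built without GPG support");
      return FALSE;
    }
#endif
  return TRUE;
}
```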
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
This code wasn't written with idiomatic GError usage; it's not standard
to construct an error up front and continually append to its
message. The exit from a function is usually `return TRUE`,
with error conditions before that.
Updating it to match style reveals what I think is a bug;
we were silently ignoring failure to parse key files.
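A minimal sketch of the idiom being moved to (file path and parsing step
are illustrative): errors are raised where they happen and the happy
path simply ends in `return TRUE`.
```c
#include <gio/gio.h>

static gboolean
load_one_key_file (const char *path, GError **error)
{
  g_autofree char *contents = NULL;
  if (!g_file_get_contents (path, &contents, NULL, error))
    return FALSE;          /* failure handled at the point it occurs */
  /* ... parse the key material ... */
  return TRUE;             /* normal exit */
}
```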
When we're only pulling a subset of the refs available in the remote, it
doesn't make sense to copy the remote's summary (which may not be valid
for the local repo). This makes the check here match the one done
several lines above when we decide whether to error out if there's no
remote summary available.
This extends the fix in https://github.com/ostreedev/ostree/pull/935 for
the case of collection-refs.
Also, add a unit test for this issue, based on the existing one in
pull-test.sh.
Improve error handling for signature checks -- pass through the real
reasons from the signature engines instead of using generic messages.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Return the collected errors from the signing engines in case
verification failed for the commit.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Change the API of the supporting functions `_load_public_keys()` and
`_ostree_repo_sign_verify()` -- pass a repo object and remote name
instead of an OtPullData object. This allows these functions to be used
outside of pull-related code.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
We no longer need stubs for the remotes' verification options
when ostree is built without GPG support.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Using 'g_warning()' inside the key loading function led to a false
failure: a key loading attempt with the wrong engine broke the
pulling process instead of the key being tried with the correct engine.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Add the function `_load_public_keys()` to pre-load public keys according
to the remote's configuration. If no keys are configured for the remote,
use the system-wide configuration.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Allow adding public and secret keys for the ed25519 module as base64 strings.
This allows a common API to be used for pulling and builtins without
knowledge of the signature algorithm in use.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Removed the `ostree_sign_detached_metadata_append` function from the
public API.
Renamed `metadata_verify` to `data_verify` to match its real
functionality.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Return `const char *` instead of a copy of the string -- this avoids
unneeded copying and memory leaks in some constructs.
Minor code cleanup and optimisations.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
If `verification-key` is set for a remote, it is used as the public key
for checking commits pulled from that remote.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
API changes (see the usage sketch after this list):
- added function `ostree_sign_add_pk()` for using multiple public keys.
- `ostree_sign_set_pk()` now replaces all previously added keys.
- added function `ostree_sign_load_pk()`, which allows loading keys from a file.
- `ostree_sign_ed25519_load_pk()` is able to load a raw key list from a file.
- use base64-encoded public and private ed25519 keys for the CLI and key files.
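A hedged usage sketch of the API listed above; the exact signatures and
the "filename" option key are assumptions made for illustration.
```c
#include <ostree.h>

static gboolean
setup_ed25519_keys (const char *base64_pk, const char *keys_path, GError **error)
{
  g_autoptr(OstreeSign) sign = ostree_sign_get_by_name ("ed25519", error);
  if (sign == NULL)
    return FALSE;

  /* Replace any previously configured keys with one base64-encoded key */
  g_autoptr(GVariant) pk = g_variant_ref_sink (g_variant_new_string (base64_pk));
  if (!ostree_sign_set_pk (sign, pk, error))
    return FALSE;

  /* Additionally load keys from a file */
  g_autoptr(GVariantBuilder) builder =
    g_variant_builder_new (G_VARIANT_TYPE ("a{sv}"));
  g_variant_builder_add (builder, "{sv}", "filename",
                         g_variant_new_string (keys_path));
  g_autoptr(GVariant) options = g_variant_ref_sink (g_variant_builder_end (builder));
  return ostree_sign_load_pk (sign, options, error);
}
```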
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
For repo structure directories like `objects`, `refs`, etc... we should
be more permissive and let the system's `umask` narrow down the
permission bits as wanted.
This came up in a context where we want to be able to have read/write
access on an OSTree repo on NFS from two separate OpenShift apps by
using supplemental groups[1] so we don't require SCCs for running as the
same UID (supplemental groups are part of the default restricted SCC).
[1] https://docs.openshift.com/container-platform/3.11/install_config/persistent_storage/persistent_storage_nfs.html#nfs-supplemental-groups
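A minimal sketch of the permission model described above (path and
function name are illustrative): create the directories with
group-writable bits and let the process umask narrow them.
```c
#include <errno.h>
#include <gio/gio.h>

static gboolean
ensure_repo_dir (const char *path, GError **error)
{
  /* 0775 rather than a hardcoded 0755; umask decides the final bits */
  if (g_mkdir_with_parents (path, 0775) != 0)
    {
      int errsv = errno;
      g_set_error (error, G_IO_ERROR, g_io_error_from_errno (errsv),
                   "Creating %s: %s", path, g_strerror (errsv));
      return FALSE;
    }
  return TRUE;
}
```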
I was hitting `SIGSEGV` when running `cosa build` and narrowed it down
to #1954. What's happening here is that because we're using the default
context, when we unref it in the out path, it may not actually destroy
the `GSource` if it (the context) is still ref'ed elsewhere. So then,
we'd still get events from it if subsequent operations iterated the
context.
This patch is mostly a revert of #1954, except that we still keep a ref
on the `GSource`. That way it is always safe to destroy it afterwards.
(And I've also added a comment to explain this better.)
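A minimal sketch of the ownership pattern, using plain GLib calls rather
than the actual fetcher code: keep our own reference on the `GSource` so
destroying it later is always safe even if the context is shared.
```c
#include <glib.h>

static gboolean
on_timeout (gpointer data)
{
  return G_SOURCE_CONTINUE;
}

static void
run_with_timer (GMainContext *context)
{
  GSource *timer = g_timeout_source_new_seconds (1);
  g_source_set_callback (timer, on_timeout, NULL, NULL);
  g_source_attach (timer, context);   /* context takes its own reference */

  /* ... iterate the context, do work ... */

  g_source_destroy (timer);           /* detach from the context */
  g_source_unref (timer);             /* drop our own reference */
}
```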
We're creating the timer source and then passing ownership to the
context, but because we didn't free the pointer, we would still call
`g_source_destroy` in the exit path. We'd do this right after doing
`unref` on the context too, which would have already destroyed and
unref'ed the source.
Drop that and just restrict the scope of that variable down to make
things more obvious.
Just noticed this after reviewing #1953.
Some gpg-named functions/variables are now used for any signature
system, so remove the "gpg_" prefix from them to avoid confusion.
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Closes: #1889
Approved by: cgwalters
Do not build the code related to GPG signing and verification if
GPGME support is disabled.
Public functions return the error 'G_IO_ERROR_NOT_SUPPORTED' if a
GPG-related check is requested.
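A hedged sketch of the stub pattern; the guard macro, function name, and
message below are illustrative, not the actual libostree symbols.
```c
#include <gio/gio.h>

#ifdef OSTREE_DISABLE_GPGME   /* illustrative build-time guard */
/* Hypothetical example of a public GPG entry point compiled as a stub */
gboolean
example_gpg_verify_stub (GError **error)
{
  g_set_error_literal (error, G_IO_ERROR, G_IO_ERROR_NOT_SUPPORTED,
                       "GPG support was disabled at build time");
  return FALSE;
}
#endif
```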
Signed-off-by: Denis Pynkin <denis.pynkin@collabora.com>
Closes: #1889
Approved by: cgwalters