docs: Remove

This is moved to https://gitlab.com/bootc-org/documentation for
now.
Colin Walters 2024-04-02 15:10:06 -04:00
parent 9fb497c949
commit 5dfd201832
14 changed files with 2 additions and 761 deletions

View File

@ -4,8 +4,6 @@ on:
pull_request:
branches:
- main
paths-ignore:
- "docs/**"
workflow_dispatch:

View File

@ -1,45 +0,0 @@
name: Docs
on:
push:
branches:
- main
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
concurrency:
group: "pages"
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
- name: Setup Pages
uses: actions/configure-pages@1f0c5cde4bc74cd7e1254d0cb4de8d49e9068c7d # v4.0.0
- name: Build with Jekyll
uses: actions/jekyll-build-pages@b178f9334b208360999a0a57b523613563698c66 # v1.0.12
with:
source: ./docs
destination: ./_site
- name: Upload artifact
uses: actions/upload-pages-artifact@56afc609e74202658d3ffba0e8f6dda462b719fa # v3.0.1
# Deployment job
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # v4.0.5

View File

@ -8,8 +8,7 @@ metadata:
build.appstudio.redhat.com/target_branch: "{{target_branch}}"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression:
event == "pull_request" && target_branch
== "main" && ! "docs/***".pathChanged()
event == "pull_request" && target_branch == "main"
creationTimestamp: null
labels:
appstudio.openshift.io/application: centos-bootc

View File

@ -8,8 +8,7 @@ metadata:
build.appstudio.redhat.com/target_branch: "{{target_branch}}"
pipelinesascode.tekton.dev/max-keep-runs: "3"
pipelinesascode.tekton.dev/on-cel-expression:
event == "push" && target_branch
== "main" && ! "docs/***".pathChanged()
event == "push" && target_branch == "main"
creationTimestamp: null
labels:
appstudio.openshift.io/application: centos-bootc

View File

@ -1,14 +0,0 @@
# Bundler setup for jekyll to be deployed on github pages.
source "https://rubygems.org"
# Note that we're using the github-pages gem to mimic the GitHub pages
# automated setup. That installs jekyll, a default set of jekyll
# plugins, and a modified jekyll configuration.
group :jekyll_plugins do
gem "github-pages"
gem "jekyll-remote-theme"
end
# Prefer the GitHub flavored markdown version of kramdown.
gem "kramdown-parser-gfm"

View File

@ -1,58 +0,0 @@
title: centos/centos-bootc
description: centos-bootc documentation
baseurl: "/centos-bootc"
url: "https://centos.github.io"
# Comment above and use below for local development
# url: "http://localhost:4000"
permalink: /:title/
markdown: kramdown
kramdown:
typographic_symbols:
ndash: "--"
mdash: "---"
# Exclude the README and the bundler files that would normally be
# ignored by default.
exclude:
- README.md
- Gemfile
- Gemfile.lock
- prep-docs.sh
- vendor/
# These are copies of the apidoc/html and man/html directories. Run
# prep-docs.sh before jekyll to put it in place.
include: [reference, man]
remote_theme: just-the-docs/just-the-docs@v0.4.1
plugins:
- jekyll-remote-theme
color_scheme: coreos
# Aux links for the upper right navigation
aux_links:
"centos-bootc on GitHub":
- "https://github.com/centos/centos-bootc"
footer_content: 'Copyright &copy; <a href="https://www.redhat.com">Red Hat, Inc.</a> and <a href="https://github.com/containers">others</a>.'
# Footer last edited timestamp
last_edit_timestamp: true
last_edit_time_format: "%b %e %Y at %I:%M %p"
# Footer "Edit this page on GitHub" link text
gh_edit_link: true
gh_edit_link_text: "Edit this page on GitHub"
gh_edit_repository: "https://github.com/centos/centos-bootc"
gh_edit_branch: "main"
gh_edit_source: docs
gh_edit_view_mode: "tree"
compress_html:
clippings: all
comments: all
endings: all
startings: []
blanklines: false
profile: false

View File

@ -1 +0,0 @@
$link-color: #53a3da;

View File

@ -1,143 +0,0 @@
---
nav_order: 3
---
# This document has moved
See <https://bootc-org.gitlab.io/documentation/>
---
---
## Configuring systems via container builds
A key part of the idea of this project is that every tool and technique
one knows for building application container images should apply
to building bootable host systems.
Most configuration for a Linux system boils down to writing a file (`COPY`)
or executing a command (`RUN`).
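As a trivial sketch of that (the base image reference matches the one produced by this project; the config file and package here are hypothetical placeholders):
```dockerfile
FROM quay.io/centos-bootc/centos-bootc:stream9
# Write a file into the image...
COPY 10-custom.conf /etc/sysctl.d/10-custom.conf
# ...and run a command, exactly as in an application container build.
RUN dnf -y install tmux && dnf clean all
```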
## Embedding application containers
A common pattern is to add "application" containers that have references
embedded in the bootable host container.
For example, one can use the [podman systemd](https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html)
configuration files, embedded via a container build instruction:
```dockerfile
FROM <base>
COPY foo.container /usr/share/containers/systemd
```
In this model, the application containers will be fetched and run on firstboot.
A key choice is whether to refer to images by digest, or by tag. Referring
to images by digest ensures repeatable deployments, but requires shipping
host OS updates to update the workload containers. Referring to images
by tag allows you to use other tooling to dynamically update the workload
containers.
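As a sketch of what such a `foo.container` quadlet file can contain (the image reference and published port are placeholders):
```systemd
[Unit]
Description=Example application container
[Container]
# Pin by digest for repeatable deployments, or use a tag to track updates.
Image=quay.io/example/foo@sha256:<digest>
PublishPort=8080:8080
[Install]
WantedBy=multi-user.target
```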
## Users and groups
### Generic images
A common use case is to produce "generic" or "unconfigured" images that
don't have any hardcoded passwords or SSH keys and allow the end user to
inject them. Per the [install doc](install.md) this is how the primary base
image produced by this project works. Adding `cloud-init` into your image
works across many (but not all) environments.
Another pattern is to add users only when generating a disk image (not
in the container image); this is used by [bootc-image-builder](https://github.com/osbuild/bootc-image-builder).
### Injecting users at build time
However, some use cases really want an opinionated default authentication
story.
This is a highly complex topic. The short version is that instead of invoking
e.g. `RUN useradd someuser` in a container build (or indirectly via an RPM
`%post` script), you should use [sysusers.d](https://www.freedesktop.org/software/systemd/man/latest/sysusers.d.html#).
(Even better, if this is for code executed as part of a systemd unit, investigate
using `DynamicUser=yes`.)
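A minimal sketch of such a `sysusers.d` entry (the account name `exampled` is a placeholder), shipped as e.g. `/usr/lib/sysusers.d/exampled.conf` in the image:
```text
# Type Name     ID GECOS                     Home
u      exampled -  "Example service account" /var/lib/exampled
```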
However, `sysusers.d` only works for "system" users, not human login users.
There are also [systemd JSON user records](https://systemd.io/USER_RECORD/),
which can be put into a container image; however, at the time of this
writing, while an `sshAuthorizedKeys` field exists, it is not synchronized
directly in a way that the SSH daemon can consume.
It is likely that at some point in the future the operating system upgrade logic
(bootc/ostree) will learn to just automatically reconcile changes to `/etc/passwd`.
At the current time, a workaround is to include a systemd unit which automatically
reconciles things at boot time, via e.g.
```text
ExecStart=/bin/sh -c 'getent passwd someuser || useradd someuser'
```
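A fuller sketch of such a unit (the unit and user names are placeholders):
```systemd
[Unit]
Description=Reconcile locally required users
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c 'getent passwd someuser >/dev/null || useradd someuser'
[Install]
WantedBy=multi-user.target
```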
For SSH keys, one approach is to hardcode the SSH authorized keys under `/usr`
so it's part of the clearly immutable state:
```dockerfile
RUN mkdir -p /usr/etc-system/ && \
    echo 'AuthorizedKeysFile /usr/etc-system/%u.keys' >> /etc/ssh/sshd_config.d/30-auth-system.conf && \
    echo 'ssh-ed25519 AAAAC3Nza... root@example.com' > /usr/etc-system/root.keys && chmod 0600 /usr/etc-system/root.keys
```
Finally, of course, at scale one will often want systems configured
to use the network as the source of truth for authentication, using e.g. [FreeIPA](https://www.freeipa.org/).
That avoids the need to hardcode any users or keys in the image, just the
setup necessary to contact the IPA server.
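A sketch of the build-time half of that (only installing the client tooling; enrollment against your IPA server happens at provisioning time, outside the image):
```dockerfile
# Install the IPA client so the system can be enrolled after deployment.
RUN dnf -y install ipa-client && dnf clean all
```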
### Avoiding home directory persistence
In a default installation, the `/root` and `/home` directories are persistent,
and are symbolic links to `/var/roothome` and `/var/home` respectively. This
persistence is typically highly desirable for machines that are somewhat "pet"
like, from desktops to some types of servers, and often undesirable for
scale-out servers and edge devices.
For most use cases that don't want a persistent home directory, it's
recommended to inject a systemd mount unit like the following (one for each of
these directories) that uses [tmpfs](https://www.kernel.org/doc/html/latest/filesystems/tmpfs.html):
```systemd
[Unit]
Description=Create a temporary filesystem for /var/home
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target
[Mount]
What=tmpfs
Where=/var/home
Type=tmpfs
```
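A sketch of wiring that into a container build (assuming the unit above is saved as `var-home.mount`; mount units must be named after their mount point), using a static-enablement symlink:
```dockerfile
COPY var-home.mount /usr/lib/systemd/system/
RUN mkdir -p /usr/lib/systemd/system/local-fs.target.wants && \
    ln -s ../var-home.mount /usr/lib/systemd/system/local-fs.target.wants/var-home.mount
```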
If your systems management tooling discovers SSH keys dynamically
on boot (cloud-init, afterburn, etc.) this helps ensure that there's fewer
conflicts around "source of truth" for keys.
### Usage of `ostree container commit`
While you may find `RUN ostree container commit` as part of some
container builds, this project specifically aims to use
`root.transient`, which obviates most of the incompatibility
detection done by that command.
In other words, it's not needed and, in recent versions, does very little. We are likely
to introduce a new static-analyzer-style process with a different name
and functionality in the future.
## Example repositories
The following git repositories have some useful examples:
- [centos-boot-examples](https://gitlab.com/CentOS/cloud/centos-boot-examples)
- [coreos/layering-examples](https://github.com/coreos/layering-examples)
- [openshift/rhcos-image-layering-examples](https://github.com/openshift/rhcos-image-layering-examples/)

View File

@ -1,64 +0,0 @@
---
nav_order: 4
---
# Project CentOS boot tier-1 and cloud agents
The tier-0 and tier-1 images today do not contain any special
hypervisor-specific agents. For example, the following specifically are not included:
- cloud-init
- vmware-guest-agent
- google-guest-agent
- qemu-guest-agent
- ignition
- afterburn
etc.
## Unnecessary on bare metal
For deployment to bare metal using e.g. Anaconda or `bootc install`, none of
these are necessary.
## Unnecessary for "immutable infrastructure" on hypervisors
A model we aim to emphasize is having the container image define the
"source of truth" for system state. This conflicts with using e.g. `cloud-init`
and having it fetch instance metadata and raises questions around changes to the
instance metadata and when they apply.
Related to this, `vmware-guest-agent` includes a full "backdoor" mechanism to
log into the OS.
## Should be containerized anyways
In general, and particularly for e.g. `vmware-guest-agent`, it makes more sense to
containerize it.
## Easy to install afterward
Many of these (particularly the first ones mentioned) are easy to install in a
custom image.
You can build your own derived image that includes e.g. vmware-guest-agent if
required alongside all other desired customizations.
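As a sketch, such a derivation can be as small as this (assuming the stream9 base image; `open-vm-tools` is used as an illustrative agent package, swap in whatever your platform needs):
```dockerfile
FROM quay.io/centos-bootc/centos-bootc:stream9
# Add the guest agent required by the target hypervisor.
RUN dnf -y install open-vm-tools && dnf clean all
```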
## Fully supported if installed
It is supported to include these agents in your image if desired (whether as
part of the base image or containerized).
## What about Ignition
Ignition as shipped by CoreOS Container Linux derivatives has a lot of
advantages in providing a model that works smoothly across both bare metal and
virtualized scenarios.
It also has some compelling advantages over cloud-init at a technical level.
However, there is also significant overlap between a container-focused model of
the world and an Ignition-focused model.
More on this topic in [coreos.md](coreos.md).

View File

@ -1,21 +0,0 @@
text
# NOTE: As of the time of this writing, this kickstart only
# works with a Fedora 40+ (or ELN) installer ISO as it requires
# https://github.com/rhinstaller/anaconda/pull/5342
# Basic partitioning
clearpart --all --initlabel --disklabel=gpt
part /boot --size=1000 --fstype=ext4 --label=boot
part / --grow --fstype xfs
reqpart
ostreecontainer --url quay.io/centos-bootc/fedora-bootc:eln --no-signature-verification
# Or: quay.io/centos-bootc/centos-bootc-dev:stream9
firewall --disabled
services --enabled=sshd
# Only inject a SSH key for root
rootpw --iscrypted locked
# Add your example SSH key here!
#sshkey --username root "ssh-ed25519 <key> demo@example.com"
reboot

View File

@ -1,90 +0,0 @@
---
nav_order: 1
---
# Goals
This project's toplevel goal is to maintain default definitions for
base *bootable* container images, locked with Fedora ELN and CentOS Stream 9.
## Status
This is an in-development project not intended for production use yet.
## Container images
The primary output of this project is container images. The current
main development targets are [Fedora ELN](https://docs.fedoraproject.org/en-US/eln/)
and CentOS Stream 9.
### Distribution locked images
These images are intended to exactly match the content of the underlying distribution.
- `quay.io/centos-bootc/fedora-bootc:eln`
- `quay.io/centos-bootc/centos-bootc:stream9`
### Layered images
There are also layered images; for more information on these, see
[the centos-bootc-layered repository](https://gitlab.com/bootc-org/centos-bootc-layered).
### Development images
Some components of this project move quickly, and it's often useful to see things
as they appear in git `main` instead of waiting for package releases.
The following images track git main of selected components:
- `quay.io/centos-bootc/fedora-bootc-dev:eln`
- `quay.io/centos-bootc/centos-bootc-dev:stream9`
For more information, see [the dev repository](https://github.com/centos/centos-bootc-dev).
## Trying it out
See [install.md](./install.md).
## Understanding "tiers"
There is a "tier-0" image, but it is not yet being automatically built. The "tier-0"
contains:
- kernel
- systemd
- bootc
- selinux-policy-targeted
The tier-1 image is a reasonably large system and includes:
- NetworkManager, chrony
- openssh-server
- dnf (for installing packages in container builds)
- rpm-ostree (A lot of tooling uses this too)
The content set for these images is subject to change.
## Building
Here's an example command:
```shell
sudo rpm-ostree compose image --authfile ~/.config/containers/myquay.json --cachedir=cache -i --format=ociarchive centos-tier-0-stream9.yaml centos-tier-0-stream9.ociarchive
```
In some situations, copying to a local `.ociarchive` file is convenient. You
can also push to a registry with `--format=registry`.
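For example, a registry-targeted variant of the command above might look like this (the destination image reference is a placeholder):
```shell
sudo rpm-ostree compose image --authfile ~/.config/containers/myquay.json \
  --cachedir=cache -i --format=registry \
  centos-tier-0-stream9.yaml quay.io/yourorg/centos-tier-0:stream9
```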
More information at <https://coreos.github.io/rpm-ostree/container/>
## Badges
| Badge | Description | Service |
| ----------------------- | -------------------- | ------------ |
| [![Renovate][1]][2] | Dependencies | Renovate |
| [![Pre-commit][3]][4] | Static quality gates | pre-commit |
[1]: https://img.shields.io/badge/renovate-enabled-brightgreen?logo=renovate
[2]: https://renovatebot.com
[3]: https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit
[4]: https://pre-commit.com/

View File

@ -1,112 +0,0 @@
---
nav_order: 2
---
# This document has moved
See <https://bootc-org.gitlab.io/documentation/>
---
---
## Trying out development builds
Before you build a [derived container image](https://gitlab.com/bootc-org/examples),
you may want to just get a feel for the system, try out `bootc`, etc. The bootable
container images produced by this project are intended to be deployable in every
physical and virtual environment that is supported by CentOS Stream 9 today.
First, an important note to understand: the generic base container images
do *not* include any default passwords or SSH keys.
## Local virtualization (Linux & macOS)
### podman desktop plugin (currently macOS only)
There is a
[podman desktop extension](https://github.com/containers/podman-desktop-extension-bootc)
dedicated to this.
### podman-bootc-cli
A new [podman-bootc-cli tool](https://gitlab.com/bootc-org/podman-bootc-cli)
project offers a dedicated and streamlined CLI interface for running images, and
in the future, it will become the backend for the podman desktop plugin.
### bootc-image-builder
The
[bootc-image-builder tool](https://github.com/osbuild/bootc-image-builder)
supports generating local-virtualization ready types such as `qcow2` and `.raw`
from the bootable container image.
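As a rough sketch of an invocation (the builder image reference and flags reflect that project's README at the time of writing and may change; check its documentation):
```shell
# Produce a qcow2 disk image in ./output from the stream9 bootable container.
sudo podman run --rm -it --privileged \
  -v ./output:/output \
  quay.io/centos-bootc/bootc-image-builder:latest \
  --type qcow2 \
  quay.io/centos-bootc/centos-bootc:stream9
```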
### The dedicated cloud-init image
Many people who just want to "try things out" will find it easiest to start
with
[the cloud image](https://gitlab.com/bootc-org/centos-bootc-layered/-/tree/main/cloud).
It's a separate container image because cloud-init does not work on every deployment
target, and it also serves as an effective demonstration of layering.
## Production-oriented physical installation
This project uses the same
[Anaconda](https://anaconda-installer.readthedocs.io/en/latest/intro.html)
installer as the package-based CentOS. Here's an example kickstart:
```text
# Basic setup
text
network --bootproto=dhcp --device=link --activate
# Basic partitioning
clearpart --all --initlabel --disklabel=gpt
reqpart --add-boot
part / --grow --fstype xfs
# Here's where we reference the container image to install - notice the kickstart
# has no `%packages` section! What's being installed here is a container image.
ostreecontainer --url quay.io/centos-bootc/centos-bootc:stream9 --no-signature-verification
firewall --disabled
services --enabled=sshd
# Only inject a SSH key for root
rootpw --iscrypted locked
sshkey --username root "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOQkQHeKan3X+g1jILw4a3KtcfEIED0kByKGWookU7ev walters+2015-general@verbum.org"
reboot
```
## Production-oriented cloud virtualization
### Generating AMIs, ISO and qcow2 (and more)
The [bootc-image-builder tool](https://github.com/osbuild/bootc-image-builder)
which supports `.qcow2` usable in OpenStack/libvirt etc. also supports generating
Amazon Machine Images, and other production-oriented IaaS formats as well as a
self-installing ISO. For more, please see the docs for that project.
After a disk image is generated, further updates will come from the container image.
### Replacing existing cloud images
A toplevel goal of this project is that the "source of truth" for Linux
operating system management is a container image registry - as opposed to e.g. a
set of qcow2 OpenStack images or AMIs, etc. Generating cloud disk images
gives fast boots into the target container image state, but also requires
maintaining infrastructure to e.g. manage garbage collection or versioning of
these images.
The latest releases of `bootc` have support for
`bootc install to-filesystem --replace=alongside`. More about this core mechanic
in the
[bootc install docs](https://github.com/containers/bootc/blob/main/docs/install.md).
Here's an example set of steps to execute; this could be done via e.g.
[cloud-init](https://cloudinit.readthedocs.io/en/latest/reference/index.html)
configuration.
```shell
dnf -y install podman skopeo
podman run --rm --privileged --pid=host -v /:/target -v /var/lib/containers:/var/lib/containers --security-opt label=type:unconfined_t <yourimage> bootc install to-filesystem --karg=console=ttyS0,115200n8 --replace=alongside /target
reboot
```

View File

@ -1,40 +0,0 @@
---
nav_order: 4
---
# This document has moved
See <https://bootc-org.gitlab.io/documentation/>
---
---
## Relationship with other projects
## Fedora CoreOS
The primary focus of Fedora CoreOS is on being a "golden image" that
can be configured via Ignition to run containers. In the Fedora CoreOS
model, the OS is "lifecycled" separately from the workload and configuration.
This project is explicitly designed to be derived from via container
tooling, not Ignition. While we will support a "just run the golden image" flow,
adding to and customizing the base image with extra packages and content is the
expected norm. An important corollary to this is that OS updates are "lifecycled"
with the workload and configuration.
## RHEL CoreOS
We sometimes say that RHEL CoreOS
[has FCOS as an upstream](https://github.com/openshift/os/blob/master/docs/faq.md#q-what-is-coreos)
but this is only kind of true; RHEL CoreOS includes a subset of FCOS content,
and is lifecycled with OCP.
An explicit goal of this project is to produce bootable container images
lifecycled with the base OS, that can be used as *base images* for RHEL CoreOS.
For more on this, see e.g.
<https://github.com/openshift/os/issues/799>
## RHEL for Edge
It is an explicit goal that CentOS boot also becomes a "base input" to RHEL for Edge.

View File

@ -1,167 +0,0 @@
---
nav_order: 3
---
# This document has moved
See <https://bootc-org.gitlab.io/documentation/>
---
---
## Operating system content and usage
## Configuring systemd units
To add a custom systemd unit:
```dockerfile
COPY mycustom.service /usr/lib/systemd/system
RUN mkdir -p /usr/lib/systemd/system/default.target.wants && \
    ln -s ../mycustom.service /usr/lib/systemd/system/default.target.wants/mycustom.service
```
It will *not* work currently to do `RUN systemctl enable mycustom.service` instead
of the second line - unless you also write a
[systemd preset file](https://www.freedesktop.org/software/systemd/man/latest/systemd.preset.html)
enabling that unit.
### Static enablement versus presets
systemd presets are designed for "run once" semantics - thereafter, OS upgrades
won't cause new services to start. In contrast, "static enablement" by creating
the symlink (as is done above) bypasses the preset logic.
In general, it's recommended to follow the "static enablement" approach because
it more closely aligns with the "immutable infrastructure" model.
### Using presets
If you nevertheless want to use presets instead of "static enablement", one
recommended pattern to avoid this problem (and somewhat of a best practice
anyway) is to use a common prefix (e.g. `examplecorp-`) for all of your
custom systemd units, resulting in `examplecorp-checkin.service`,
`examplecorp-agent.service`, etc.
Then you can write a single systemd preset file to e.g.
`/usr/lib/systemd/system-preset/50-examplecorp.preset` that contains:
```systemd
enable examplecorp-*
```
## Automatic updates enabled by default
The base image here enables the
[bootc-fetch-apply-updates.service](https://github.com/containers/bootc/blob/main/manpages-md-extra/bootc-fetch-apply-updates.service.md)
systemd unit which automatically finds updated container images from the
registry and will reboot into them.
### Controlling automatic updates
First, one can disable the timer entirely as part of a container build:
```dockerfile
RUN systemctl mask bootc-fetch-apply-updates.timer
```
This is useful for environments where updating systems manually is
preferred, or where another tool (e.g. Ansible) schedules and executes the
updates.
Alternatively, one can use a systemd "drop-in" to override the timer.
For example, to schedule updates once a week, create a file named
e.g. `50-weekly.conf` containing:
```systemd
[Timer]
# Clear previous timers
OnBootSec=
OnBootSec=1w
OnUnitInactiveSec=1w
```
Then add it into your container:
```dockerfile
RUN mkdir -p /usr/lib/systemd/system/bootc-fetch-apply-updates.timer.d
COPY 50-weekly.conf /usr/lib/systemd/system/bootc-fetch-apply-updates.timer.d
```
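On a booted system, you can then confirm the drop-in is in effect with standard systemd tooling:
```shell
# Show the timer unit together with its drop-ins, and the next scheduled run.
systemctl cat bootc-fetch-apply-updates.timer
systemctl list-timers bootc-fetch-apply-updates.timer
```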
## Air-gapped and disconnected updates
For environments without a direct connection to a centralized container
registry, we encourage mirroring to an on-premises registry if possible, or manually
moving container images using `skopeo copy`.
See [this blog](https://www.redhat.com/sysadmin/manage-container-registries)
for an example.
For systems that require manual updates via USB drives, this procedure
describes how to use `skopeo` and `bootc switch`.
Copy the image to the USB drive:
```shell
skopeo copy docker://[registry]/[path to image] dir://run/media/$USER/$DRIVE/$DIR
```
Note that the `dir` transport creates a number of files, so it's recommended
to place the image in its own directory. If the image is already in local
container storage, the `containers-storage` transport will transfer it
directly from the system to the drive:
```shell
skopeo copy containers-storage:[image]:[tag] dir://run/media/$USER/$DRIVE/$DIR
```
On the client system, insert the USB drive and mount it:
```shell
mount /dev/$DRIVE /mnt
```
`bootc switch` directs the system to look at this mount point for future
updates; it only needs to be run once if you wish to continue consuming
updates from USB devices. If the mount point changes, simply run the command
again to point at the new location. We recommend using the same location
each time to simplify this.
```shell
bootc switch --transport dir /mnt/$DIR
```
Finally, `bootc upgrade` will 1) check for updates and 2) reboot the system
when `--apply` is used:
```shell
bootc upgrade --apply
```
## Filesystem interaction and layout
At "build" time, this image runs the same as any other OCI image where
the default filesystem setup is an `overlayfs` for `/` that captures all
changes written - to anywhere.
However, the default runtime (when booted on a virtual or physical host system,
with systemd as pid 1) there are some rules around persistence and writability.
The reason for this is that the primary goal is that base operating system
changes (updating kernels, binaries, configuration) are managed in your container
image and updated via `bootc upgrade`.
In general, aim for most content in your container image to be underneath
the `/usr` filesystem. This is mounted read-only by default, and this
matches many other "immutable infrastructure" operating systems.
The `/etc` filesystem defaults to persistent and writable - and is the expected
place to put machine-local state (static IP addressing, hostnames, etc).
All other machine-local persistent data should live underneath `/var` by default;
for example, the default is for systemd to persist the journal to `/var/log/journal`.
### Understanding `root.transient`
At a technical level today, the base image uses the
[bootc](https://github.com/containers/bootc) project, which uses
[ostree](https://github.com/ostreedev/ostree) as a backend. However, unlike many
other ostree projects, this base image enables the `root.transient` feature from
[ostree-prepare-root](https://github.com/ostreedev/ostree/blob/main/man/ostree-prepare-root.xml#L121).
This has two primary effects:
- Content placed underneath `/var` at container build time is moved to
`/usr/share/factory/var`, and on firstboot, updated files are handled via a
systemd `tmpfiles.d` rule that copies new files (see
`/usr/lib/tmpfiles.d/ostree-tmpfiles.conf`)
- The default `/` filesystem is writable, but not persistent. All content added
in the container image in other toplevel directories (e.g. `/opt`) will be
refreshed from the new container image on updates, and any modifications will
be lost.