doc: Split overview into chapters, expand a bit

Colin Walters 2013-08-22 09:17:08 -04:00
parent d58d6a6ef2
commit 032f1316ad
1 changed file with 109 additions and 104 deletions


@@ -1,6 +1,6 @@
<?xml version="1.0"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" [
<!ENTITY version SYSTEM "../version.xml">
]>
<part id="overview">
@@ -22,110 +22,115 @@
      images. Instead, OSTree sits between those levels, offering a
      blend of the advantages (and disadvantages) of both.
    </para>
<simplesect id="ostree-package-comparison">
<title>Comparison with "package managers"</title>
<para>
Because OSTree is designed for deploying core operating
systems, a comparison with traditional "package managers" such
as dpkg and rpm is illustrative. Packages are traditionally
composed of partial filesystem trees with metadata and scripts
attached, and these are dynamically assembled on the client
machine, after a process of dependency resolution.
</para>
<para>
In contrast, OSTree only supports recording and deploying
<emphasis>complete</emphasis> (bootable) filesystem trees. It
has no built-in knowledge of how a given filesystem tree was
generated or the origin of individual files, or dependencies,
descriptions of individual components.
</para>
<para>
The OSTree core emphasizes replicating read-only trees via
HTTP. It is designed for the model where a build server
assembles one or more trees, and these are replicated to
clients, which can choose between fully assembled (and
hopefully tested) trees.
</para>
<para>
However, it is entirely possible to use OSTree underneath a
package system; For example, when installing a package, rather
than mutating the currently running filesystem, the package
manager could assemble a new filesystem tree that includes the
new package, record it in the local OSTree repository, and
then set it up for the next boot. To support this model,
OSTree provides an (introspectable) C shared library.
</para>
</simplesect>
<simplesect id="ostree-block-comparison"> </chapter>
<title>Comparison with block/image replication</title>
<para> <chapter id="ostree-package-comparison">
OSTree shares some similarity with "dumb" replication and <title>Comparison with "package managers"</title>
stateless deployments, such as the model common in "cloud" <para>
deployments where nodes are booted from an (effectively) Because OSTree is designed for deploying core operating
readonly disk, and user data is kept on a different volumes. systems, a comparison with traditional "package managers" such
The advantage of "dumb" replication, shared by both OSTree and as dpkg and rpm is illustrative. Packages are traditionally
the cloud model, is that it's <emphasis>reliable</emphasis> composed of partial filesystem trees with metadata and scripts
and <emphasis>predictable</emphasis>. attached, and these are dynamically assembled on the client
</para> machine, after a process of dependency resolution.
<para> </para>
But unlike many default image-based deployments, OSTree <para>
supports a persistent, writable <literal>/etc</literal> that In contrast, OSTree only supports recording and deploying
is preserved across upgrades. <emphasis>complete</emphasis> (bootable) filesystem trees. It
</para> has no built-in knowledge of how a given filesystem tree was
<para> generated or the origin of individual files, or dependencies,
Because OSTree operates at the Unix filesystem layer, it works descriptions of individual components.
on top of any filesystem or block storage layout; it's </para>
possible to replicate a given filesystem tree from an OSTree <para>
repository into both a BTRFS disk and an XFS-on-LVM The OSTree core emphasizes replicating read-only OS trees via
deployment. Note: OSTree will transparently take advantage of HTTP, and where the OS includes (if desired) an entirely
some BTRFS features if deployed on it. separate mechanism to install applications, stored in <filename
</para> class='directory'>/var</filename> if they're system global, or
</simplesect> <filename class='directory'>/home</filename> for per-user
application installation.
</para>
<para>
However, it is entirely possible to use OSTree underneath a
package system, where the contents of <filename
class='directory'>/usr</filename> are computed on the client.
For example, when installing a package, rather than mutating the
currently running filesystem, the package manager could assemble
a new filesystem tree that includes the new package, record it
in the local OSTree repository, and then set it up for the next
boot. To support this model, OSTree provides an
(introspectable) C shared library.
</para>
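    <para>
      As a minimal sketch of using that library (the repository path
      is the system default, the ref name is hypothetical, and error
      handling is abbreviated), a package manager could open the local
      repository and resolve a ref to the commit it would deploy:
    </para>
    <programlisting><![CDATA[
/* Sketch: open the system repository and resolve a hypothetical ref
 * to its commit checksum.  Build with:
 *   gcc example.c $(pkg-config --cflags --libs ostree-1)
 */
#include <ostree.h>

int
main (void)
{
  GError *error = NULL;
  GFile *path = g_file_new_for_path ("/ostree/repo");
  OstreeRepo *repo = ostree_repo_new (path);
  char *checksum = NULL;

  if (!ostree_repo_open (repo, NULL, &error))
    goto out;
  /* "exampleos/buildmaster/x86_64-runtime" is a hypothetical ref */
  if (!ostree_repo_resolve_rev (repo, "exampleos/buildmaster/x86_64-runtime",
                                FALSE, &checksum, &error))
    goto out;
  g_print ("Would deploy commit %s\n", checksum);

 out:
  if (error)
    g_printerr ("error: %s\n", error->message);
  g_free (checksum);
  g_object_unref (repo);
  g_object_unref (path);
  return error != NULL;
}
]]></programlisting>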
  </chapter>
<simplesect id="ostree-atomic-parallel-installation"> <chapter id="ostree-block-comparison">
<title>Atomic transitions between parallel-installable read-only filesystem trees</title> <title>Comparison with block/image replication</title>
<para> <para>
Another deeply fundamental difference between both package OSTree shares some similarity with "dumb" replication and
managers and image-based replication is that OSTree is stateless deployments, such as the model common in "cloud"
designed to parallel-install <emphasis>multiple deployments where nodes are booted from an (effectively)
versions</emphasis> of multiple readonly disk, and user data is kept on a different volumes.
<emphasis>independent</emphasis> operating systems. OSTree The advantage of "dumb" replication, shared by both OSTree and
relies on a new toplevel <filename the cloud model, is that it's <emphasis>reliable</emphasis>
class='directory'>ostree</filename> directory; it can in fact and <emphasis>predictable</emphasis>.
parallel install inside an existing OS or distribution </para>
occupying the physical <filename <para>
class='directory'>/</filename> root. But unlike many default image-based deployments, OSTree
</para> supports a persistent, writable <literal>/etc</literal> that
<para> is preserved across upgrades.
On each client machine, there is an OSTree repository stored </para>
in <filename class='directory'>/ostree/repo</filename>, and a <para>
set of "deployments" stored in <filename Because OSTree operates at the Unix filesystem layer, it works
class='directory'>/ostree/deploy/<replaceable>OSNAME</replaceable>/<replaceable>CHECKSUM</replaceable></filename>. on top of any filesystem or block storage layout; it's possible
Each deployment is primarily composed of a set of hardlinks to replicate a given filesystem tree from an OSTree repository
into the repository. This means each version is deduplicated; into plain ext4, BTRFS, XFS, or in general any Unix-compatible
an upgrade process only costs disk space proportional to the filesystem that supports hard links. Note: OSTree will
new files, plus some constant overhead. transparently take advantage of some BTRFS features if deployed
</para> on it.
<para> </para>
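    <para>
      As a sketch of that replication flow (the repository mode,
      remote name, URL, and ref here are all hypothetical), the same
      commands work regardless of the underlying filesystem:
    </para>
    <programlisting>
# Initialize a repository; "bare" mode stores content ready for
# hardlink checkouts
mkdir -p /ostree/repo
ostree --repo=/ostree/repo init --mode=bare
# Add a remote, then replicate a prebuilt tree over HTTP
ostree --repo=/ostree/repo remote add exampleos http://example.com/repo
ostree --repo=/ostree/repo pull exampleos exampleos/buildmaster/x86_64-runtime
</programlisting>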
  </chapter>

  <chapter id="ostree-atomic-parallel-installation">
    <title>Atomic transitions between parallel-installable read-only filesystem trees</title>
    <para>
      Another deeply fundamental difference between both package
      managers and image-based replication is that OSTree is
      designed to parallel-install <emphasis>multiple
      versions</emphasis> of multiple
      <emphasis>independent</emphasis> operating systems. OSTree
      relies on a new toplevel <filename
      class='directory'>ostree</filename> directory; it can in fact
      be installed in parallel inside an existing OS or distribution
      occupying the physical <filename
      class='directory'>/</filename> root.
    </para>
    <para>
      On each client machine, there is an OSTree repository stored
      in <filename class='directory'>/ostree/repo</filename>, and a
      set of "deployments" stored in <filename
      class='directory'>/ostree/deploy/<replaceable>OSNAME</replaceable>/<replaceable>CHECKSUM</replaceable></filename>.
      Each deployment is primarily composed of a set of hardlinks
      into the repository. This means each version is deduplicated;
      an upgrade process only costs disk space proportional to the
      new files, plus some constant overhead.
    </para>
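    <para>
      As a sketch (the OS name and ref are hypothetical, and this
      assumes the tree has already been pulled into <filename
      class='directory'>/ostree/repo</filename>), creating a new
      deployment looks like:
    </para>
    <programlisting>
# Each deployment checkout is mostly hardlinks into /ostree/repo
ostree admin deploy --os=exampleos exampleos/buildmaster/x86_64-runtime
# List the deployments available for the next boot
ostree admin status
</programlisting>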
    <para>
      In the model OSTree emphasizes, the read-only OS content is
      kept in the classic Unix <filename
      class='directory'>/usr</filename>; it comes with code to
      create a Linux read-only bind mount to prevent inadvertent
      corruption. There is exactly one writable <filename
      class='directory'>/var</filename> directory shared
      between the deployments of a given OS. The OSTree core code
      does not touch content in this directory; it is up to the code
      in each operating system to manage and upgrade that state.
    </para>
    <para>
      Finally, each deployment has its own writable copy of the
      configuration store <filename
      class='directory'>/etc</filename>. On upgrade, OSTree will
      perform a basic 3-way diff and apply any local changes to the
      new copy, while leaving the old untouched.
    </para>
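    <para>
      As a sketch of what this looks like on a client (output will
      vary), an administrator can upgrade and then inspect which local
      <literal>/etc</literal> changes were carried forward:
    </para>
    <programlisting>
# Pull the new tree and deploy it for the next boot, applying the
# 3-way /etc merge
ostree admin upgrade
# Show files in /etc that differ from the deployment's defaults
ostree admin config-diff
</programlisting>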
  </chapter>
</part>