diff --git a/doc/overview.xml b/doc/overview.xml
index 2c8e9576..7e535306 100644
--- a/doc/overview.xml
+++ b/doc/overview.xml
@@ -1,6 +1,6 @@
 ]>
@@ -22,110 +22,115 @@
 images. Instead, OSTree sits between those levels, offering a blend
 of the advantages (and disadvantages) of both.
-
-
-    Comparison with "package managers"
-
-      Because OSTree is designed for deploying core operating
-      systems, a comparison with traditional "package managers" such
-      as dpkg and rpm is illustrative. Packages are traditionally
-      composed of partial filesystem trees with metadata and scripts
-      attached, and these are dynamically assembled on the client
-      machine, after a process of dependency resolution.
-
-
-      In contrast, OSTree only supports recording and deploying
-      complete (bootable) filesystem trees. It
-      has no built-in knowledge of how a given filesystem tree was
-      generated or the origin of individual files, or dependencies,
-      descriptions of individual components.
-
-
-      The OSTree core emphasizes replicating read-only trees via
-      HTTP. It is designed for the model where a build server
-      assembles one or more trees, and these are replicated to
-      clients, which can choose between fully assembled (and
-      hopefully tested) trees.
-
-
-      However, it is entirely possible to use OSTree underneath a
-      package system; For example, when installing a package, rather
-      than mutating the currently running filesystem, the package
-      manager could assemble a new filesystem tree that includes the
-      new package, record it in the local OSTree repository, and
-      then set it up for the next boot. To support this model,
-      OSTree provides an (introspectable) C shared library.
-
-
-
-    Comparison with block/image replication
-
-      OSTree shares some similarity with "dumb" replication and
-      stateless deployments, such as the model common in "cloud"
-      deployments where nodes are booted from an (effectively)
-      readonly disk, and user data is kept on a different volumes.
-      The advantage of "dumb" replication, shared by both OSTree and
-      the cloud model, is that it's reliable
-      and predictable.
-
-
-      But unlike many default image-based deployments, OSTree
-      supports a persistent, writable /etc that
-      is preserved across upgrades.
-
-
-      Because OSTree operates at the Unix filesystem layer, it works
-      on top of any filesystem or block storage layout; it's
-      possible to replicate a given filesystem tree from an OSTree
-      repository into both a BTRFS disk and an XFS-on-LVM
-      deployment. Note: OSTree will transparently take advantage of
-      some BTRFS features if deployed on it.
-
-
+
+
+
+    Comparison with "package managers"
+
+      Because OSTree is designed for deploying core operating
+      systems, a comparison with traditional "package managers" such
+      as dpkg and rpm is illustrative. Packages are traditionally
+      composed of partial filesystem trees with metadata and scripts
+      attached, and these are dynamically assembled on the client
+      machine, after a process of dependency resolution.
+
+
+      In contrast, OSTree only supports recording and deploying
+      complete (bootable) filesystem trees. It
+      has no built-in knowledge of how a given filesystem tree was
+      generated, the origin of individual files, dependencies, or
+      descriptions of individual components.
+
+
+      The OSTree core emphasizes replicating read-only OS trees via
+      HTTP, where the OS can include (if desired) an entirely
+      separate mechanism to install applications, stored in /var if they're system global, or in
+      /home for per-user
+      application installation.
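To make the HTTP replication model above concrete, here is a minimal client-side pull sketch using the libostree C API. The remote name "exampleos" and the ref "exampleos/x86_64/standard" are hypothetical placeholders, error handling is abbreviated, and exact function signatures may differ across libostree versions.

    /* Sketch: replicate a ref from a configured HTTP remote into the
     * local repository.  Remote and ref names are hypothetical. */
    #include <ostree.h>

    static gboolean
    pull_os_tree (GError **error)
    {
      g_autoptr(GFile) path = g_file_new_for_path ("/ostree/repo");
      g_autoptr(OstreeRepo) repo = ostree_repo_new (path);
      char *refs[] = { "exampleos/x86_64/standard", NULL };

      if (!ostree_repo_open (repo, NULL, error))
        return FALSE;

      /* Fetch the ref and any objects not already present locally. */
      return ostree_repo_pull (repo, "exampleos", refs,
                               OSTREE_REPO_PULL_FLAGS_NONE,
                               NULL /* progress */, NULL /* cancellable */,
                               error);
    }

Because objects are content-addressed, a pull only transfers objects the local repository does not already have.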
+
+
+      However, it is entirely possible to use OSTree underneath a
+      package system, where the contents of /usr are computed on the client.
+      For example, when installing a package, rather than mutating the
+      currently running filesystem, the package manager could assemble
+      a new filesystem tree that includes the new package, record it
+      in the local OSTree repository, and then set it up for the next
+      boot. To support this model, OSTree provides an
+      (introspectable) C shared library.
+
+
-
-    Atomic transitions between parallel-installable read-only filesystem trees
-
-      Another deeply fundamental difference between both package
-      managers and image-based replication is that OSTree is
-      designed to parallel-install multiple
-      versions of multiple
-      independent operating systems. OSTree
-      relies on a new toplevel ostree directory; it can in fact
-      parallel install inside an existing OS or distribution
-      occupying the physical / root.
-
-
-      On each client machine, there is an OSTree repository stored
-      in /ostree/repo, and a
-      set of "deployments" stored in /ostree/deploy/OSNAME/CHECKSUM.
-      Each deployment is primarily composed of a set of hardlinks
-      into the repository. This means each version is deduplicated;
-      an upgrade process only costs disk space proportional to the
-      new files, plus some constant overhead.
-
-
-      The model OSTree emphasizes is that the OS read-only content
-      is kept in the classic Unix /usr; it comes with code to
-      create a Linux read-only bind mount to prevent inadvertent
-      corruption. There is exactly one /var writable directory shared
-      between each deployment for a given OS. The OSTree core code
-      does not touch content in this directory; it is up to the code
-      in each operating system for how to manage and upgrade state.
-
-
-      Finally, each deployment has its own writable copy of the
-      configuration store /etc. On upgrade, OSTree will
-      perform a basic 3-way diff, and apply any local changes to the
-      new copy, while leaving the old untouched.
-
-
+
+    Comparison with block/image replication
+
+      OSTree shares some similarity with "dumb" replication and
+      stateless deployments, such as the model common in "cloud"
+      deployments where nodes are booted from an (effectively)
+      readonly disk, and user data is kept on a different volume.
+      The advantage of "dumb" replication, shared by both OSTree and
+      the cloud model, is that it's reliable
+      and predictable.
+
+
+      But unlike many default image-based deployments, OSTree
+      supports a persistent, writable /etc that
+      is preserved across upgrades.
+
+
+      Because OSTree operates at the Unix filesystem layer, it works
+      on top of any filesystem or block storage layout; it's possible
+      to replicate a given filesystem tree from an OSTree repository
+      into plain ext4, BTRFS, XFS, or in general any Unix-compatible
+      filesystem that supports hard links. Note: OSTree will
+      transparently take advantage of some BTRFS features if deployed
+      on it.
+
+
+
+
+    Atomic transitions between parallel-installable read-only filesystem trees
+
+      Another deeply fundamental difference between both package
+      managers and image-based replication is that OSTree is
+      designed to parallel-install multiple
+      versions of multiple
+      independent operating systems. OSTree
+      relies on a new toplevel ostree directory; it can in fact
+      parallel-install inside an existing OS or distribution
+      occupying the physical / root.
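The "OSTree underneath a package system" paragraph earlier in this hunk is the use case the (introspectable) C shared library is aimed at. Below is a rough sketch of recording an assembled tree, under the assumption that a complete tree already exists in a staging directory; the staging path and branch name are hypothetical, the final step of creating a deployment for the next boot is omitted, and exact libostree signatures may vary between versions.

    /* Sketch: record an assembled filesystem tree as a new commit on a
     * branch in the local repository.  Paths and branch are hypothetical. */
    #include <ostree.h>

    static gboolean
    commit_assembled_tree (GError **error)
    {
      g_autoptr(GFile) repopath = g_file_new_for_path ("/ostree/repo");
      g_autoptr(OstreeRepo) repo = ostree_repo_new (repopath);
      g_autoptr(GFile) treedir = g_file_new_for_path ("/var/tmp/new-os-tree");
      g_autoptr(OstreeMutableTree) mtree = ostree_mutable_tree_new ();
      g_autoptr(GFile) root = NULL;
      g_autofree char *checksum = NULL;

      if (!ostree_repo_open (repo, NULL, error))
        return FALSE;
      if (!ostree_repo_prepare_transaction (repo, NULL, NULL, error))
        return FALSE;

      /* Scan the assembled tree and write its contents into the object
       * store; files already present are deduplicated by checksum. */
      if (!ostree_repo_write_directory_to_mtree (repo, treedir, mtree,
                                                 NULL, NULL, error))
        return FALSE;
      if (!ostree_repo_write_mtree (repo, mtree, &root, NULL, error))
        return FALSE;

      /* Create the commit object and point the branch at it. */
      if (!ostree_repo_write_commit (repo, NULL, "Install package foo", NULL,
                                     NULL, OSTREE_REPO_FILE (root),
                                     &checksum, NULL, error))
        return FALSE;
      ostree_repo_transaction_set_ref (repo, NULL,
                                       "exampleos/x86_64/standard", checksum);

      return ostree_repo_commit_transaction (repo, NULL, NULL, error);
    }

Since the repository deduplicates by content checksum, committing a tree that differs from the previous one by a single package only adds the new files.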
+
+
+      On each client machine, there is an OSTree repository stored
+      in /ostree/repo, and a
+      set of "deployments" stored in /ostree/deploy/OSNAME/CHECKSUM.
+      Each deployment is primarily composed of a set of hardlinks
+      into the repository. This means each version is deduplicated;
+      an upgrade process only costs disk space proportional to the
+      new files, plus some constant overhead.
+
+
+      The model OSTree emphasizes is that the OS read-only content
+      is kept in the classic Unix /usr; it comes with code to
+      create a Linux read-only bind mount to prevent inadvertent
+      corruption. There is exactly one /var writable directory shared
+      between each deployment for a given OS. The OSTree core code
+      does not touch content in this directory; it is up to each
+      operating system's code to manage and upgrade state.
+
+
+      Finally, each deployment has its own writable copy of the
+      configuration store /etc. On upgrade, OSTree will
+      perform a basic 3-way diff, and apply any local changes to the
+      new copy, while leaving the old untouched.
+
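Regarding the read-only bind mount over /usr mentioned above: on Linux, a bind mount only becomes read-only after a second remount step. The snippet below sketches that generic two-step mount(2) sequence; it illustrates the mechanism, not OSTree's actual implementation.

    /* Sketch: make /usr read-only via a bind mount plus a read-only remount.
     * Generic Linux technique, not OSTree's own code. */
    #include <stdio.h>
    #include <sys/mount.h>

    int
    main (void)
    {
      /* Step 1: bind /usr onto itself so it becomes its own mount point. */
      if (mount ("/usr", "/usr", NULL, MS_BIND, NULL) < 0)
        {
          perror ("bind mount /usr");
          return 1;
        }

      /* Step 2: remount that bind mount read-only; writes now fail with
       * EROFS, protecting the deployed OS content from inadvertent changes. */
      if (mount (NULL, "/usr", NULL, MS_BIND | MS_REMOUNT | MS_RDONLY, NULL) < 0)
        {
          perror ("remount /usr read-only");
          return 1;
        }

      return 0;
    }

After this, writes under /usr fail with EROFS, while /etc and /var remain writable for configuration and state.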