[Openembedded-architecture] Heterogeneous System Proposal
Paul Barker
paul at betafive.co.uk
Tue Dec 3 13:36:25 UTC 2019
On Tue, 3 Dec 2019, at 13:20, Beth Flanagan wrote:
> On Mon, Dec 2, 2019 at 9:37 PM Mark Hatle
> <mark.hatle at kernel.crashing.org> wrote:
> >
> > Problem Statement
> > -----------------
> > In the current world there is an increasing number of heterogeneous
> > systems being developed. Currently these components can be built
> > independently of each other, and then combined later. For ease of use,
> > it would be nice to be able to build these systems with a single build,
> > including generating a final bootable image. A heterogeneous system may
> > simply be different configurations for different components, different
> > operating systems for different components, or a system made up of
> > diverse processor architectures. Recently the Yocto Project has added
> > multiconfig in order to enable these types of configurations, but
> > suggested workflows for various configurations are needed to avoid
> > confusion and to avoid developers implementing their own schemes.
> >
> > Proposal
> > --------
> > While discussing this with other people, a number of terms have come
> > up, and after numerous discussions it is clear that people define the
> > various components differently. The purpose of this document is to
> > explain the pieces of this proposal, so that we can all use the same
> > terminology for the components necessary to build a heterogeneous
> > system.
> >
> > For a homogeneous build, the traditional Open Embedded/Yocto Project
> > components that are used include:
> >
> > * Build Configuration (local.conf)
> > * MACHINE
> >   * Specifies target device information, including hardware
> >     capabilities, console settings, boot image configurations, etc.
> >     These settings are used by MACHINE packages, as well as image
> >     generation.
> >   * Define MACHINE in the conf/local.conf file in the Build Directory.
> > * SOC_FAMILY (optional, but implied by the machine)
> >   * A way to group together machines based upon common System On Chip
> >     components.
> >   * A SOC_FAMILY by itself is not a fully configured and bootable
> >     machine, but may be used by a series of machines to specify the
> >     common components.
> > * TUNE (typically specified implicitly by the machine)
> >   * Specifies the CPU (instruction set) and ABI settings available to
> >     the user.
> >   * DEFAULTTUNE then selects which of the available tunes is to be
> >     used. This is usually set by the machine.
> > * DISTRO (Distribution Configuration)
> >   * Specifies cross-recipe configurations that together result in an
> >     overall distribution configuration.
> >   * Define DISTRO in the conf/local.conf file in the Build Directory.
> >     If not specified, a default "nodistro" distribution is used.
> > * Recipes (.bb files)
> >   * Specify how to generically build individual items. These recipes
> >     inherit the system wide settings from the distribution
> >     configuration and tune. MACHINE specific packages can also inherit
> >     machine specific settings.
> > * Image Recipes
> >   * Specify a series of dependencies that cause recipes to be built
> >     and a list of resulting packages to be installed into a target
> >     image.
> >   * Image recipes are responsible for constructing a filesystem image.
> >     Further, the system can extend these into a bootable disk image
> >     format.
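> > As a concrete illustration of how these pieces fit together, a minimal
> > conf/local.conf for a homogeneous build might contain (the values here
> > are only examples):

```
# conf/local.conf -- minimal homogeneous build configuration (example values)
MACHINE = "genericx86_64"   # selects tune, boot image settings, etc.
DISTRO  = "poky"            # distribution policy; "nodistro" if left unset
```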
> >
> > In any sort of heterogeneous configuration we want to use and build upon
> > the existing homogeneous components. A heterogeneous solution really
> > comprises a number of homogeneous configurations that, when deployed
> > together, result in a fully functional device. In other words, each of
> > the individual parts of the heterogeneous build is standalone and not
> > tied to the assembly of a specific system.
> >
> > Based on this, we want to avoid any changes that complicate the existing
> > homogeneous build components, or that add additional levels of
> > configuration, as this will complicate existing and future uses. For
> > example, in the past there have been suggestions for SUBMACHINE or other
> > levels of hierarchy between SOC_FAMILY and MACHINE. Adding this level
> > of indirection can make it more difficult to combine different
> > configurations into new heterogeneous solutions. For example, if
> > someone has already defined a heterogeneous solution using a MACHINE,
> > SUBMACHINE, and SOC_FAMILY hierarchy and you wish to extend it, the
> > existing MACHINE / SUBMACHINE may interfere with your own system's
> > unique configuration.
> >
> > There are a few types of heterogeneous systems that I have seen. Each of
> > them can be constructed by combining the output of homogeneous
> > configurations. The most basic heterogeneous systems I have seen include
> > a collection of containers or different OSes, a primary CPU +
> > co-processors, or CPUs that are all independent of each other but
> > share resources. It is also possible to combine these heterogeneous
> > configurations, such as a multiple CPU system with some CPUs having
> > co-processors and one or more CPUs running containers.
> >
> > In the case of the container based system, you really want a master
> > homogeneous machine configuration along with a few additional
> > configurations that can be incorporated into that image. Using a
> > multiconfig setup you would have something like:
> >
> > build (build directory)
> >   conf
> >     local.conf:
> >       MACHINE = "genericx86_64"
> >       DISTRO = "poky"
> >       BBMULTICONFIG = "container1 container2"
> >     multiconfig
> >       container1.conf:
> >         MACHINE = "genericx86_64"
> >         DISTRO = "mydistro1"
> >         TMPDIR = "${TOPDIR}/tmp/multi/container1"
> >       container2.conf:
> >         MACHINE = "genericx86_64"
> >         DISTRO = "mydistro2"
> >         TMPDIR = "${TOPDIR}/tmp/multi/container2"
> > layers
> >   meta-<custom_layer>
> >     conf
> >       distro
> >         mydistro1.conf
> >         mydistro2.conf
> >     recipes-images
> >       microservices
> >         service-image-1.bb
> >         service-image-2.bb
> >       other
> >         my-custom-image-recipe.bb:
> >           do_image[mcdepends] = "mc:container1:service-image-1:do_rootfs \
> >                                  mc:container2:service-image-2:do_rootfs"
> >           do_image() { ... instructions for combining stuff ... }
> >
> >
> > This configuration allows for individual configurations for each
> > container and changes to the multiconfigs, while still allowing general
> > re-use of much of the system, especially if the distros in each one are
> > the same or similar.
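> > With a layout like this, each container image can still be built on its
> > own, or the combined image can be built in one invocation. A sketch,
> > using the standard mc: target syntax (recipe names as in the layout
> > above):

```
# Build a single container image under its multiconfig:
bitbake mc:container1:service-image-1

# Build the combined image; mcdepends pulls in both container rootfs first:
bitbake my-custom-image-recipe
```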
> >
> > Similar to the above, you could use a multiconfig system to combine
> > different operating systems. For instance, instead of building a
> > standalone bare-metal style bootloader as part of the OS
> > configuration, you could think of it as an external non-OS application.
> > A configuration might look like:
> >
> > build
> >   conf
> >     local.conf:
> >       MACHINE = "genericx86_64"
> >       DISTRO = "poky"
> >       BBMULTICONFIG = "bootloader"
> >     multiconfig
> >       bootloader.conf:
> >         MACHINE = "genericx86_64"
> >         DISTRO = "baremetal"
> >         TMPDIR = "${TOPDIR}/tmp/multi/bootloader"
> > layers
> >   meta-mybootloader
> >     conf
> >       distro
> >         baremetal.conf
> >     recipes
> >       newlib
> >         newlib_ver.bb
> >       first_stage
> >         first_stage.bb
> >       second_stage
> >         second_stage.bb
> >       bootloader
> >         bootloader.bb:
> >           DEPENDS = "newlib first_stage second_stage"
> >   meta-<custom_layer>
> >     recipes
> >       bootloader
> >         mybootloader.bb:
> >           depends on mc:bootloader:second_stage:....
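> > The dependency in mybootloader.bb uses the same multiconfig dependency
> > syntax as the earlier example; a sketch of what it might contain (the
> > task names here are only illustrative):

```
# mybootloader.bb (sketch) -- consume the bare-metal second stage built
# under the 'bootloader' multiconfig
do_install[mcdepends] = "mc:bootloader:second_stage:do_deploy"
```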
> >
> >
> > For a simple heterogeneous solution, where the main CPU may need to load
> > software for co-processors, the configuration would be similar to the
> > bare-metal example above. The difference is that the MACHINE and
> > DISTRO settings for a DSP would need the necessary configuration to
> > properly build applications for the DSP. The Linux side of things can
> > then take this bare-metal software and package it up independently, or
> > in conjunction with the code that actually configures and loads it:
> >
> > build
> >   conf
> >     local.conf:
> >       MACHINE = "myarmccpu"
> >       DISTRO = "poky"
> >       BBMULTICONFIG = "dsp"
> >     multiconfig
> >       dsp.conf:
> >         MACHINE = "magic-dsp"
> >         DISTRO = "baremetal"
> >         TMPDIR = "${TOPDIR}/tmp/multi/dsp"
> > layers
> >   meta-dsp
> >     conf
> >       distro
> >         baremetal.conf
> >       machine
> >         magic-dsp.conf
> >     recipes-dsp
> >       newlib
> >         newlib_ver.bb
> >       library
> >         library.bb
> >       application
> >         application.bb:
> >           DEPENDS = "newlib library"
> >     recipes-linux
> >       dsp-application
> >         ...depends on application...
> >
> > Note: in the above, separating the distro, machine and application
> > components into individual layers may be needed for Yocto Project
> > compliance.
> >
> >
> > In all of the above examples, the user has manually configured the
> > multiconfig within their project. There is a simple way to move that
> > configuration to a layer: simply place it in a conf/multiconfig
> > directory within that layer.
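> > For example, a layer shipping its own multiconfigs could look like the
> > following, leaving only BBMULTICONFIG to be set in local.conf (the
> > layer name here is hypothetical):

```
meta-mysystem/
  conf/
    layer.conf
    multiconfig/
      container1.conf
      container2.conf
```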
> >
> > This ability suggests to me that there should be a standard way to
> > specify a layer above that of the machine, one which defines the
> > overall characteristics of the system.
> >
> > I'm proposing calling this new layer type a system layer, with its
> > configuration variable named "SYSTEM". It will be used instead of
> > MACHINE when there is no single machine that describes the contents of
> > the produced system image. When implementing a system, we do not want
> > to make major changes to any other components. Because the existing
> > implementation requires MACHINE and certain TUNE parameters, this will
> > require us to provide a special MACHINE value that can be used for a
> > heterogeneous system. I suggest we create a new 'nomachine' machine
> > that only defines/uses an equivalent noarch style tune. This will
> > instruct the system that this configuration can be used to build
> > noarch software and create images (including with wic), but not to
> > compile machine-specific applications. Each of these applications or
> > images must come from a defined MACHINE.
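> > A rough sketch of what such a 'nomachine' configuration could contain
> > (entirely hypothetical, since this is the proposal itself; the tune
> > name is invented):

```
# conf/machine/nomachine.conf (hypothetical sketch of the proposal)
# No target CPU: only noarch packaging and image assembly are possible.
DEFAULTTUNE = "noarch"          # a noarch-style tune, as proposed above
IMAGE_FSTYPES ?= "wic"          # images (e.g. via wic) can still be created
```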
> >
> > The SYSTEM level multiconfig could be used to combine any homogeneous or
> > heterogeneous configuration. For example:
> >
> > build
> >   conf
> >     local.conf:
> >       SYSTEM = "mysystem"
> > layers
> >   meta-<system>
> >     conf
> >       system
> >         mysystem.conf:
> >           MACHINE = "nomachine"
> >           BBMULTICONFIG = "bootloader fpga linux"
> >         mysystem.wks
> >       multiconfig
> >         bootloader.conf:
> >           MACHINE = "zcu_microblaze"
> >           DISTRO = "baremetal"
> >           TMPDIR = "${TOPDIR}/tmp/multi/bootloader"
> >         fpga.conf:
> >           MACHINE = "zcu_fpga"
> >           DISTRO = "baremetal"
> >           TMPDIR = "${TOPDIR}/tmp/multi/fpga"
> >         linux.conf:
> >           MACHINE = "zcu_cortex-a72"
> >           DISTRO = "poky"
> >           TMPDIR = "${TOPDIR}/tmp/multi/linux"
> >     recipes
> >       images
> >         system-images.bb:
> >           do_image[mcdepends] = "mc:bootloader:application:do_deploy \
> >                                  mc:fpga:application:do_deploy \
> >                                  mc:linux:core-image-minimal:do_rootfs"
> >           do_image() { ... instructions for combining stuff ... }
> >   meta-<machine>
> >     conf
> >       machine
> >         zcu_microblaze.conf
> >         zcu_fpga.conf
> >         zcu_cortex-a72.conf
> >   meta-mybootloader
> >     conf
> >       distro
> >         baremetal.conf
> >     recipes
> >       newlib
> >         newlib_ver.bb
> >       first_stage
> >         first_stage.bb
> >       second_stage
> >         second_stage.bb
> >       bootloader
> >         bootloader.bb:
> >           DEPENDS = "newlib first_stage second_stage"
> >   meta-fpga
> >     conf
> >       distro
> >         baremetal.conf
> >     recipes-baremetal
> >       newlib
> >         newlib_ver.bb
> >       library
> >         library.bb
> >       application
> >         application.bb:
> >           DEPENDS = "newlib library"
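> > With this layout, building the top-level image would drive all three
> > sub-builds through their mcdepends (the SYSTEM mechanism itself is
> > still a proposal):

```
# One invocation assembles bootloader, fpga and linux in their own TMPDIRs,
# then runs do_image in system-images.bb to combine the artifacts:
bitbake system-images
```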
>
> A lot of what you're talking about here we've been doing for a while
> in meta-oryx with SYSTEM_PROFILE and APPLICATION_PROFILE.
>
> https://gitlab.com/oryx/meta-oryx/tree/master
>
> ├── conf
> │ ├── application-profiles
> │ │ ├── full-cmdline.conf
> │ │ ├── host.conf
> │ │ ├── host-mender-update-modules.conf
> │ │ ├── host-test.conf
> │ │ └── minimal.conf
> │ ├── distro
> │ │ └── oryx.conf
> │ ├── layer.conf
> │ └── system-profiles
> │ ├── guest-mender-update-module.conf
> │ ├── native.conf
> │ └── native-mender.conf
>
> In what we're doing, we're mainly focusing on native hosts and
> containerised guests, but there is no reason this couldn't be expanded
> to fit your use case.
To expand on Beth's comments a little:
In Oryx we introduced two new variables, APPLICATION_PROFILE and SYSTEM_PROFILE. These work much like DISTRO and MACHINE. In conf/distro/oryx.conf we have:
require conf/system-profiles/${ORYX_SYSTEM_PROFILE}.conf
require conf/application-profiles/${ORYX_APPLICATION_PROFILE}.conf
The idea is that a SYSTEM_PROFILE can determine how the underlying MACHINE is being used - are we running natively or in a container? Are we booting locally or over the network? etc.
--
Paul Barker