[OE-core] Wind River Continuous Integration project

Randy MacLeod randy.macleod at windriver.com
Fri Oct 20 22:07:18 UTC 2017


On 2017-10-20 02:18 PM, Konrad Scherer wrote:
> 
> One of the common challenges of Yocto development is the setup and 
> maintenance of a build cluster. If only there was a generic open source 
> Yocto continuous integration project to make the setup and maintenance 
> of build clusters easy...
> 
> I have a working prototype and it's on GitHub.
> 
> https://github.com/WindRiver-OpenSourceLabs/ci-scripts
> 
> It is a set of scripts and docker images for building Yocto images using 
> Jenkins and Docker. There are three main features that I think you will 
> find interesting:

Konrad and I work together so my comments here might be biased. :)

I hadn't been involved in the ci-scripts project until I tried
it out yesterday. Without Konrad's guidance, I was able to
install the system and run a core-image-minimal build. The
set-up took ~ 10 minutes of my time, including reading some of
the Docker docs and installing packages; it was then another
10-15 minutes of downloading the images and so on over my
slow network.

I ran it on an old Core 2 system with 4 GB RAM, one of those
old, slow, spinning magnetic disks, and a slowish network. A
build in the Ubuntu container in Docker finished in the same
time as a native build:
   3hr 22min for core-image-minimal, poky/pyro.
Such slow hardware!

Btw, I recently got a Google Cloud Platform (GCP) offer of
$300 worth of GCP time. I set up a reasonable VM, and on that
Haswell, 4-core system a build of core-image-minimal took
about 60 minutes. Finally, for comparison's sake, on our
128 vCore system core-image-minimal takes ~ 30 minutes.

The cost for the build was 62 cents:

Resource               Usage            Amount
Custom instance Core   658.42 Minute    $0.45
Custom instance RAM     21.95 GiB-hour  $0.12
Storage PD Capacity    959.78 GiB-day   $0.05

so a 10-node cluster running developer builds around the clock
would cost roughly:
   $ 148/day
   $ 54K/year
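The extrapolation works out as follows (a sketch: it assumes each
node runs one ~60-minute build per hour, around the clock, at the
~$0.62/build rate from the invoice above):

```python
# Rough cost extrapolation for a build cluster, based on the GCP
# invoice above: one core-image-minimal build cost about $0.62
# and took about an hour on a 4-core Haswell VM.
cost_per_build = 0.45 + 0.12 + 0.05   # core + RAM + storage, in USD

nodes = 10
builds_per_node_per_day = 24          # assume ~1 build/hour, around the clock

cost_per_day = cost_per_build * builds_per_node_per_day * nodes
cost_per_year = cost_per_day * 365

print(f"${cost_per_day:.2f}/day")            # -> $148.80/day
print(f"${cost_per_year / 1000:.0f}K/year")  # -> $54K/year
```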

There would of course be ways to reduce this cost using
sstate, ccache, etc.
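For example (a sketch; the mirror host here is made up), pointing
the builders at a shared sstate mirror and enabling ccache in
local.conf lets most builds reuse earlier results instead of
rebuilding from scratch:

```
# conf/local.conf fragment (sketch -- sstate.example.com is hypothetical)
SSTATE_MIRRORS = "file://.* http://sstate.example.com/sstate-cache/PATH"
INHERIT += "ccache"
```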

I plan to set up a cluster to test Docker Swarm at some
point.




> 1) Multi-host builds using Docker Swarm. This provides an easy way to
> scale the build cluster from one to tens of machines. Docker Swarm makes 
> this surprisingly simple.
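For anyone who hasn't tried Swarm: the basic setup really is just a
couple of commands (a sketch from the Docker docs; the IP address and
token are placeholders):

```shell
# On the manager node (address is an example):
docker swarm init --advertise-addr 192.168.1.10

# 'swarm init' prints a join token; run the join on each worker node:
docker swarm join --token <TOKEN> 192.168.1.10:2377

# Back on the manager, verify that all nodes are visible:
docker node ls
```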

The setup was pretty straightforward for the single-host mode.

I'm using Fedora 26. It did require following the docker links
and making some decisions about how to install docker.
I went with adding the Docker CE rpm feed to my system
and then using the horrid curl-fetches-a-script approach
for Docker Compose.
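For anyone following along, the curl approach I mean is the one from
the Compose install docs, something like this (the version number will
vary; check the releases page rather than trusting this one):

```shell
# Fetch the docker-compose binary for this platform and make it executable.
curl -L "https://github.com/docker/compose/releases/download/1.16.1/docker-compose-$(uname -s)-$(uname -m)" \
    -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```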

Konrad, any idea why the Docker team doesn't have an rpm feed for
Docker Compose?

> 
> 2) Developer builds. This enables build testing of patches before they
> are committed to the main branches. It leverages the WR setup[1]
> program and a temporary layerindex to assemble a custom project that
> matches the developer's local project.

I haven't tried that yet, but we used this feature in our previous
system at WR and it really helps: a team can dispatch tens of builds
for various configurations across the network to test their work.
I'm sure many people in the community would like to be able to do
the same thing, if we can figure out how to make the hardware
available without blowing the YP/LF budget!

> 
> 3) Toaster integration. A simple UI to dynamically expose the Toaster
> interface of all in progress builds.

I haven't done this yet.

> 
> We are deploying this internally at Wind River and I hope that the 
> oe-core community will be interested in collaborating on its development.
> 
> Please have a look and give it a try. All feedback welcome!

I spoke with Konrad and he needs to change a default so
that builds aren't removed:
   https://github.com/WindRiver-OpenSourceLabs/ci-scripts/issues/1
I guess I could fix that myself but it's the weekend now.


Looks like a good start to me.

../Randy

> 
> [1]: https://github.com/Wind-River/wr-lx-setup
> 


-- 
# Randy MacLeod.  WR Linux
# Wind River an Intel Company


