[oe] Performance measurement: Building openembedded-core with and without overclocking on C-i7 2600K

Raffaele Recalcati lamiaposta71 at gmail.com
Sat Oct 29 14:23:08 UTC 2011


Hi,

> Our main build server has two Xeons X5670 with 96 GB of RAM. It's running 24×7
> cyclically doing clean isolated builds of all our OE projects (currently five).
> "Isolated" means that the actual build is done inside a container (LXC)
> with no network available at all, that is a requirement for all our builds, to
> be able to take one tarball of sources, one tarball of OE tree and build the
> project somewhere in a nuclear bunker.

So, LXC sounds like a better choice than qemu performance-wise, doesn't it?
Do you see any portability issues (with next years' operating systems) when moving the OE setup around?
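
Just to check I understand the setup, I imagine the container part is roughly
something like this (a rough sketch only, container name and paths are invented):

  # oe-build.conf
  #   lxc.utsname      = oe-build
  #   lxc.network.type = empty              # no network inside the container
  #   lxc.rootfs       = /srv/lxc/oe-build  # could live on the tmpfs
  lxc-create -n oe-build -f oe-build.conf
  lxc-start  -n oe-build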

> The system is configured in such way that container is located on tmpfs (with
> size of 50 GB). The most complex build takes about hour and a half on this
> system occupying about 45 GB of space in a ramdisk.

I'm now building on an 8-core Xeon 5530 (HP Z800) with 24 GB of DDR3,
with a 20 GB tmpfs that holds only the tmp directory.
The recipes dir is on an SSD with 480 MB/s write speed.
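
In practice it is just something like this (paths here are only an example):

  # 20 GB tmpfs for the build output
  mount -t tmpfs -o size=20G tmpfs /home/build/oe/tmp
  # or permanently in /etc/fstab:
  # tmpfs  /home/build/oe/tmp  tmpfs  size=20G  0  0

  # and point the build at it in conf/local.conf:
  # TMPDIR = "/home/build/oe/tmp"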

> So today I tried to get some RAM vs. Linux cache statistics and switched this
> mount point over to newly created 60 GB LVM partition with ext4 on RAID0 array
> consisting of two SAS 15K drives.
>
> The system made builds for three projects in this configuration and I see no
> difference at all, usual 1-2 minutes deviation. Granted, the system has quite
> powerful disks (RAID array gives about 380 MB/sec on hdparm) and things might
> be a little different on plain SATA drives, but frankly I'd expected to see
> the difference anyway since there are lots of small files involved in a build.

You say you see no big difference between the all-tmpfs setup and the SAS one?
I have SAS 15K drives in RAID1 instead, but I don't remember their speed right now.
Moving from a complete SAS 15K setup to a tmp dir in RAM has instead
reduced the build of my image from 4h to 1.5h.
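
I should measure the disks again the same way you did with hdparm, e.g.
(device names are just an example):

  # raw sequential read speed of the array vs. a single disk
  hdparm -t /dev/md0
  hdparm -t /dev/sda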

> Maybe I should try to further degrade the disk system by creating some
> encrypted volume inside LVM, but still from what I see Linux caching and
> buffering works good enough, just give it as much RAM as you can.
>
> But then also what you'll get from RAM or disk or even CPU upgrade depends on
> what type of build you have. Upgrading developers build servers from pair of
> 4-core Opterons (don't remember exact model) with 8 GB of RAM to pair of Xeons
> E5620 with 24 GB of RAM with comparable disks gave about 20-30% of build time
> reduction for one project and 50% for another. But those builds are not
> isolated and use icecc cluster with all build servers available to the
> cluster, maybe that helps also in our situation.
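
(As a side note on icecc: if I remember correctly, enabling it on the OE side
is roughly just this in conf/local.conf, see icecc.bbclass for the details;
the scheduler and the daemons on the other boxes are configured outside OE:

  INHERIT += "icecc"
  ICECC_PATH = "/usr/bin/icecc"
  PARALLEL_MAKE = "-j 16"
)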

In my opinion the best setup is one that I can move from one box to
another without problems, to be safe in case a box stops working.
If a box stops I go to another one, rsync everything down from the backup
server and restart.
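
Nothing fancy, roughly this (host name and paths are just an example):

  rsync -aH --delete backup-server:/backup/oe-build/ /work/oe-build/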

Redundancy of CPU power would be a different approach, but your "nuclear
bunker" constraints don't leave room for that.
In the past I was thinking of getting a 15k€ build system with 2
computational nodes and fast storage, with a standard Debian on it.
A "small" blade center.
A developer sshes into it, can run meld through an NFS mount, start his
own tagged qemu and compile his image.
If one node crashes, the other one keeps working anyway.
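
Roughly the session I had in mind (all the names are invented):

  ssh build-node1
  mount -t nfs storage:/export/projects /mnt/projects   # normally from fstab
  meld /mnt/projects/proj-v1 /mnt/projects/proj-v2 &
  bitbake my-image && runqemu qemux86   # or plain qemu-system-* on the result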

But now we are instead moving from 8 cores to 16 cores (an IBM box, I don't
remember the model, something about 7000k€ full price w/o discount) with
64 GB of RAM, in order to be able to run at least two compilations at the
same time.
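
The idea would be to give each build half of the box, e.g. something like
this in each project's conf/local.conf:

  BB_NUMBER_THREADS = "8"
  PARALLEL_MAKE = "-j 8"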

Anyway it is difficult to convince developers to use a central machine
instead of their own awful PCs.
It's a hard job, but it's needed to speed up the stupid compilation job.

To complete the redundancy idea, I'll soon have the 16-core box running Arch
Linux with qemu managed through virt-manager.
If I have a problem with the 16-core box I'll move the qemu image in use over
to the 8-core one and go on.
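
Something like this, from memory (libvirt paths are just an example):

  scp 16core-box:/var/lib/libvirt/images/build-vm.qcow2 /var/lib/libvirt/images/
  scp 16core-box:/etc/libvirt/qemu/build-vm.xml /tmp/
  virsh define /tmp/build-vm.xml
  virsh start build-vm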

Bye,
Raffaele

-- 
www.opensurf.it



