[OE-core] Improving Build Speed

Ulf Samuelsson ulf at emagii.com
Thu Nov 21 07:15:08 UTC 2013


On 2013-11-21 01:19, Martin Jansa wrote:
> On Wed, Nov 20, 2013 at 11:43:13PM +0100, Ulf Samuelsson wrote:
>> On 2013-11-20 22:29, Richard Purdie wrote:
>> Another idea:
>>
>> I suspect that there is a lot of unpacking and patching of target
>> recipes while the native stuff is built.
>> Does it make sense to have multiple threads reading the disk for
>> the target recipes during the native build, or will we just lose out
>> due to seek time?
>>
>> Having multiple threads accessing the disk might force the disk to spend
>> most of its time seeking.
>> I found an application which measures seek performance:
>> my WD Black will do 83 seeks per second, and my SAS disk will do
>> twice that.
>> A RAID of two SAS disks provides close to SSD throughput (380 MB/s),
>> but seek time is no better than that of a single SAS disk.
>>
>> Since there is "empty time" at the end of the native build, does it make
>> sense to minimize unpack/patch of target stuff until we reach that point,
>> and then let loose?
> In my benchmarks, increasing PARALLEL_MAKE up to the number of cores
> significantly improved build time, but BB_NUMBER_THREADS had minimal
> influence somewhere above 6 or 8 (tested on various systems; only 4 was
> optimal on my older RAID-0, and 2 on a single disk).
> Of course the results were quite different for a clean build without
> prepopulated sstate versus a build where most of the stuff was reused
> from sstate.
>
> see http://wiki.webos-ports.org/wiki/OE_benchmark

How many cores do you have in your build machine?
I started a build, and after 20 minutes it had completed 1500 tasks using:

PARALLEL_MAKE     = "-j24"
BB_NUMBER_THREADS = "6"

Then I decided to kill it.

When I did
PARALLEL_MAKE     = "-j12"
BB_NUMBER_THREADS = "24"

It completed 2000 tasks in less than half the time.

This does not use tmpfs though.
Do you have any comparison between tmpfs builds and RAID builds?

I currently do not use INHERIT += "rm_work",
since I want to be able to make changes to some packages.
Is there a way to define rm_work on a per-package basis?
Then the work directories of the majority of packages could be removed.

I use 75 GB without "rm_work".
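
If RM_WORK_EXCLUDE in rm_work.bbclass does what I think it does, something
like this in local.conf might be enough (untested; the recipe names below
are just examples for whatever I happen to be working on):

INHERIT += "rm_work"
# Keep the work directories of the recipes I am actively changing
RM_WORK_EXCLUDE += "busybox linux-yocto"

That would let the bulk of the 75 GB be cleaned up while keeping the work
directories I actually need around.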


BR
Ulf
>
>> ========================
>>
>> Now with 48 GB of RAM (which I might grow to 96 GB, if someone proves that
>> this makes it faster), this might be useful to speed things up.
>>
>> Can tmpfs beat the kernel cache system?
>>
>> 1.    Typically, I work on fewer than 10 recipes, and if I continuously
>>           rebuild those, why not create their build directories as links
>>           to a tmpfs file system?
>>           Maybe a configuration file with a list of recipes to build on
>>           tmpfs.
>>
>>           During a build from scratch, this is not so useful, but once
>>           most stuff is in place, it might be.
>>
>> 2.     If the downloads directory were shadowed in a tmpfs file system,
>>           then there would be less seek time during the build.
>>           The downloads tmpfs should be populated at boot time,
>>           and rsynced to a real disk in the background when new stuff
>>           is downloaded from the internet.
>>
>> 3.     With 96 GB of RAM, maybe the complete build directory will fit.
>>           Would be nice to build everything on tmpfs, and automatically rsync
>>           to a real disk when there is nothing else to do...
>>
>> 4.     If tmpfs is not used, it would still be good to have better
>>           control over the build directory.
>>           It makes sense to me to have the metadata on an SSD, but the
>>           build directory should be on my RAID array for fast rebuilds.
>>           I can set this up manually, but it would be better to be able
>>           to specify this in a configuration file.
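
Thinking about point 4 again: most of it can probably already be done by
pointing the main directories at different mounts in local.conf. A rough,
untested sketch (the paths here are only examples for my setup):

# Build output on the RAID array
TMPDIR = "/raid/oe/tmp"
# Downloads and shared state on the SSD; DL_DIR could equally point at a
# tmpfs mount for the shadowing idea in point 2
DL_DIR = "/ssd/oe/downloads"
SSTATE_DIR = "/ssd/oe/sstate"
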
>>
> See
> http://www.mail-archive.com/yocto@yoctoproject.org/msg14879.html
>


-- 
Best Regards
Ulf Samuelsson
ulf at emagii.com
+46 722 427437



