[bitbake-devel] [Openembedded-architecture] Multi-configuration builds

nick xerofoify at gmail.com
Tue Jun 21 19:18:44 UTC 2016



On 2016-06-21 02:48 PM, Koen Kooi wrote:
> 
> 
>> Op 21 jun. 2016 om 20:37 heeft nick <xerofoify at gmail.com> het volgende geschreven:
>>
>>
>>
>>> On 2016-06-21 09:26 AM, Koen Kooi wrote:
>>>
>>>> Op 20 jun. 2016, om 05:14 heeft nick <xerofoify at gmail.com> het volgende geschreven:
>>>>
>>>>
>>>>
>>>>> On 2016-06-19 10:46 PM, Trevor Woerner wrote:
>>>>>> On Fri 2016-06-10 @ 05:13:43 PM, Richard Purdie wrote:
>>>>>>> On Fri, 2016-06-10 at 12:07 -0400, Trevor Woerner wrote:
>>>>>>>> On Fri 2016-06-10 @ 04:33:29 PM, Richard Purdie wrote:
>>>>>>>> A few people have asked about multi-machine builds.
>>>>>>>
>>>>>>> Do you envision each config also pointing to individual bblayer
>>>>>>> configurations? I.e. if I'm building for 3 different MACHINEs, with 3
>>>>>>> different configs (local.conf?), then there would also be 3 different
>>>>>>> bblayers.conf's?
>>>>>>
>>>>>>
>>>>>> No, there is one local.conf and one bblayers.conf file and then three
>>>>>> different multiconfig files, each one of which sets a different
>>>>>> MACHINE.
>>>>>>
>>>>>> Would people really want to support different bblayer files? That would
>>>>>> complicate things quite a lot :/.
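
For reference, the layout Richard describes would look roughly like the sketch
below; the configuration names and MACHINE values are only illustrative, and
the exact target prefix may differ between bitbake releases:

    # conf/local.conf (shared), listing the extra configurations to load
    BBMULTICONFIG = "raspi2 raspi3 minnow"

    # conf/multiconfig/raspi2.conf
    MACHINE = "raspberrypi2"

    # conf/multiconfig/raspi3.conf
    MACHINE = "raspberrypi3"

    # conf/multiconfig/minnow.conf
    MACHINE = "intel-corei7-64"

    # then a target can be built against a specific configuration, e.g.
    $ bitbake multiconfig:raspi3:core-image-minimal
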
>>>>>
>>>>> Personally I have a common "Downloads" directory (this is probably quite normal).
>>>>>
>>>>> Then, I have a common "layers" directory in which I checkout every layer of
>>>>> which I'm aware. I also have a script that I run manually from time to time to
>>>>> keep each layer up to date (although it's capable of running any general git
>>>>> command on each git repository it finds one level beneath it):
>>>>>    https://github.com/twoerner/oe-misc/blob/master/scripts/gitcmd.sh
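
I haven't looked at the script itself, but the general shape of such a helper
is just a loop over the immediate subdirectories that passes the git arguments
through, something like:

    for d in */; do
        [ -d "$d/.git" ] || continue
        ( cd "$d" && git "$@" )
    done
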
>>>>>
>>>>> I then create separate directories for each platform for which I'm interested
>>>>> in building (e.g. raspi2, raspi3, minnow, dragon, etc...). In each of those
>>>>> directories I have separate local.conf, bblayers.conf, sstate-cache, and tmp
>>>>> directories.
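
Concretely, that arrangement corresponds to each build directory carrying its
own conf/, e.g. something like this in raspi3/conf/local.conf (the paths here
are only an illustration):

    MACHINE    = "raspberrypi3"
    DL_DIR     = "/path/to/shared/Downloads"   # common downloads directory
    SSTATE_DIR = "${TOPDIR}/sstate-cache"      # per-build-directory sstate
    TMPDIR     = "${TOPDIR}/tmp"               # per-build-directory tmp
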
>>>>>
>>>>> I know most will disagree with this arrangement (especially the separate
>>>>> sstate-cache directories) but it's a system that has evolved over time, each
>>>>> decision was made based on experience, and it works great for me!
>>>>>
>>>>> It's been my experience that having too many layers in a build slows down the
>>>>> initial parsing stage noticeably and too often layers don't play well with
>>>>> each other. Also *many* build issues after an update can be fixed by blowing
>>>>> away tmp *and* sstate and starting over. Often, building for a particular
>>>>> board requires particular tweaks to local.conf (whether to enable a vendor
>>>>> license or to enable specific hardware/features) which don't apply to other
>>>>> boards and builds.
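
The sort of per-board tweak meant here is usually only a line or two in that
board's local.conf, for example (values purely illustrative, and the exact
variable names depend on the BSP layer):

    LICENSE_FLAGS_WHITELIST = "commercial"
    MACHINE_FEATURES_append = " bluetooth wifi"
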
>>>>>
>>>>> I'm happy with the speed of my builds, and I have enough disk space to
>>>>> maintain the multiple sstates/tmps/etc. *Most* of my builds are
>>>>> core-image-full-cmdline-type builds and I can crank one of those out from
>>>>> scratch in 20 minutes (assuming the majority of sources have already been
>>>>> downloaded). Although I do sometimes need chromium (which takes an hour on its
>>>>> own) and I used to do qt (which is also quite painful). So I can understand
>>>>> how sstate might be more useful to others, but for me, not so much.
>>>> I second Trevor on this: unless you're building GUI or media based packages, sstate
>>>> is not very useful if you have a modern system with a 4 to 8 core CPU and 8 to
>>>> 16GB of RAM. However, if you're just making small tweaks to the same board and
>>>> testing, it may be of use, as I use something similar called ccache when building
>>>> the kernel. Again, as Trevor stated, you may want to benchmark the results and see
>>>> whether sstate actually decreases your build time by a significant margin, i.e.
>>>> cuts it by more than half. Otherwise I would agree with Trevor and just not worry
>>>> about sstate.
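
For what it's worth, one way to benchmark that (commands illustrative) is to
build once to populate sstate, wipe tmp, rebuild and compare the times:

    $ time bitbake core-image-full-cmdline
    $ rm -rf tmp
    $ time bitbake core-image-full-cmdline   # second run restores from sstate

and for ccache-style compiler caching inside the build, OE also ships a class
that can be enabled with INHERIT += "ccache" in local.conf.
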
>>>
>>> My CI run that builds a basic image for about 30 machines drops from ~16 hours to about 1 hour after the first build with most of the remaining time spent in:
>>>
>>> 1) xz’ing the images
>>> 2) importing prserv-export.conf
>>> 3) parsing
>>>
>>> That's with WORKDIR in tmpfs or on an NVMe SSD, SSTATE_DIR and DL_DIR on spinning-rust RAID5, and metadata on a regular SSD.
>>>
>>> regards,
>>>
>>> Koen
>> Koen,
>> I don't know the bitbake commands that well, but to my knowledge there is a profile option that can tell you where your
>> build is taking the most time. However, why not try moving your whole project setup to the NVMe SSD if there is enough
>> space on it? That may help improve the speed of your builds even further.
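
The profiling options I had in mind, from memory so worth double-checking, are:

    $ bitbake -P <target>     # profiles bitbake itself, e.g. parse/runqueue time
    INHERIT += "buildstats"   # in local.conf: per-task timing under tmp/buildstats/
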
> 
> Nothing will be faster than tmpfs, and the RAID array is faster than pigz/xz/bzip2 can process.
> 
That's true, so it seems fine to me, but are you actually happy with it, or just curious whether it's normal?
It seems pretty normal to me, but then again I haven't tested on an NVMe/tmpfs setup with a RAID. You
could always upgrade to a CPU with more cores, but it probably won't build much faster, maybe twice as
fast at most, depending on the processor you upgrade to.
Nick
>> Hope this helps,
>> Nick


