[OE-core] [PATCH v2] bitbake.conf: omit XZ threads and RAM from sstate signatures

Adrian Bunk bunk at stusta.de
Mon Feb 24 17:12:26 UTC 2020


On Mon, Feb 24, 2020 at 04:44:28PM +0000, Richard Purdie wrote:
> On Mon, 2020-02-24 at 15:40 +0200, Adrian Bunk wrote:
> > On Mon, Feb 24, 2020 at 12:59:55PM +0000, André Draszik wrote:
> > > The number of threads used, and the amount of memory allowed
> > > to be used, should not affect sstate signatures, as they
> > > don't affect the result.
> > 
> > Unfortunately they can affect the result.
> 
> I looked into this a bit and its complicated. The threads are used to
> compress chunks and their compression should be deterministic whether
> done serially or in parallel.
> 
> I did some tests and:
> 
> xz <file>
> gave equivalent output to:
> xz <file> --threads=1
> 
> and
> 
> xz <file> --threads=2
> xz <file> --threads=5
> xz <file> --threads=50
> 
> all give output identical to each other, but different from the
> single-threaded output.
> 
> So if we force --threads >=2 we should have determinism?

This was also my guess after reading the manpage,
but no definite answer from me.
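The hypothesis is easy to probe locally. A rough sketch (assumes `xz` and `sha256sum` on PATH; file size and thread counts are arbitrary, and whether the multi-threaded checksums actually match each other may depend on the xz version):

```shell
#!/bin/sh
# Compress the same input with different thread counts and compare
# checksums of the resulting streams. If the single-threaded checksum
# is the odd one out, forcing --threads >= 2 would give determinism.
f=$(mktemp)
head -c 4194304 /dev/urandom > "$f"   # 4 MiB of test input

for t in 1 2 5 50; do
    printf '%2s threads: ' "$t"
    xz -c --threads="$t" "$f" | sha256sum
done
rm -f "$f"
```

Note that the same invocation is always reproducible; the open question is only whether different thread counts partition the input into the same blocks.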

> > > Otherwise, it becomes impossible to re-use sstate from
> > > automated builders on developer's machines (as the former
> > > might execute bitbake with certain constraints different
> > > compared to developer's machines).
> > > ...
> > > -XZ_DEFAULTS ?= "--memlimit=50% --threads=${@oe.utils.cpu_count()}"
> > > ...
> > 
> > Threaded compression can result in slightly worse compression
> > than single-threaded compression.
> > 
> > With memlimit the problem is actually the opposite way,
> > and worse than what you were trying to fix:
> > 
> > When a developer hits memlimit during compression, the documented
> > behaviour of xz is to scale down the compression level.
> > 
> > I assume 50% wrongly gives the same sstate signature no matter how
> > much RAM is installed on the local machine?
> 
> I did some tests locally and I could see different output checksums
> depending on how much memory I gave xz.
> 
> Perhaps we should specify a specific high amount like 1GB?

xz -9 needs 1.25 GB per thread.

And since xz decompression time is roughly proportional to compressed
size, -9 is often wanted since it gives the fastest xz decompression.
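That scaling-down is exactly what breaks reproducibility: a lower compression level produces different compressed bytes for identical input, so any checksum over the archive changes. A minimal illustration using Python's lzma bindings (preset levels stand in for xz's level scaling; the data is arbitrary):

```python
import lzma

# ~1 MiB of compressible test data
data = bytes(range(256)) * 4096

# If xz silently scales the preset down to fit a memory limit, the
# compressed stream changes even though the input is identical -- and
# so does any signature computed over the resulting archive.
hi = lzma.compress(data, preset=9)
lo = lzma.compress(data, preset=1)

print(hi == lo)  # -> False: the streams differ
print(len(hi), len(lo))
```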

> Does anyone know more about the internals and how to have this behave
> "nicely" for our needs?
> 
> FWIW we haven't seen variation on the autobuilder due to this as far as
> I know.

I assume the autobuilders have plenty of RAM per core?

For any reasonably sized machine that doesn't OOM on larger C++ projects,
the memlimit is a no-op and can be dropped.

More problematic might be developers with oldish desktops/laptops with
many cores and little RAM.
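If the numbers were pinned rather than derived from the host, every builder
would hand xz the same options and the signature input would be
machine-independent. A sketch of that idea against the variable from the
quoted patch (the values are illustrative, not a tested recommendation):

```
# Fixed thread count and memory limit instead of cpu_count()/50%,
# so the options -- and hence the output -- don't vary per machine.
XZ_DEFAULTS ?= "--memlimit=2G --threads=4"
```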

> Cheers,
> 
> Richard

cu
Adrian

