[OE-core] [PATCH 3/3] rm_work.bbclass: clean up sooner

Martin Jansa martin.jansa at gmail.com
Wed Mar 1 15:52:43 UTC 2017


On Thu, Feb 16, 2017 at 11:26:54AM +0100, Patrick Ohly wrote:
> On Wed, 2017-02-15 at 19:32 +0100, Martin Jansa wrote:
> > Are all changes necessary for this to work already in master?
> 
> Yes.
> 
> > Yesterday I've noticed that rm_work for some components which are
> > early in the dependency (like qtbase) are executed relatively late
> > (together with do_package_qa).
> 
> Could do_rm_work run before do_package_qa? rm_work.bbclass doesn't know
> that, and therefore schedules do_rm_work after do_package_qa.
> 
> If yes, then adding a list of tasks that can be ignored would be
> trivial. This can be a variable, so a recipe can even add their own
> ones, if necessary.

That's not what I meant.

I believe that rm_work needs to be executed after do_package_qa, but I
don't understand the scheduler code well enough (at least not yet) to say
whether the higher priority of the rm_work task also causes the tasks
rm_work depends on (e.g. do_package_qa) to be executed sooner.

From my observation it looks like do_package_qa is still executed
"late", but is immediately followed by rm_work thanks to its high priority
(so rm_work runs as soon as it can, but that is still late in the progress
of the whole build).
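
A quick way to see that from the console log is to grep for both tasks of
one recipe and compare the task IDs in the "Running task N of M" NOTE
lines (log name here is just an example):

# grep -E ':do_(package_qa|rm_work)\)' log.image | grep qtbase_

If the two IDs are consecutive, do_rm_work really is only waiting for
do_package_qa.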

Another interesting test from today was to run:
# rm -rf tmp-glibc/*
# bitbake -n zlib | tee log.zlib.rm_work
# cd oe-core; git revert -1 936179754c8d0f98e1196ddc6796fdfd72c0c3b4; cd ..
# rm -rf tmp-glibc/*
# bitbake -n zlib | tee log.zlib.rm_work.revert

and it shows an interesting difference: many rm_work tasks aren't
executed at all:

# grep rm_work log.zlib.rm_work* | grep zlib_
log.zlib.rm_work:NOTE: Running task 526 of 527 (/OE/build/oe-core/openembedded-core/meta/recipes-core/zlib/zlib_1.2.8.bb:do_rm_work)
log.zlib.rm_work.revert:NOTE: Running task 128 of 721 (virtual:native:/OE/build/oe-core/openembedded-core/meta/recipes-core/zlib/zlib_1.2.8.bb:do_rm_work)
log.zlib.rm_work.revert:NOTE: Running task 717 of 721 (/OE/build/oe-core/openembedded-core/meta/recipes-core/zlib/zlib_1.2.8.bb:do_rm_work)
log.zlib.rm_work.revert:NOTE: Running task 721 of 721 (/OE/build/oe-core/openembedded-core/meta/recipes-core/zlib/zlib_1.2.8.bb:do_rm_work_all)

# grep rm_work log.zlib.rm_work* | grep gcc
log.zlib.rm_work.revert:NOTE: Running task 2 of 721 (/OE/build/oe-core/openembedded-core/meta/recipes-devtools/gcc/gcc-source_6.3.bb:do_rm_work)
log.zlib.rm_work.revert:NOTE: Running task 240 of 721 (/OE/build/oe-core/openembedded-core/meta/recipes-devtools/gcc/gcc-cross-initial_6.3.bb:do_rm_work)
log.zlib.rm_work.revert:NOTE: Running task 250 of 721 (/OE/build/oe-core/openembedded-core/meta/recipes-devtools/gcc/libgcc-initial_6.3.bb:do_rm_work)
log.zlib.rm_work.revert:NOTE: Running task 634 of 721 (/OE/build/oe-core/openembedded-core/meta/recipes-devtools/gcc/gcc-cross_6.3.bb:do_rm_work)
log.zlib.rm_work.revert:NOTE: Running task 674 of 721 (/OE/build/oe-core/openembedded-core/meta/recipes-devtools/gcc/libgcc_6.3.bb:do_rm_work)
log.zlib.rm_work.revert:NOTE: Running task 678 of 721 (/OE/build/oe-core/openembedded-core/meta/recipes-devtools/gcc/gcc-runtime_6.3.bb:do_rm_work)

# grep -c rm_work log.zlib.rm_work*
log.zlib.rm_work:1
log.zlib.rm_work.revert:63
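
To quantify how early each do_rm_work runs relative to the whole build,
the task IDs from those NOTE lines can be turned into a ratio; rough
sketch, assuming exactly the "Running task N of M" format above (and the
same for log.zlib.rm_work):

# grep 'do_rm_work)' log.zlib.rm_work.revert | \
    awk '{ printf "%.2f %s\n", $4 / $6, $7 }' | sort -n

0.00 means the workdir is cleaned right at the start of the build, 1.00
right at the end.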

I'll check if it's something in my setup or if this happens everywhere now.

> > So I've tried very naive way to find out if the rm_work tasks are
> > executed sooner or not just by comparing Task IDs in build of the same
> > image built from scratch (without sstate) with Dizzy, Morty and
> > current master.
> 
> Interesting, I hadn't thought of testing it like that.
> 
> > If we dismiss the strange case in rm_work.tasks.master.qemux86 then it
> > seems to perform at least as well as the old completion BB_SCHEDULER.
> > 
> > 
> > But I wanted to ask if there is something else we can do or you were
> > planning to do, because IIRC you shared some longer analysis of what
> > could be improved here and I'm not sure if you managed to implement it
> > all.
> 
> The other ideas that I mentioned at some point didn't pan out as
> intended. In particular allowing do_rm_work tasks to run when the normal
> task limit was reached didn't have a big effect and the implementation
> was a hack, so I dropped that.
> 
> > It feels to me that rm_work has high priority, but still it's
> > "blocked" by e.g. do_package_qa which gets executed late and then
> > immediately followed by rm_work.
> 
> That should be easy to change, perhaps like this (untested):
> 
> RM_WORK_TASKS_WHITELIST = "do_build do_package_qa"
> 
>         deps = set(bb.build.preceedtask('do_build', True, d))
>         whitelist = d.getVar('RM_WORK_TASKS_WHITELIST').split()
>         deps.difference_update(whitelist)
>         # In practice, addtask() here merely updates the dependencies.
>         bb.build.addtask('do_rm_work', 'do_build', ' '.join(deps), d)
> 
> 
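
For reference, wiring that into rm_work.bbclass would presumably look
something like this (untested sketch, variable name and default value
taken from your example):

RM_WORK_TASKS_WHITELIST ?= "do_build do_package_qa"

python () {
    # Make do_rm_work depend on everything that normally precedes
    # do_build, except for the tasks we explicitly allow to run after
    # the cleanup.
    deps = set(bb.build.preceedtask('do_build', True, d))
    whitelist = (d.getVar('RM_WORK_TASKS_WHITELIST') or "").split()
    deps.difference_update(whitelist)
    # In practice, addtask() here merely updates the dependencies.
    bb.build.addtask('do_rm_work', 'do_build', ' '.join(deps), d)
}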
> -- 
> Best Regards, Patrick Ohly
> 
> The content of this message is my personal opinion only and although
> I am an employee of Intel, the statements I make here in no way
> represent Intel's position on the issue, nor am I authorized to speak
> on behalf of Intel on this matter.
> 
> 
> 

-- 
Martin 'JaMa' Jansa     jabber: Martin.Jansa at gmail.com