[OE-core] [PATCH 3/3] rm_work.bbclass: clean up sooner

Martin Jansa martin.jansa at gmail.com
Wed Feb 15 18:32:35 UTC 2017


Are all changes necessary for this to work already in master?

Yesterday I noticed that do_rm_work for some components which are early in
the dependency chain (like qtbase) is executed relatively late (together with
do_package_qa).

So I tried a very naive way to find out whether the rm_work tasks are executed
sooner or not, simply by comparing task IDs in builds of the same image built
from scratch (without sstate) with dizzy, morty and current master.

First I stripped the unnecessary prefix and the names of proprietary components
(in case someone wants me to share these lists):
grep "^NOTE: Running task .*, do_rm_work)$" log.build |sed
's#/jenkins/mjansa/build-[^/]*/##;
s#meta-lg-webos/[^:]*:#private-component:#g; s#^NOTE: Running task ##g' >
rm_work.tasks.dizzy

with a slightly different regexp for morty and master:
grep "^NOTE: Running task .*:do_rm_work)$" ld.gold/log.m16p |sed
's#/jenkins/mjansa/build-[^/]*/##;
s#meta-lg-webos/[^:]*:#private-component:#g; s#^NOTE: Running task ##g' >
rm_work.tasks.morty

and then I did an even more naive thing and compared the average task ID of the
rm_work jobs, with the following results:
for i in rm_work.tasks.*; do
  echo $i
  export COUNT=0 SUM=0
  for TASK in `cat $i | cut -f 1 -d\  `; do
    COUNT=`expr $COUNT + 1`
    SUM=`expr $SUM + $TASK`
  done
  echo "AVG = `expr $SUM / $COUNT`; COUNT = $COUNT"
done
rm_work.tasks.dizzy
AVG = 6429; COUNT = 764
rm_work.tasks.master
AVG = 7570; COUNT = 891
rm_work.tasks.master.qemux86
AVG = 5527; COUNT = 665
rm_work.tasks.morty
AVG = 6689; COUNT = 786
rm_work.tasks.morty.gold
AVG = 6764; COUNT = 786
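
For reference, the same per-file average can also be computed in one step with
awk (an untested one-liner, operating on the same rm_work.tasks.* files as
above, where the first field of each line is the task ID):

for i in rm_work.tasks.*; do
  echo $i
  awk '{ sum += $1; n += 1 } END { printf "AVG = %d; COUNT = %d\n", sum / n, n }' $i
done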

rm_work.tasks.morty.gold is the same build as rm_work.tasks.morty, just with
ld-is-gold added to DISTRO_FEATURES (as I was testing build time to compare
ld.bfd and ld.gold in our images).
rm_work.tasks.master.qemux86 is the same build as rm_work.tasks.master, but for
qemux86; all other builds are for an ARM board we use.

Then a few interesting data points:

gcc-cross looks good (not available in the dizzy build, which uses an external
toolchain):
$ grep gcc-cross_ rm_work.tasks.*
rm_work.tasks.master:510 of 14470
(oe-core/meta/recipes-devtools/gcc/gcc-cross_6.3.bb:do_rm_work)
rm_work.tasks.master.qemux86:515 of 10296
(oe-core/meta/recipes-devtools/gcc/gcc-cross_6.3.bb:do_rm_work)
rm_work.tasks.morty:2592 of 12021
(oe-core/meta/recipes-devtools/gcc/gcc-cross_6.2.bb:do_rm_work)
rm_work.tasks.morty.gold:2734 of 12021
(oe-core/meta/recipes-devtools/gcc/gcc-cross_6.2.bb:do_rm_work)

qtdeclarative-native gets its rm_work a bit later, which might be caused only
by the increased number of tasks due to RSS (recipe-specific sysroots):
$ grep native.*qtdeclarative rm_work.tasks.*
rm_work.tasks.dizzy:2101 of 11766 (ID: 11128,
virtual:native:meta-qt5/recipes-qt/qt5/qtdeclarative_git.bb, do_rm_work)
rm_work.tasks.master:2614 of 14470
(virtual:native:meta-qt5/recipes-qt/qt5/qtdeclarative_git.bb:do_rm_work)
rm_work.tasks.master.qemux86:2521 of 10296
(virtual:native:meta-qt5/recipes-qt/qt5/qtdeclarative_git.bb:do_rm_work)
rm_work.tasks.morty:1513 of 12021
(virtual:native:meta-qt5/recipes-qt/qt5/qtdeclarative_git.bb:do_rm_work)
rm_work.tasks.morty.gold:1514 of 12021
(virtual:native:meta-qt5/recipes-qt/qt5/qtdeclarative_git.bb:do_rm_work)

and here is the target qtdeclarative which triggered this whole naive
analysis:
$ grep qtdeclarative rm_work.tasks.* | grep -v native
rm_work.tasks.dizzy:4952 of 11766 (ID: 6670, meta-qt5/recipes-qt/qt5/
qtdeclarative_git.bb, do_rm_work)
rm_work.tasks.master:4317 of 14470
(meta-qt5/recipes-qt/qt5/qtdeclarative_git.bb:do_rm_work)
rm_work.tasks.master.qemux86:10142 of 10296
(meta-qt5/recipes-qt/qt5/qtdeclarative_git.bb:do_rm_work)
rm_work.tasks.morty:6753 of 12021
(meta-qt5/recipes-qt/qt5/qtdeclarative_git.bb:do_rm_work)
rm_work.tasks.morty.gold:6883 of 12021
(meta-qt5/recipes-qt/qt5/qtdeclarative_git.bb:do_rm_work)

If we dismiss the strange case in rm_work.tasks.master.qemux86, then it
seems to perform at least as well as the old completion BB_SCHEDULER.

But I wanted to ask if there is something else we can do, or that you were
planning to do, because IIRC you shared some longer analysis of what could
be improved here and I'm not sure if you managed to implement all of it.

It feels to me that rm_work has high priority, but it is still "blocked" by
e.g. do_package_qa, which gets executed late and is then immediately followed
by rm_work.
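
A quick way to see what do_rm_work actually waits on is to dump the task
dependency graph and grep for its node (qtbase is just an example recipe name
here, and the "recipe.task" node naming in task-depends.dot might differ
between bitbake versions):

bitbake -g qtbase
grep '"qtbase.do_rm_work"' task-depends.dot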

In the ideal case I would really like to have a switch which forces rm_work
to take absolute priority over other tasks; it doesn't take very long to
delete the files in tmpfs, and it would allow me to do tmpfs builds on builders
with less RAM.
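
To illustrate what I mean by "absolute priority" (this is only an untested
sketch based on my reading of bitbake's lib/bb/runqueue.py; the class and
attribute names I rely on, the module name rmwork_first and the string task-id
format are my assumptions, not existing code): a custom scheduler registered
via BB_SCHEDULERS could simply move every do_rm_work task to the front of the
priority map, e.g. in lib/rmwork_first.py on a layer's Python path:

import bb.runqueue

class RunQueueSchedulerRmWorkFirst(bb.runqueue.RunQueueSchedulerSpeed):
    """Like the 'speed' scheduler, but always prefer do_rm_work tasks."""
    name = "rmwork_first"

    def __init__(self, runqueue, rqdata):
        bb.runqueue.RunQueueSchedulerSpeed.__init__(self, runqueue, rqdata)
        # prio_map is the ordered list of task ids; put all do_rm_work
        # tasks first so they run as soon as they become buildable.
        rm_work = [tid for tid in self.prio_map if tid.endswith(':do_rm_work')]
        others = [tid for tid in self.prio_map if not tid.endswith(':do_rm_work')]
        self.prio_map = rm_work + others

and then something like this in local.conf:

BB_SCHEDULERS = "rmwork_first.RunQueueSchedulerRmWorkFirst"
BB_SCHEDULER = "rmwork_first"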

The "state of bitbake world" builds are performed in 74G tmpfs (for whole
tmpdir-glibc) and yesterday's builds started to fail again (when it happens
to run chromium and chromium-wayland at the same time) - the manual
solution for this I'm using for last couple years is to build in "steps"
which force to run rm_work for all included components, so e.g.

bitbake gcc-cross-arm && bitbake small-image && bitbake chromium && \
  bitbake chromium-wayland && bitbake big-image && bitbake world

will keep the tmpfs usage peaks much lower than running just "bitbake world".

On Fri, Jan 6, 2017 at 10:55 AM, Patrick Ohly <patrick.ohly at intel.com>
wrote:

> Having do_rm_work depend on do_build had one major disadvantage:
> do_build depends on the do_build of other recipes, to ensure that
> runtime dependencies also get built. The effect is that when work on a
> recipe is complete and it could get cleaned up, do_rm_work still
> doesn't run because it waits for those other recipes, thus leading to
> more temporary disk space usage than really needed.
>
> The right solution is to inject do_rm_work before do_build and after
> all tasks of the recipe. Achieving that depends on the new bitbake
> support for prioritizing anonymous functions to ensure that
> rm_work.bbclass gets to see a full set of existing tasks when adding
> its own one. This is relevant, for example, for do_analyseimage in
> meta-security-isafw's isafw.bbclass.
>
> In addition, the new "rm_work" scheduler is used by default. It
> prioritizes finishing recipes over continuing with the more
> important recipes (with "importance" determined by the number of
> reverse-dependencies).
>
> Benchmarking (see "rm_work + pybootchart enhancements" on the OE-core
> mailing list) showed that builds with the modified rm_work.bbclass
> were both faster (albeit not by much) and required considerably less
> disk space (14230MiB instead of 18740MiB for core-image-sato).
> Interestingly enough, builds with rm_work.bbclass were also faster
> than those without.
>
> Signed-off-by: Patrick Ohly <patrick.ohly at intel.com>
> ---
>  meta/classes/rm_work.bbclass | 31 ++++++++++++++++++-------------
>  1 file changed, 18 insertions(+), 13 deletions(-)
>
> diff --git a/meta/classes/rm_work.bbclass b/meta/classes/rm_work.bbclass
> index 3516c7e..1205104 100644
> --- a/meta/classes/rm_work.bbclass
> +++ b/meta/classes/rm_work.bbclass
> @@ -11,16 +11,13 @@
>  # RM_WORK_EXCLUDE += "icu-native icu busybox"
>  #
>
> -# Use the completion scheduler by default when rm_work is active
> +# Use the dedicated rm_work scheduler by default when rm_work is active
>  # to try and reduce disk usage
> -BB_SCHEDULER ?= "completion"
> +BB_SCHEDULER ?= "rm_work"
>
>  # Run the rm_work task in the idle scheduling class
>  BB_TASK_IONICE_LEVEL_task-rm_work = "3.0"
>
> -RMWORK_ORIG_TASK := "${BB_DEFAULT_TASK}"
> -BB_DEFAULT_TASK = "rm_work_all"
> -
>  do_rm_work () {
>      # If the recipe name is in the RM_WORK_EXCLUDE, skip the recipe.
>      for p in ${RM_WORK_EXCLUDE}; do
> @@ -97,13 +94,6 @@ do_rm_work () {
>          rm -f $i
>      done
>  }
> -addtask rm_work after do_${RMWORK_ORIG_TASK}
> -
> -do_rm_work_all () {
> -    :
> -}
> -do_rm_work_all[recrdeptask] = "do_rm_work"
> -addtask rm_work_all after do_rm_work
>
>  do_populate_sdk[postfuncs] += "rm_work_populatesdk"
>  rm_work_populatesdk () {
> @@ -117,7 +107,7 @@ rm_work_rootfs () {
>  }
>  rm_work_rootfs[cleandirs] = "${WORKDIR}/rootfs"
>
> -python () {
> +python __anonymous_rm_work() {
>      if bb.data.inherits_class('kernel', d):
>          d.appendVar("RM_WORK_EXCLUDE", ' ' + d.getVar("PN"))
>      # If the recipe name is in the RM_WORK_EXCLUDE, skip the recipe.
> @@ -126,4 +116,19 @@ python () {
>      if pn in excludes:
>          d.delVarFlag('rm_work_rootfs', 'cleandirs')
>          d.delVarFlag('rm_work_populatesdk', 'cleandirs')
> +    else:
> +        # Inject do_rm_work into the tasks of the current recipe such that do_build
> +        # depends on it and that it runs after all other tasks that block do_build,
> +        # i.e. after all work on the current recipe is done. The reason for taking
> +        # this approach instead of making do_rm_work depend on do_build is that
> +        # do_build inherits additional runtime dependencies on
> +        # other recipes and thus will typically run much later than completion of
> +        # work in the recipe itself.
> +        deps = bb.build.preceedtask('do_build', True, d)
> +        if 'do_build' in deps:
> +            deps.remove('do_build')
> +        bb.build.addtask('do_rm_work', 'do_build', ' '.join(deps), d)
>  }
> +# Higher priority than the normal 100, and thus we run after other
> +# classes like package_rpm.bbclass which also add custom tasks.
> +__anonymous_rm_work[__anonprio] = "1000"
> --
> 2.1.4
>