[OE-core] [PATCH] classes/rm_work: use the idle I/O scheduler class

Andre McCurdy armccurdy at gmail.com
Fri Jun 17 20:47:25 UTC 2016


On Thu, Jun 16, 2016 at 2:04 AM, Patrick Ohly <patrick.ohly at intel.com> wrote:
> On Tue, 2016-06-14 at 16:18 +0100, Ross Burton wrote:
>> As rm_work is just cleanup it shouldn't starve more important tasks such as
>> do_compile of I/O, so use BB_TASK_IONICE_LEVEL to run the task in the idle
>> scheduler class.
>
> Whether that's desirable depends a lot on the goals for rm_work: when I
> tried to use it for TravisCI to get around some pretty tight disk space
> constraints, I found that do_rm_work was often not scheduled early
> enough because other tasks generating more files had higher priority.
>
> Reducing the IO priority of do_rm_work may have the same effect: it
> runs, but then instead of removing files, the system produces more of
> them, thus increasing the risk of exhausting the disk space.
>
> I suspect a lot of benchmarking will be needed to determine what really
> works well and what doesn't.
>
> I ended up writing a custom scheduler for running under TravisCI:
> https://github.com/01org/meta-intel-iot-security/blob/master/scripts/rmwork.py

I like the idea of BB_NUMBER_COMPILE_THREADS. Would it be appropriate
to add support for that to the default scheduler?
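For reference, my understanding is that the patch under discussion boils down to a one-line setting in rm_work.bbclass along these lines (a sketch based on BB_TASK_IONICE_LEVEL's "class.prio" format, where class 3 is the idle class; check the actual patch for the exact override syntax):

```
# Run do_rm_work in the idle I/O scheduler class (class 3, priority 0),
# so it only gets disk bandwidth when no other task is competing for it.
# Note: BB_TASK_IONICE_LEVEL only has an effect with the cfq I/O scheduler.
BB_TASK_IONICE_LEVEL_task-rm_work = "3.0"
```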

> That orders do_rm_work before any other task and also orders all tasks
> related to a single recipe so that they run together, thus making it
> possible to clean up after do_build sooner. As an additional tweak it
> distinguishes between "compile" and "cleanup" tasks and can run
> "cleanup" tasks when the normal scheduler wouldn't because
> BB_NUMBER_THREADS is reached.
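
The ordering part of that policy can be sketched in a few lines of Python
(hypothetical task-ID format and function name, not code taken from
rmwork.py):

```python
# Sketch of the ordering idea: do_rm_work jumps the queue, and all other
# tasks are grouped by recipe so a recipe's tasks run close together and
# its work directory can be removed sooner.

def order_tasks(tasks):
    """tasks: iterable of 'recipe:do_taskname' strings (illustrative format)."""
    def key(tid):
        recipe, task = tid.split(":", 1)
        # rm_work tasks sort first (0), everything else after (1);
        # within each group, sorting by recipe keeps a recipe's tasks together.
        return (0 if task == "do_rm_work" else 1, recipe)
    return sorted(tasks, key=key)
```

A real scheduler would of course still have to respect inter-task
dependencies; this only shows the tie-breaking among runnable tasks.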
>
> But it has the same problem: not enough benchmarking to really quantify
> the effect. All I know is that I stopped running out of disk space under
> TravisCI ;-}
>
> --
> Best Regards, Patrick Ohly
>
> The content of this message is my personal opinion only and although
> I am an employee of Intel, the statements I make here in no way
> represent Intel's position on the issue, nor am I authorized to speak
> on behalf of Intel on this matter.
>
>
>
> --
> _______________________________________________
> Openembedded-core mailing list
> Openembedded-core at lists.openembedded.org
> http://lists.openembedded.org/mailman/listinfo/openembedded-core


