[OE-core] Illustration of good and bad QA test output

Paul Eggleton paul.eggleton at linux.intel.com
Tue Sep 8 10:27:49 UTC 2015


On Tuesday 08 September 2015 09:53:31 Richard Purdie wrote:
> A while back, I asked for a review of test output when things fail as it
> was causing problems. I suspect some people don't really understand why
> this is a big deal. I'd therefore like to illustrate this with a new
> example:
> 
> https://autobuilder.yoctoproject.org/main/builders/nightly-oe-selftest/builds/180/steps/Running%20oe-selftest/logs/stdio
> 
> This is a failure we encountered on the autobuilder. It basically tells
> me that "1 is not 0". I did see this failure in master-next but I had
> no way of knowing what was causing it, whether it was a patch in -next or
> whether it came from somewhere else. I therefore had to move things
> forward and merge -next.
> 
> Locally, I added this change:
> 
> http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/t222&id=c8134250f72d5102f96137265ae71cf8f2eff5a9
> 
> Amongst the build output that this change shows when the test fails, I see:
> 
> WARNING: Failed to fetch URL file://d1/sstate:m4::1.4.17:r0::3:d1c56d6fa574f1093f4be30cb90ac127_populate_lic.tgz.sig, attempting MIRRORS if available
> ERROR: Fetcher failure: Unable to find file file://d1/sstate:m4::1.4.17:r0::3:d1c56d6fa574f1093f4be30cb90ac127_populate_lic.tgz.sig anywhere. The paths that were searched were:
>     /media/build1/poky/build/temp_sstate_20150907190612
>     /media/build1/poky/build/temp_sstate_20150907190612
> NOTE: recipe m4-1.4.17-r0: task do_populate_lic_setscene: Succeeded
> 
> and from that error and my knowledge of what was in -next, I can be
> pretty sure this comes from:
> 
> http://git.yoctoproject.org/cgit.cgi/poky/commit/?id=e3feac122b6baa67a6e75a99da6e8834f0f2a7b0
> 
> and I now know who to blame (sorry Ross!). The issue is that this is now
> in master and we have the error in a much more serious place. If the QA
> code had shown better information in the case of failure, this would
> never have made it into master.
> 
> I mention this since I'm hoping a practical example of how the test
> failures influence decisions and make a difference to the project might
> encourage people to pay more attention to these details.

This is a good point to note; failure output that tells you why a test failed 
helps a lot when you're trying to work out whether it's the test or the code 
under test that needs fixing.
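
To make that concrete, the difference is roughly this (only a sketch; 'result' 
and 'cmd' here are hypothetical placeholders for whatever the test ran):

    # Bare assertion: on failure all the log tells you is "1 != 0"
    self.assertEqual(result.status, 0)

    # Assertion with context: the failure message carries the command and
    # its output, so the autobuilder log is actually diagnosable
    self.assertEqual(result.status, 0,
                     msg="Command '%s' failed with status %d:\n%s" %
                         (cmd, result.status, result.output))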

One addendum though: this isn't necessary if you are using runCmd() or 
bitbake() without passing ignore_status=True - they will fail the test and 
print the command output for you if the command fails. Generally I only pass 
ignore_status=True when I need to test a command that is expected to fail (any 
necessary cleanup can usually be handled through add_command_to_tearDown() and 
track_for_cleanup()).
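
For example, this is roughly the pattern I follow (just a sketch; I'm assuming 
the usual oeqa selftest base class and helpers, and the recipe and command 
names are only illustrative):

    from oeqa.selftest.base import oeSelfTest
    from oeqa.utils.commands import runCmd, bitbake

    class ExampleTest(oeSelfTest):

        def test_failure_output(self):
            # Normal case: no ignore_status - if this fails, the helper
            # fails the test and prints the command output for us
            bitbake('m4-native')

            # Expected-failure case: only here do I pass ignore_status=True,
            # then assert on the status with the output in the message
            result = runCmd('bitbake does-not-exist', ignore_status=True)
            self.assertNotEqual(result.status, 0,
                                msg="Expected a failure but got:\n%s" %
                                    result.output)

            # Cleanup goes through the base class helper rather than code
            # at the end of the test body (which may never run); similarly,
            # track_for_cleanup() handles files/directories the test creates
            self.add_command_to_tearDown('bitbake -c cleansstate m4-native')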

Cheers,
Paul

-- 

Paul Eggleton
Intel Open Source Technology Centre


