[OE-core] ptest formatting

Eric Yu eric.yu at ni.com
Thu Jul 24 16:40:17 UTC 2014


Hello,
        My name is Eric Yu, and I am an intern at National Instruments this 
summer. The project I am currently working on involves integrating 
ptest with our automated testing framework to help test our 
OpenEmbedded distributions. The Yocto Project wiki states that one 
major goal of ptest is to consolidate the output of different package 
tests into a common “<result>: <testname>” format. I was hoping that this 
common format would allow ptest results to be easily machine-parsed and 
fed into our existing testing framework. However, after enabling and 
running the ptests, I discovered that the formatting of the results is 
not as friendly to automation as I had hoped.
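
For example, I was expecting the results to be readable directly as one 
line per test, along the lines of (test names here are invented):

  PASS: test-alloc
  FAIL: test-locale
  SKIP: test-network
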
In practice, it appears that each package prints its own errors, 
warnings, and other output alongside the test results, burying the 
common output in a lot of unrelated text. Also, one package (gdk-pixbuf) 
reports “FAILED” rather than the expected “FAIL”. 
In the bash ptests, several tests print warnings saying that failures 
where the output differs only by whitespace should be ignored. This 
seems like poor practice for test writing and is not friendly to 
automated analysis of test results. 
At the conclusion of each ptest, some packages give a summary of how many 
tests passed, were skipped, or failed, while others do not. I find that 
these summaries give a useful overview of how the run went overall and 
are a good reference in case some tests fail to produce the common 
“<result>: <testname>” output.
I understand that much of this is because the tests for different 
packages are written by separate developers, but it would be beneficial 
if ptest were friendlier to automated parsing and analysis of test 
results. Currently, I have addressed some of these obstacles by writing 
a simple script that parses the output of each ptest and prints only the 
“<result>: <testname>” results, accounting for both “FAIL” and “FAILED” 
(a rough sketch of the approach is included below). The script keeps a 
running count of how many tests were reported as passed, skipped, or 
failed, and at the conclusion of each ptest it prints a summary with 
those counts and the total number of tests run. While this works with 
the current set of ptests, it may not scale well as more packages add 
ptest functionality and introduce further inconsistencies in formatting. 
Therefore, I believe it would be a good idea to enforce a more 
consistent format for ptest results to support the use of ptest in 
automated testing. Are there any plans to further consolidate the ptest 
result format so that it is more accessible to automated testing?
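
For reference, here is a simplified sketch of the kind of parsing I am 
describing (illustrative only, not my exact script). It reads the output 
of a package's run-ptest script on stdin, keeps only the result lines, 
normalizes “FAILED” to “FAIL”, and prints a summary at the end:

  #!/usr/bin/env python
  # Illustrative sketch: filter ptest output down to "<result>: <testname>"
  # lines, normalize FAILED to FAIL, and keep per-run counts.
  import re
  import sys

  result_re = re.compile(r'^(PASS|FAILED|FAIL|SKIP):\s*(.*)$')
  counts = {'PASS': 0, 'FAIL': 0, 'SKIP': 0}

  for line in sys.stdin:
      m = result_re.match(line.strip())
      if not m:
          continue                     # drop warnings, errors and other noise
      result, testname = m.group(1), m.group(2)
      if result == 'FAILED':           # normalize e.g. gdk-pixbuf's spelling
          result = 'FAIL'
      counts[result] += 1
      print('%s: %s' % (result, testname))

  total = sum(counts.values())
  print('SUMMARY: total=%d pass=%d skip=%d fail=%d'
        % (total, counts['PASS'], counts['SKIP'], counts['FAIL']))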