[OE-core] ptest formatting

Tudor Florea Tudor.Florea at enea.com
Thu Jul 24 22:02:34 UTC 2014


Hi,



On 7/24/2014 19:40, Eric Yu wrote:

> Hello,

>        My name is Eric Yu, and I am an intern at National Instruments this summer. The project I am currently

> working on involves integrating ptest with our current automated testing framework to help test our

> OpenEmbedded distributions. The Yocto Project wiki states that one major point of ptest is to consolidate the

> output of different package tests in a common “<result>: <testname>” format. I was hoping that this common

> format would allow ptest results to be easily machine parsed and integrated with our current testing framework.

> However, after enabling and running the ptests, I discovered that the formatting of the results was not as

> friendly to automation as I had hoped.

>

>  It appears that each separate package prints out its own errors, warnings, and other output along with these

> test results, burying the common output in lots of other text.



Indeed, one of the purposes of ptest is to facilitate automation by using the common <result>: <testname> format.

However, ptest should not filter the rest of a package's test output just to make automation easier.

That output may be useful later when investigating why a test failed, and when further improving the package (and its tests).
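
For reference, a conforming result stream is simply one such line per test case, for example (the test names below are made up; only the <result>: <testname> shape is fixed):

  PASS: test-arithmetic
  FAIL: test-signals
  SKIP: test-network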

It should be easier to implement this filtering in your automation scripts.
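
For example, a minimal, untested sketch in Python (assuming the raw log of a ptest run arrives on stdin, covering the result words mentioned in this thread, and folding the stray "FAILED" you found into "FAIL"):

import re
import sys

# One regexp for the common "<result>: <testname>" lines; everything
# else (warnings, diagnostics, build noise) is skipped by this filter.
RESULT_RE = re.compile(r'^(PASS|FAILED|FAIL|SKIP|XFAIL|XPASS|ERROR): (.*)')

counts = {}
for line in sys.stdin:
    m = RESULT_RE.match(line)
    if not m:
        continue
    result, testname = m.groups()
    if result == 'FAILED':    # normalize the variant you observed
        result = 'FAIL'
    counts[result] = counts.get(result, 0) + 1
    print('%s: %s' % (result, testname))

# Print a summary at the end of the run.
for result in sorted(counts):
    print('%d %s' % (counts[result], result))
print('%d tests run in total' % sum(counts.values()))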



>

> Also, one package (gdk-pixbuf) used “FAILED” when reporting results rather than the expected “FAIL”.



This sounds like a bug. You can help us by sending a patch for it. :)



>

> In the bash ptests, several tests print warnings saying to ignore failures where the output differs only by

> whitespace. This seems to be bad practice for test writing and is not friendly to automated analysis of test

> results.



This is a good example to support my statement above: while automation scripts may easily

add PASS, FAIL, SKIP, XFAIL, XPASS and ERROR entries into a database, those additional warnings are helpful

to people interested in improving bash (and bash testing), hence the output should be kept alongside the results.
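
And if it helps with the database side, storing the normalized entries is only a few more lines. A rough, untested sketch using Python's built-in sqlite3 module (the table name and schema here are made up for illustration):

import sqlite3

def store_results(pairs, db_path='ptest-results.db'):
    # pairs: iterable of (result, testname) tuples, e.g. collected
    # by the filter sketched earlier in this mail.
    conn = sqlite3.connect(db_path)
    conn.execute('CREATE TABLE IF NOT EXISTS results '
                 '(result TEXT, testname TEXT)')
    conn.executemany('INSERT INTO results VALUES (?, ?)', pairs)
    conn.commit()
    conn.close()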



>

> At the conclusion of each ptest, some packages give a summary of how many tests were passed, skipped, and

> failed, while others do not. I find that having these summaries gives a useful overview of how the test went

> overall and is a good reference in case some tests fail to produce the common “<result>: <testname>” output.

>

> I understand that much of this is due to the fact that separate developers write the tests for different packages,

> but it would be beneficial if ptest were friendlier to automated parsing and analysis of test results. Currently, I

> have addressed some of these obstacles by writing a simple script that parses the output of each ptest and

> outputs only the “<result>: <testname>” results while accounting for both “FAIL” and “FAILED”. The script keeps

> a running count of how many tests were reported as failed, skipped or passed, and at the conclusion of each

> ptest, the script prints a summary including the number of tests passed, skipped, and failed along with a total

> number of tests run. While this works with the current set of ptests, as more and more packages add ptest

> functionality, this script may not scale well if more inconsistencies in formatting are introduced. Therefore, I

> believe it would be a good idea to enforce a more consistent formatting of ptest results to assist in the use of

> ptest for automated testing. Are there any plans to further consolidate the ptest result format such that it is

> more accessible for automated testing?



I hope I have at least partially answered your questions above.

Kind regards,

  Tudor.