[OE-core] [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting

Richard Purdie richard.purdie at linuxfoundation.org
Mon Jan 21 14:25:38 UTC 2019


On Fri, 2019-01-04 at 14:46 +0800, Yeoh Ee Peng wrote:
> These scripts were developed as an alternative test case management
> tool to Testopia. Using these scripts, users can manage the
> testresults.json files generated by oeqa automated tests. Using the
> "store" operation, users can store multiple groups of test results,
> each in an individual git branch. Within each git branch, users can
> store multiple testresults.json files under different directories
> (e.g. directories categorized by selftest-<distro>, runtime-<image>-
> <machine>).
> Then, using the "report" operation, users can view a test result
> summary for all stored testresults.json files, grouped by directory
> and test configuration.
>
> These scripts depend on scripts/oe-git-archive, which fails if the
> gitpython package is not installed. Refer to [YOCTO #13082] for more
> detail.

Thanks for the patches. These are a lot more readable than the previous
versions and the code quality is much better, which in turn helped the
review!

I experimented with the code a bit. I'm fine with the manual test
execution piece of this, but I do have some questions and concerns
about the result storage/reporting piece.

What target layout are we aiming for in the git repository?
- Are we aiming for a directory per commit tested, where all the test
results for that commit are in the same json file?
- A directory per commit, then a directory per type of test? Or per
test run? Or something else?
- Are branches used for each release series (master, thud, sumo etc.)?
Basically, the layout we'd use to import the autobuilder results for
each master run, for example, remains unclear to me, as does how we'd
look up the status of a given commit.
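
For example, one layout I could imagine (purely illustrative, not a
proposal) would be a branch per release series with a directory per
tested commit and a directory per test type underneath:

  master (branch)
    <commit-sha>/
      selftest-<distro>/testresults.json
      runtime-<image>-<machine>/testresults.json

but I can't tell from the patches whether something like that is the
intent.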

The code doesn't support comparison of two sets of test results (which
tests were added/removed? passed when previously failed? failed when
previously passed?)
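
To illustrate the kind of comparison I mean, here is a rough sketch. It
assumes testresults.json maps a result id to a "result" section of
test case name -> {"status": ...}; this is only an illustration of the
reporting I'd like to see, not a request for this exact code:

#!/usr/bin/env python3
# Rough sketch: compare two testresults.json files and list added,
# removed, regressed and fixed test cases. Assumes the format
# {result_id: {"result": {testcase: {"status": ...}}}}.
import json
import sys

def load_statuses(path):
    # Flatten all result sections into a single {testcase: status} map
    statuses = {}
    with open(path) as f:
        data = json.load(f)
    for entry in data.values():
        for testcase, details in entry.get("result", {}).items():
            statuses[testcase] = details.get("status")
    return statuses

base = load_statuses(sys.argv[1])
target = load_statuses(sys.argv[2])
common = set(base) & set(target)

added = sorted(set(target) - set(base))
removed = sorted(set(base) - set(target))
regressions = sorted(t for t in common
                     if base[t] == "PASSED" and target[t] != "PASSED")
fixes = sorted(t for t in common
               if base[t] != "PASSED" and target[t] == "PASSED")

print("Added: %d  Removed: %d  Regressions: %d  Fixes: %d" %
      (len(added), len(removed), len(regressions), len(fixes)))
for testcase in regressions:
    print("REGRESSION: %s (%s -> %s)" % (testcase, base[testcase],
                                         target[testcase]))

Being able to get that kind of answer directly from the stored git data
is what I'd expect the reporting side to cover.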

The code also doesn't allow investigation of test report "subdata",
such as looking at the ptest results, comparing them to previous runs,
or showing the logs for passed/failed ptests.
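
Again just as a sketch of what I'd want to be able to do here (I'm
assuming ptest results are the entries whose names carry a
"ptestresult." prefix and that any captured log text is stored
alongside the status; both are assumptions on my part):

#!/usr/bin/env python3
# Sketch: pull the ptest entries out of a testresults.json file and
# show the failed ones together with any attached log text.
import json
import sys

with open(sys.argv[1]) as f:
    data = json.load(f)

for entry in data.values():
    for name, details in entry.get("result", {}).items():
        # Assumption: ptest cases are keyed as "ptestresult.<suite>.<case>"
        if not name.startswith("ptestresult."):
            continue
        if details.get("status") != "PASSED":
            print("%s: %s" % (name, details.get("status")))
            if details.get("log"):
                print(details["log"])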

There is also the question of json build performance data.

The idea behind this code is to give us a report which allows us to
decide on the QA state of a given set of test report data. I'm just
not sure this patch set lets us do that, or that it gives us a path
towards doing that.

Cheers,

Richard
