[OE-core] [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting

Yeoh, Ee Peng ee.peng.yeoh at intel.com
Tue Jan 22 09:44:47 UTC 2019


Hi Richard,

After your recent feedback on writing more Pythonic code, we have revised these scripts in the hope of improving code readability and ease of maintenance. New functionality has also been developed in the same Pythonic style.

The latest patches were submitted today at the URLs below.
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278240.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278238.html
http://lists.openembedded.org/pipermail/openembedded-core/2019-January/278239.html

Changes compared to the previous version:
1. Added new features: merging of multiple testresults.json files, and regression analysis between two specified testresults.json files (a conceptual merge sketch follows this list)
2. Added selftests covering the merge, store, report and regression functionality
3. Revised the code style to be more Pythonic
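
As a rough illustration of what the merge feature does (a simplified sketch only, not the actual resultstool code; the function name and file paths are hypothetical, and it assumes each testresults.json is a dictionary mapping a result_id to its configuration and per-test-case results, as produced by oeqa):

import json

# Simplified sketch: combine several testresults.json files into one.
# Assumes each file maps result_id -> {"configuration": {...}, "result": {...}}.
def merge_testresults(input_files, output_file):
    merged = {}
    for path in input_files:
        with open(path) as f:
            # If the same result_id appears in more than one file,
            # the entry from the later file wins.
            merged.update(json.load(f))
    with open(output_file, 'w') as f:
        json.dump(merged, f, sort_keys=True, indent=4)

# Hypothetical usage:
merge_testresults(['selftest/testresults.json', 'runtime/testresults.json'],
                  'merged/testresults.json')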

Regarding your questions below:
1. What target layout are we aiming for in the git repository? 
- Are we aiming for a directory per commit tested where all the test results for that commit are in the same json file?
- A directory per commit, then a directory per type of test? or per test run? or ???
- Are branches used for each release series (master, thud, sumo etc?) Basically, the layout we'd use to import the autobuilder results for each master run for example remains unclear to me, or how we'd look up the status of a given commit.

The target layout is a specific git branch for each commit tested, with the directory structure based on the existing Autobuilder results archive. For example, assuming the store command is executed on the Autobuilder machine that archived the testresults.json files under a predefined directory, simply execute:

$ resultstool store <source_dir> <git_branch>

where <source_dir> is the top-level directory used by the Autobuilder to archive all the testresults.json files, and <git_branch> names the QA cycle for the commit under test.

The first invocation of "resultstool store" generates a git repository under the <poky>/<build>/ directory. To update the stored files, simply execute:

$ resultstool store <source_dir> <git_branch> -d <poky>/<build>/<testresults_datetime>
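
As a purely hypothetical illustration of the resulting layout (the branch name below is made up, and the directories simply mirror however the Autobuilder archived the results, e.g. the selftest-<distro> and runtime-<image>-<machine> categories mentioned in the patch description):

<poky>/<build>/<testresults_datetime>/          (git repository)
  on branch <git_branch>, e.g. "qa-cycle-for-commit-X" (hypothetical)
    selftest-<distro>/testresults.json
    runtime-<image>-<machine>/testresults.json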

2. The code doesn't support comparison of two sets of test results (which tests were added/removed? passed when previously failed? failed when previously passed?)

Assuming the results for a particular tested commit have been merged into a single file (using the existing "merge" functionality), the newly added "regression" functionality can be used to compare the result statuses of two testresults.json files. Using the configuration data recorded for each result_id, the comparison logic pairs up results that share the same configuration and compares only those. More advanced regression analysis and automation can be built on top of the current code base.
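
Conceptually, the pairing and comparison works along these lines (again a simplified sketch rather than the actual implementation; the function names are hypothetical, and it assumes each test case entry in testresults.json carries a "status" field):

import json

def load_results(path):
    with open(path) as f:
        return json.load(f)

# Index results by their configuration so that only like-for-like runs are compared.
def index_by_configuration(results):
    index = {}
    for result_id, entry in results.items():
        key = json.dumps(entry['configuration'], sort_keys=True)
        index[key] = entry['result']
    return index

def regression(base_file, target_file):
    base = index_by_configuration(load_results(base_file))
    target = index_by_configuration(load_results(target_file))
    for key in base.keys() & target.keys():
        base_res, target_res = base[key], target[key]
        # Status changes (e.g. PASSED -> FAILED) for test cases present in both runs.
        for testcase in base_res.keys() & target_res.keys():
            old = base_res[testcase]['status']
            new = target_res[testcase]['status']
            if old != new:
                print('%s: %s -> %s' % (testcase, old, new))
        # Test cases added or removed between the two runs.
        for testcase in target_res.keys() - base_res.keys():
            print('%s: added' % testcase)
        for testcase in base_res.keys() - target_res.keys():
            print('%s: removed' % testcase)

# Hypothetical usage:
regression('base/testresults.json', 'target/testresults.json')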

3. The code also doesn't allow investigation of test report "subdata" like looking at the ptest results, comparing them to previous runs, showing the logs for passed/failed ptests.

There is also the question of json build performance data.

These are not supported at the moment; they will need further enhancement.

Please let me know if you have any questions or input. Thank you very much for your feedback and help!

Thanks,
Yeoh Ee Peng 



-----Original Message-----
From: Richard Purdie [mailto:richard.purdie at linuxfoundation.org] 
Sent: Monday, January 21, 2019 10:26 PM
To: Yeoh, Ee Peng <ee.peng.yeoh at intel.com>; openembedded-core at lists.openembedded.org
Cc: Burton, Ross <ross.burton at intel.com>; Paul Eggleton <paul.eggleton at linux.intel.com>
Subject: Re: [OE-core] [PATCH 2/3 v3] scripts/test-case-mgmt: store test result and reporting

On Fri, 2019-01-04 at 14:46 +0800, Yeoh Ee Peng wrote:
> These scripts were developed as an alternative testcase management 
> tool to Testopia. Using these scripts, user can manage the 
> testresults.json files generated by oeqa automated tests. Using the 
> "store" operation, user can store multiple groups of test result each 
> into individual git branch. Within each git branch, user can store 
> multiple testresults.json files under different directories (eg.
> categorize directory by selftest-<distro>, runtime-<image>- 
> <machine>).
> Then, using the "report" operation, user can view the test result 
> summary for all available testresults.json files being stored that 
> were grouped by directory and test configuration.
>
> This scripts depends on scripts/oe-git-archive where it was facing 
> error if gitpython package was not installed. Refer to [YOCTO# 13082] 
> for more detail.

Thanks for the patches. These are a lot more readable than the previous versions and the code quality is much better which in turn helped review!

I experimented with the code a bit. I'm fine with the manual test execution piece of this, I do have some questions/concerns with the result storage/reporting piece though.

What target layout are we aiming for in the git repository? 
- Are we aiming for a directory per commit tested where all the test results for that commit are in the same json file?
- A directory per commit, then a directory per type of test? or per test run? or ???
- Are branches used for each release series (master, thud, sumo etc?) Basically, the layout we'd use to import the autobuilder results for each master run for example remains unclear to me, or how we'd look up the status of a given commit.

The code doesn't support comparison of two sets of test results (which tests were added/removed? passed when previously failed? failed when previously passed?)

The code also doesn't allow investigation of test report "subdata" like looking at the ptest results, comparing them to previous runs, showing the logs for passed/failed ptests.

There is also the question of json build performance data.

The idea behind this code is to give us a report which allows us to decide on the QA state of a given set of testreport data. I'm just not sure this patch set lets us do that, or gives us a path to allow us to do that either.

Cheers,

Richard




