[OE-core] [PATCHv2 0/3] Upgrades to comply with Intel Graphics Stack Release 2014Q3

Richard Purdie richard.purdie at linuxfoundation.org
Wed Dec 10 16:21:54 UTC 2014


On Wed, 2014-12-10 at 14:17 -0200, Otavio Salvador wrote:
> On Wed, Dec 10, 2014 at 2:15 PM, Richard Purdie
> <richard.purdie at linuxfoundation.org> wrote:
> > On Tue, 2014-12-09 at 18:19 -0200, Otavio Salvador wrote:
> >> On Tue, Dec 9, 2014 at 6:06 PM, Burton, Ross <ross.burton at intel.com> wrote:
> >> >
> >> > On 9 December 2014 at 18:14, Aníbal Limón <anibal.limon at linux.intel.com>
> >> > wrote:
> >> >>
> >> >> I don't know, maybe Ross knows?
> >> >
> >> >
> >> > I ask for Piglit results to verify that approximately 95% pass, as that's
> >> > the expected pass rate.  I don't bother tracking the exact count or how it
> >> > changes, as there are almost ten thousand tests and at some point you have
> >> > to let upstream deal with this.
> >>
> >> What is the point in having a tool for regression testing if its results are not tracked?
> >
> > Going forward, the plan is to track this, FWIW. Right now that is a work
> > in progress, as the test result collection piece is missing. It's also a
> > chicken-and-egg problem: you can't track things until you have some
> > tests to track. Tests are also being run manually to check for regressions
> > in a coarse manner, which is better than not doing it at all.
> 
> Well, when asked, Ross said he is not tracking this data. So the question...

He personally isn't, and he has valid reasons for looking at an overall
percentage rather than at individual scores, but the QA people working on
the project are working towards tracking the data. Once we can track it,
we will need ways of analysing the data, as that in itself is tricky.
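
As a rough illustration of the kind of analysis involved (not part of the
patch series), here is a minimal sketch that computes an overall pass rate
from a piglit results file. It assumes a JSON layout with a top-level
"tests" mapping of test name to a dict carrying a "result" string; the
file path and the 95% threshold are taken from the discussion above, and
the exact schema varies between piglit versions, so treat this purely as a
sketch:

#!/usr/bin/env python
# Sketch: compute the overall piglit pass rate from a JSON results file.
# Assumes {"tests": {"name": {"result": "pass" | "fail" | ...}}}; real
# piglit output may be compressed and its schema differs by version.
import json
import sys

def pass_rate(path):
    with open(path) as f:
        results = json.load(f)
    tests = results.get("tests", {})
    if not tests:
        return 0.0
    passed = sum(1 for t in tests.values() if t.get("result") == "pass")
    return 100.0 * passed / len(tests)

if __name__ == "__main__":
    rate = pass_rate(sys.argv[1])        # e.g. results.json (hypothetical path)
    print("pass rate: %.1f%%" % rate)
    sys.exit(0 if rate >= 95.0 else 1)   # 95% is the expected rate quoted above

Comparing two such summaries between builds is the coarse regression check
mentioned above; per-test diffs are what proper tracking would add on top.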

Cheers,

Richard




