[OE-core] [PATCH] "Finish" the IMAGE_GEN_DEBUGFS implementation

Richard Purdie richard.purdie at linuxfoundation.org
Fri Oct 2 14:30:43 UTC 2015


On Fri, 2015-10-02 at 09:23 -0500, Mark Hatle wrote:
> On 10/1/15 5:31 PM, Richard Purdie wrote:
> > On Thu, 2015-10-01 at 13:26 -0500, Mark Hatle wrote:
> >> It was noticed today that the IMAGE_GEN_DEBUGFS implementation was not 
> >> complete.  The version that was merged back in May only contained the 
> >> filesystem generation pieces, but not the pieces for creating the image
> >> from that filesystem.
> >>
> >> The code has been tested and is working.  The only thing that I don't 
> >> particularly like is that the processing code and loop is a duplicate of
> >> the code that runs just before.  Unfortunately the only way around this
> >> is to change the way the parallel bits are processed to support
> >> multiple datastores...  (or create "another" function...)
> >>
> >> Any feedback appreciated, but without this the feature is broken!
> > 
> > Could we not make a function which these two code points then call?
> 
> The duplicate piece is because the existing setup and loop depend on the local
> self.d value(s).  In order to do this, we need to temporarily modify self.d and
> run this under an alternative datastore, and then put it back to the original value.
> 
> If I don't duplicate the:
> 
>             for image_cmds in debugfs_image_cmd_groups:
> ...
>                 results = list(pool.imap(generate_image, image_cmds))
> ...
>                 for image_type, subimages, script in image_cmds:
> >                     bb.note("Creating debugfs symlinks for %s image ..." % image_type)
>                     self._create_symlinks(subimages)
> 
> there is no concept of two different datastores.
> 
> The alternative we have is to include a reference to the datastore itself in the
> 'image_cmds'.  Then we could support any number of datastores as appropriate for
> the commands.  (This of course will require additional changes to be able to
> pass that datastore to the various users.)

Ah, right. Going from memory, we can't share a datastore with a
multiprocessing pool. We could do that with a multithreaded pool, but I'm
nervous about that for two reasons: first, we've had bad interactions
between threads and processes in the past, and second, there are concerns
around locking and data usage in the threads.

So not an easy problem...
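The constraint above can be shown with a few lines of plain multiprocessing: each pool worker operates on a copy of the parent's objects, so a mutation made in a child never reaches the parent's view. Nothing here is bitbake API; the dict is just a stand-in for a datastore like `self.d`.

```python
from multiprocessing import Pool

# Stand-in for a datastore; a real d.setVar() would behave the same way
# across a process boundary.
store = {"IMAGE_NAME": "core-image-minimal"}

def mutate(key):
    # Runs in a child process on a *copy* of 'store'.
    store[key] = "changed-in-child"
    return store[key]

def child_sees_copy():
    """Mutate the store in a pool worker, then report what the child
    returned versus what the parent still sees."""
    with Pool(2) as pool:
        child_value = pool.map(mutate, ["IMAGE_NAME"])[0]
    return child_value, store["IMAGE_NAME"]
```

The child's return value shows the mutation took effect in its copy, while the parent's dict is unchanged; that is why two datastores currently mean two loops (or explicit per-command datastore plumbing).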

Cheers,

Richard



