[bitbake-devel] [PATCHv2] s3.py: Add support for fetching source mirrors/minor cleanup

Beth 'pidge' Flanagan pidge at toganlabs.com
Wed Mar 29 09:07:58 UTC 2017


On Tue, 2017-03-28 at 09:30 -0700, Andre McCurdy wrote:
> On Tue, Mar 28, 2017 at 7:04 AM, Elizabeth 'pidge' Flanagan
> <pidge at toganlabs.com> wrote:
> > 
> > This commit's main purpose is to add support for fetching source
> > mirrors. In the current incarnation:
> > 
> > SOURCE_MIRROR_URL ?= "s3://mybucket/downloads"
> > 
> > will fail for two reasons. First, the s3 fetcher's download doesn't
> > support it, and second, without aws included in HOSTTOOLS, aws
> > won't be found by bitbake (for either source mirrors or sstate
> > mirrors).
> > 
> > Part of this is fixed with this commit. However, this will still
> > fail if HOSTTOOLS doesn't include 'aws' in bitbake.conf. I've
> > another commit or two to fix that as well.
> > 
> > I've also DRYed up some of the error handling, removed the
> > hardcoded aws, and added some logging.
> > 
> > Signed-off-by: Elizabeth 'pidge' Flanagan <pidge at toganlabs.com>
> > ---
> >  lib/bb/fetch2/s3.py | 22 ++++++++++++++++------
> >  1 file changed, 16 insertions(+), 6 deletions(-)
> > 
> > diff --git a/lib/bb/fetch2/s3.py b/lib/bb/fetch2/s3.py
> > index 27993aa..791f3b2 100644
> > --- a/lib/bb/fetch2/s3.py
> > +++ b/lib/bb/fetch2/s3.py
> > @@ -34,6 +34,7 @@ import urllib.request, urllib.parse, urllib.error
> >  from bb.fetch2 import FetchMethod
> >  from bb.fetch2 import FetchError
> >  from bb.fetch2 import runfetchcmd
> > +from bb.fetch2 import logger
> > 
> >  class S3(FetchMethod):
> >      """Class to fetch urls via 'aws s3'"""
> > @@ -48,6 +49,8 @@ class S3(FetchMethod):
> >          return True
> > 
> >      def urldata_init(self, ud, d):
> > +        ud.basecmd = d.getVar("FETCHCMD_s3", True) or "/usr/bin/env aws s3"
> > +
> >          if 'downloadfilename' in ud.parm:
> >              ud.basename = ud.parm['downloadfilename']
> >          else:
> > @@ -60,8 +63,13 @@ class S3(FetchMethod):
> >          Fetch urls
> >          Assumes localpath was called first
> >          """
> > -
> > -        cmd = 'aws s3 cp s3://%s%s %s' % (ud.host, ud.path, ud.localpath)
> > +        if 'downloadfilename' in ud.parm:
> > +            dldir = d.getVar("DL_DIR", True)
> > +            bb.utils.mkdirhier(os.path.dirname(dldir + os.sep + ud.localfile))
> > +            cmd = '%s cp s3://%s%s %s%s%s' % (ud.basecmd, ud.host, ud.path, dldir, os.sep, ud.localpath)
> > +        else:
> > +            cmd = '%s cp s3://%s%s %s' % (ud.basecmd, ud.host, ud.path, ud.localpath)
> > +        logger.debug(2, "Fetching %s using command '%s'" % (ud.url, cmd))
> The intention is that "downloadfilename" is incorporated into
> ud.localfile by urldata_init(). Did you find a case where that
> doesn't
> work?
> 

Well, out of the box, the fetcher doesn't work:

DEBUG: lzo-native-2.09-r0 do_fetch: Fetcher failure: Fetch command
export SSH_AGENT_PID="26127"; export SSH_AUTH_SOCK="/tmp/ssh-
3LoCAIBmY5WA/agent.26126"; export PATH="/home/pidge/openembedded-
core/scripts/native-intercept:/home/pidge/openembedded-
core/scripts:/home/pidge/openembedded-core/build/tmp-glibc/work/x86_64-
linux/lzo-native/2.09-r0/recipe-sysroot-native/usr/bin/x86_64-
linux:/home/pidge/openembedded-core/build/tmp-glibc/work/x86_64-
linux/lzo-native/2.09-r0/recipe-sysroot-
native/usr/bin:/home/pidge/openembedded-core/build/tmp-
glibc/work/x86_64-linux/lzo-native/2.09-r0/recipe-sysroot-
native/usr/sbin:/home/pidge/openembedded-core/build/tmp-
glibc/work/x86_64-linux/lzo-native/2.09-r0/recipe-sysroot-
native/usr/bin:/home/pidge/openembedded-core/build/tmp-
glibc/work/x86_64-linux/lzo-native/2.09-r0/recipe-sysroot-
native/sbin:/home/pidge/openembedded-core/build/tmp-glibc/work/x86_64-
linux/lzo-native/2.09-r0/recipe-sysroot-
native/bin:/home/pidge/openembedded-
core/bitbake/bin:/home/pidge/openembedded-core/build/tmp-
glibc/hosttools"; export HOME="/home/pidge"; aws s3 cp s3://lgi-onemw-
staging/beth-test/downloads/lzo-2.09.tar.gz /home/pidge/openembedded-
core/build/downloads/lzo-2.09.tar.gz failed with exit code 127, output:
/bin/sh: aws: command not found

I agree that part of the above isn't needed (I had pulled it in from
an older version of an s3 fetcher I wrote for krogoth last year).

Some of this, however, is needed or desired (like assigning the
FETCHCMD and using ud.basecmd). aws also needs to be added to
HOSTTOOLS_NONFATAL in order to get past all this (which is really the
core issue).
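
For reference, a minimal sketch of the follow-up configuration change
being described (variable names assumed from oe-core's bitbake.conf;
the exact form may differ in the later commits mentioned above):

```
# Sketch: make the aws CLI visible to bitbake's restricted fetcher
# environment. HOSTTOOLS_NONFATAL is oe-core's list of optional host
# tools, so a missing aws binary won't fail unrelated builds.
HOSTTOOLS_NONFATAL += "aws"

# Optional override of the command the s3 fetcher runs; the patch
# falls back to "/usr/bin/env aws s3" when this is unset.
FETCHCMD_s3 = "/usr/bin/env aws s3"
```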

> > 
> >          bb.fetch2.check_network_access(d, cmd, ud.url)
> >          runfetchcmd(cmd, d)
> > 
> > @@ -70,11 +78,11 @@ class S3(FetchMethod):
> >          # tool with a little healthy suspicion).
> > 
> >          if not os.path.exists(ud.localpath):
> > -            raise FetchError("The aws cp command returned success for s3://%s%s but %s doesn't exist?!" % (ud.host, ud.path, ud.localpath))
> > +            raise FetchError("The command %s returned success but %s doesn't exist?!" % (cmd, ud.localpath))
> This seems fairly arbitrary churn...
> 

I wouldn't necessarily agree that it's arbitrary churn. If a
FetchError is raised, I'd want to know the exact command that was
issued.
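
To illustrate the point, a minimal self-contained sketch (FetchError
and check_download here are simplified stand-ins for the bb.fetch2
originals, and the paths are hypothetical) of an error that carries
the exact command:

```python
import os

class FetchError(Exception):
    """Simplified stand-in for bb.fetch2.FetchError."""

def check_download(cmd, localpath):
    """Verify a fetch command that reported success actually produced the file.

    Embedding the exact command in the error message means a failure
    can be reproduced straight from the build log, without guessing
    which tool or arguments the fetcher used.
    """
    if not os.path.exists(localpath):
        raise FetchError("The command %s returned success but %s "
                         "doesn't exist?!" % (cmd, localpath))
```

With this shape, the log line for a failed s3 fetch names the full
`aws s3 cp` invocation rather than just the bucket and path.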

> > 
> > 
> >          if os.path.getsize(ud.localpath) == 0:
> >              os.remove(ud.localpath)
> > -            raise FetchError("The aws cp command for s3://%s%s resulted in a zero size file?! Deleting and failing since this isn't right." % (ud.host, ud.path))
> > +            raise FetchError("The command %s resulted in a zero size file?! Deleting and failing since this isn't right." % (cmd))
> Ditto.
> 
> > 
> > 
> >          return True
> > 
> > @@ -83,7 +91,9 @@ class S3(FetchMethod):
> >          Check the status of a URL
> >          """
> > 
> > -        cmd = 'aws s3 ls s3://%s%s' % (ud.host, ud.path)
> > +        cmd = '%s ls s3://%s%s' % (ud.basecmd, ud.host, ud.path)
> > +        logger.debug(2, "Checking %s using command '%s'" % (ud.url, cmd))
> runfetchcmd() will output basically the same thing, so this isn't
> really required.
> 

Agreed.

> > 
> > +
> >          bb.fetch2.check_network_access(d, cmd, ud.url)
> >          output = runfetchcmd(cmd, d)
> > 
> > @@ -91,6 +101,6 @@ class S3(FetchMethod):
> >          # is not found, so check output of the command to confirm
> > success.
> > 
> >          if not output:
> > -            raise FetchError("The aws ls command for s3://%s%s gave empty output" % (ud.host, ud.path))
> > +            raise FetchError("The command %s gave empty output" % (cmd))
> More churn for little obvious gain.
> 
> > 
> > 
> >          return True
> > --
> > 1.9.1
> > 
> > --
> > _______________________________________________
> > bitbake-devel mailing list
> > bitbake-devel at lists.openembedded.org
> > http://lists.openembedded.org/mailman/listinfo/bitbake-devel


