On Thu, 2017-08-17 at 10:50 +0300, Markus Lehtonen wrote:
> Hi,
> 
> I quickly ran some tests on a Xeon server, using glibc-locale as the
> recipe to build:
> 100: 154s
> 10: 162s (+5%)
> 1: 234s (+51%)

What I did to measure parallel versus serial was to run the corresponding
selftest (signing.Signing.test_signing_packages) several times with chunk
sizes of 100, 20, 10 and 1.
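
To make the setup concrete, the sketch below shows (roughly) the loop in
meta/lib/oe/gpg_sign.py that I patched between runs, with the hard-coded
batch size pulled out into a variable. This is illustrative only; CHUNK
is just a name for the value I varied:

    # Illustrative sketch, not the committed code: the chunked signing
    # loop from gpg_sign.py with the batch size made a variable so it
    # can be changed between oe-selftest runs.
    CHUNK = 100  # varied per run: 100, 20, 10, 1

    for i in range(0, len(files), CHUNK):
        # 'cmd' is the rpmsign command line built earlier in sign_rpms()
        status, output = oe.utils.getstatusoutput(cmd + ' '.join(files[i:i + CHUNK]))
        if status:
            raise bb.build.FuncFailed("Failed to sign RPM packages: %s" % output)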

Here are the results (Xeon machine also)

100:
- Ran 1 test in 51.857s
- Ran 1 test in 52.148s
- Ran 1 test in 52.048s
- Ran 1 test in 52.397s

20:

- Ran 1 test in 54.068s
- Ran 1 test in 67.295s
- Ran 1 test in 52.608s
- Ran 1 test in 51.948s
- Ran 1 test in 53.283s

10:

- Ran 1 test in 55.178s
- Ran 1 test in 56.468s
- Ran 1 test in 52.735s
- Ran 1 test in 53.530s
- Ran 1 test in 53.064s

1:
- Ran 1 test in 52.604s
- Ran 1 test in 53.211s
- Ran 1 test in 53.020s
- Ran 1 test in 53.017s
- Ran 1 test in 53.029s


So, at least at the selftest level, there is no such performance penalty
as the one you observed. This is the test involved:


    @OETestID(1362)
    def test_signing_packages(self):
        """
        Summary:     Test that packages can be signed in the package feed
        Expected:    Package should be signed with the correct key
        Expected:    Images can be created from signed packages
        """

> 
> Even if signing is not parallel, the difference may be explained by the 
> number of rpm processes that get spawned. I would drop the factor to 10 or 
> use BB_NUMBER_THREADS as Andre suggested in another email.
>   - Markus
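
If we go the configurable route, a minimal sketch could look like the one
below. RPM_GPG_SIGN_CHUNK is an invented variable name, and I am assuming
the value gets resolved in LocalSigner.__init__, where the datastore 'd'
is available:

    # Sketch only: make the batch size configurable instead of
    # hard-coding it. RPM_GPG_SIGN_CHUNK is a hypothetical variable;
    # fall back to BB_NUMBER_THREADS, then to serial signing.
    self.sign_chunk = int(d.getVar('RPM_GPG_SIGN_CHUNK', True) or
                          d.getVar('BB_NUMBER_THREADS', True) or 1)

    # ... and the loop in sign_rpms() would then use it:
    for i in range(0, len(files), self.sign_chunk):
        status, output = oe.utils.getstatusoutput(
            cmd + ' '.join(files[i:i + self.sign_chunk]))
        if status:
            raise bb.build.FuncFailed("Failed to sign RPM packages: %s" % output)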
> 
> 
> 
> On 16/08/2017, 19.00, "Leonardo Sandoval" 
> <leonardo.sandoval.gonza...@linux.intel.com> wrote:
> 
>     On Wed, 2017-08-16 at 15:28 +0300, Markus Lehtonen wrote:
>     > I agree. I don't see a reason for dropping parallelism completely.
>     > There is a real gain when running on beefier machines. Making it
>     > configurable would probably be best. Or just drop it to a saner
>     > value, like 20 or 10.
>     >    - Markus
>     > 
>     
>     I ran some tests with 100, 20 and 1 and I saw no difference in times
>     (I can rerun and provide the numbers). gpg may be intrinsically
>     serial, so passing 1 or N files won't make much difference in time.
>     The only gain when using file chunks is that only one process is
>     launched.
>     
>     On the other hand, I tried using multiprocessing.Pool, but failed
>     miserably due to file locking issues.
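
For the record, the Pool attempt looked roughly like the sketch below
(reconstructed, not the actual code; my guess is that the concurrent gpg
processes collide on their lock files):

    # Reconstructed sketch of the failed multiprocessing attempt. Each
    # worker signs one rpm in parallel; concurrent gpg invocations appear
    # to fight over lock files, which is why this did not work.
    import multiprocessing

    def sign_one(path):
        # 'cmd' is the rpmsign command line built by sign_rpms()
        status, output = oe.utils.getstatusoutput(cmd + ' ' + path)
        if status:
            raise bb.build.FuncFailed("Failed to sign %s: %s" % (path, output))

    pool = multiprocessing.Pool()
    try:
        # map() blocks until all files are signed; an exception raised in
        # a worker propagates back here
        pool.map(sign_one, files)
    finally:
        pool.close()
        pool.join()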
>     
>     
>     
>     > On 16/08/2017, 2.53, "Mark Hatle"
>     > <openembedded-core-boun...@lists.openembedded.org on behalf of
>     > mark.ha...@windriver.com> wrote:
>     > 
>     >     It would probably be better if this were configurable with a
>     >     'safe' default.
>     >     
>     >     Moving from parallel to single will greatly affect the overall
>     >     performance on larger build machines (lots of memory and cores)
>     >     that can handle the load, versus a typical development machine.
>     >     
>     >     --Mark
>     >     
>     >     On 8/15/17 4:40 PM, leonardo.sandoval.gonza...@linux.intel.com wrote:
>     >     > From: Leonardo Sandoval <leonardo.sandoval.gonza...@linux.intel.com>
>     >     > 
>     >     > gpg signing in file batches (which defaulted to 100) is a
>     >     > memory-expensive operation, causing trouble on some host
>     >     > machines (even on the production AB, as seen in the bugzilla
>     >     > ID below). Also, in terms of performance, there is no real
>     >     > gain when rpm signing is done in batches. Given these issues,
>     >     > perform the rpm signing serially.
>     >     > 
>     >     > Log showing errors observed recently at AB workers:
>     >     > 
>     >     >     | gpg: signing failed: Cannot allocate memory
>     >     >     | gpg: signing failed: Cannot allocate memory
>     >     >     | error: gpg exec failed (2)
>     >     >     | /home/pokybuild/yocto-autobuilder/yocto-worker/nightly-oe-selftest/build/build/tmp/work/core2-64-poky-linux/base-passwd/3.5.29-r0/deploy-rpms/core2_64/base-passwd-dev-3.5.29-r0.core2_64.rpm:
>     >     > 
>     >     > [YOCTO #11914]
>     >     > 
>     >     > Signed-off-by: Leonardo Sandoval <leonardo.sandoval.gonza...@linux.intel.com>
>     >     > ---
>     >     >  meta/lib/oe/gpg_sign.py | 6 +++---
>     >     >  1 file changed, 3 insertions(+), 3 deletions(-)
>     >     > 
>     >     > diff --git a/meta/lib/oe/gpg_sign.py b/meta/lib/oe/gpg_sign.py
>     >     > index f4d8b10e4b..5c7985a856 100644
>     >     > --- a/meta/lib/oe/gpg_sign.py
>     >     > +++ b/meta/lib/oe/gpg_sign.py
>     >     > @@ -45,9 +45,9 @@ class LocalSigner(object):
>     >     >              if fsk_password:
>     >     >                  cmd += "--define '_file_signing_key_password %s' " % fsk_password
>     >     >  
>     >     > -        # Sign in chunks of 100 packages
>     >     > -        for i in range(0, len(files), 100):
>     >     > -            status, output = oe.utils.getstatusoutput(cmd + ' '.join(files[i:i+100]))
>     >     > +        # Sign packages
>     >     > +        for f in files:
>     >     > +            status, output = oe.utils.getstatusoutput(cmd + ' ' + f)
>     >     >              if status:
>     >     >                  raise bb.build.FuncFailed("Failed to sign RPM packages: %s" % output)
>     >     >  
>     >     > 
>     >     