Hi Richard,

> > > Anecdotally, we are running Zeus for nightly builds with three
> > > multiconfigs. I cherry-picked your "bitbake: fix2" and "bitbake:
> > > fixup" patches and haven't seen any of the BB_UNIHASH errors since.
> > > Granted it's only been a week. But before that, hash equiv +
> > > multiconfig was unusable due to the BB_UNIHASH errors.
> >
> > That is a really helpful data point, thanks. I should probably clean up
> > those bitbake patches and get them merged then, I couldn't decide if
> > they were right or not...
> >
> 
> I just picked all your pending changes out of master-next into our
> local patch queue - will let you know how it looks when it's finished
> cooking!

There are two small issues I have observed.  

One is that I occasionally get a lot of non-deterministic metadata errors when 
BB_CACHE_POLICY = "cache", multiconfig, and hash equiv are all enabled. The 
errors are all on recipes for which SRCREV = "${AUTOREV}". It doesn't always 
happen, but it did just now when I rebased our "zeus-modified" branch onto the 
upstream "zeus" branch, to get the changes starting with 
7dc72fde6edeb5d6ac6b3832530998afeea67cbc. 
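
For reference, the combination that triggers it looks roughly like this in 
our setup (the exact hash equiv and multiconfig values below are illustrative, 
not copied verbatim from our config):

        BB_CACHE_POLICY = "cache"
        BB_SIGNATURE_HANDLER = "OEEquivHash"
        BB_HASHSERVE = "auto"
        BBMULTICONFIG = "mc1 mc2 mc3"

plus recipes that set SRCREV = "${AUTOREV}".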

Two is that the "Initializing tasks" stage sometimes appears stuck at 44% for 
a couple of minutes. I traced it down to this code in runqueue.py (line 1168 
on zeus):

        # Iterate over the task list and call into the siggen code
        dealtwith = set()
        todeal = set(self.runtaskentries)
        while len(todeal) > 0: 
            for tid in todeal.copy():
                if len(self.runtaskentries[tid].depends - dealtwith) == 0:
                    dealtwith.add(tid)
                    todeal.remove(tid)
                    self.prepare_task_hash(tid)

When I instrument the loop to print out the size of "todeal", I see it decrease 
very slowly, sometimes only by a couple of entries per pass. I'm guessing this 
is because prepare_task_hash contacts the hash equiv server serially here, and 
I'm over my work VPN, which makes each round trip extra slow. Is there an 
opportunity for batching here? 

Thanks,
Chris
-- 
_______________________________________________
Openembedded-core mailing list
[email protected]
http://lists.openembedded.org/mailman/listinfo/openembedded-core