Re: [yocto] Build time data

2012-04-13 Thread Darren Hart

On 04/12/2012 10:51 PM, Martin Jansa wrote:

 And my system is very slow compared to yours, I've found my
 measurement of core-image-minimal-with-mtdutils around 95 mins 
 http://patchwork.openembedded.org/patch/17039/ but this was with
 Phenom II X4 965, 4GB RAM, RAID0 (3 SATA2 disks) for WORKDIR, RAID5
 (the same 3 SATA2 disks) BUILDDIR (raid as mdraid), now I have
 Bulldozer AMD FX(tm)-8120, 16GB RAM, still the same RAID0 but 
 different motherboard..

Why RAID5 for BUILDDIR? The write overhead of RAID5 is very high. The
capacity savings RAID5 affords you are more significant with more disks,
but with 3 disks it's only one disk better than RAID10, with a lot more
overhead.

I spent some time outlining all this a while back:
http://www.dvhart.com/2011/03/qnap_ts419p_configuration_raid_levels_and_throughput/

Here's the relevant bit:

RAID 5 distributes parity across all the drives in the array. This
parity calculation is both compute intensive and IO intensive: every
write requires the parity calculation, and data must be written to
every drive.
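
As a back-of-the-envelope illustration of that overhead (a minimal
sketch; the per-disk IOPS figure and the classic penalty factors are
assumptions, not measurements from this thread):

# Rough random-write IOPS per RAID level. Classic penalty factors:
# RAID0 = 1 physical write per logical write, RAID10 = 2 (mirror copy),
# RAID5 = 4 (read old data, read old parity, write data, write parity).
DISK_IOPS = 120          # assumed IOPS of one 7200 rpm SATA disk
PENALTY = {"raid0": 1, "raid10": 2, "raid5": 4}

def write_iops(level, disks):
    return disks * DISK_IOPS / PENALTY[level]

for level, disks in [("raid0", 3), ("raid5", 3), ("raid10", 4)]:
    print(f"{level} x{disks}: ~{write_iops(level, disks):.0f} random-write IOPS")

With those assumptions, 3-disk RAID5 sustains roughly a quarter of the
random-write IOPS of the same disks in RAID0.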



--
Darren Hart
Intel Open Source Technology Center
Yocto Project - Linux Kernel


Re: [yocto] Build time data

2012-04-13 Thread Martin Jansa
On Thu, Apr 12, 2012 at 11:08:19PM -0700, Darren Hart wrote:
 
 On 04/12/2012 10:51 PM, Martin Jansa wrote:
 
  And my system is very slow compared to yours, I've found my
  measurement of core-image-minimal-with-mtdutils around 95 mins 
  http://patchwork.openembedded.org/patch/17039/ but this was with
  Phenom II X4 965, 4GB RAM, RAID0 (3 SATA2 disks) for WORKDIR, RAID5
  (the same 3 SATA2 disks) BUILDDIR (raid as mdraid), now I have
  Bulldozer AMD FX(tm)-8120, 16GB RAM, still the same RAID0 but 
  different motherboard..
 
 Why RAID5 for BUILDDIR? The write overhead of RAID5 is very high. The
 capacity savings RAID5 affords you are more significant with more disks,
 but with 3 disks it's only one disk better than RAID10, with a lot more
 overhead.

Because RAID10 needs at least 4 drives and all my SATA ports are
already used, and it's also on my /home partition.. Please note that
this is not some company build server, just my desktop where I happen
to do a lot of builds for a community distribution for smartphones:
http://shr-project.org

The server we have available for builds is _much_ slower than this,
especially in IO (some virtualized host on a busy server), but it has
much better network bandwidth.. :).

Cheers,
 
 I spent some time outlining all this a while back:
 http://www.dvhart.com/2011/03/qnap_ts419p_configuration_raid_levels_and_throughput/
 
 Here's the relevant bit:
 
 RAID 5 distributes parity across all the drives in the array. This
 parity calculation is both compute intensive and IO intensive: every
 write requires the parity calculation, and data must be written to
 every drive.
 
 
 
 --
 Darren Hart
 Intel Open Source Technology Center
 Yocto Project - Linux Kernel

-- 
Martin 'JaMa' Jansa jabber: martin.ja...@gmail.com




Re: [yocto] Build time data

2012-04-13 Thread Wolfgang Denk
Dear Darren Hart,

In message 4f87c2d3.8020...@linux.intel.com you wrote:

  Phenom II X4 965, 4GB RAM, RAID0 (3 SATA2 disks) for WORKDIR, RAID5
  (the same 3 SATA2 disks) BUILDDIR (raid as mdraid), now I have
  Bulldozer AMD FX(tm)-8120, 16GB RAM, still the same RAID0 but 
  different motherboard..
 
 Why RAID5 for BUILDDIR? The write overhead of RAID5 is very high. The
 capacity savings RAID5 affords you are more significant with more disks,
 but with 3 disks it's only one disk better than RAID10, with a lot more
 overhead.

Indeed, RAID5 with just 3 devices makes little sense - especially
when running on the same drives as the RAID0 workdir.

 I spent some time outlining all this a while back:
 http://www.dvhart.com/2011/03/qnap_ts419p_configuration_raid_levels_and_throughput/

Well, such data from a 4-spindle array don't tell much. When you are
asking for I/O performance on RAID arrays, you want to distribute the
load over _many_ spindles. Do your comparisons on an 8 or 16 (or more)
spindle setup, and the results will be much different. Also, your
test of copying huge files is just one usage mode: strictly
sequential access. But what we see with OE / Yocto builds is
completely different. Here you will see a huge number of small and
even tiny data transfers.

Classical recommendations for performance optimization of RAID
arrays (which usually tune for exactly such big, sequential accesses),
like using big stripe sizes, huge read-ahead, etc., turn out
to be counter-productive here.  It makes no sense to have, for
example, a stripe size of 256 kB or more when 95% or more of your disk
accesses write less than 4 kB.
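
A minimal sketch of that mismatch (illustrative numbers; a 3-disk
RAID5 has 2 data disks per stripe):

# Fraction of a full RAID stripe that a small build-time write touches.
# A full-stripe write on N data disks moves chunk_kb * N of data, so
# tuning for big sequential I/O buys nothing for 4 kB writes.
DATA_DISKS = 2           # e.g. a 3-disk RAID5: 2 data + 1 parity per stripe
WRITE_KB = 4             # typical tiny write during an OE/Yocto build

for chunk_kb in (64, 256, 512):
    stripe_kb = chunk_kb * DATA_DISKS
    print(f"chunk {chunk_kb:3d} kB: full stripe {stripe_kb:4d} kB, "
          f"a 4 kB write covers {100 * WRITE_KB / stripe_kb:.1f}% of it")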

 Here's the relevant bit:
 
 RAID 5 distributes parity across all the drives in the array, this
 parity calculation is both compute intensive and IO intensive. Every
 write requires the parity calculation, and data must be written to
 every drive.

But did you look at a real system?  I never found the CPU load of the
parity calculations to be a bottleneck.  I'd rather have the CPU spend
cycles on computing parity than run with all cores idle because it's
waiting for I/O to complete.  I found that for the workloads we have
(software builds like Yocto etc.) a multi-spindle software RAID array
outperforms all other solutions (and especially the h/w RAID
controllers I've had access to so far - these don't even come close
to the same number of IOPS).

OH - and BTW: if you care about reliability, then don't use RAID5.
Go for RAID6.  Yes, it's more expensive, but it's also much less
painful when you have to rebuild the array after a disk failure.
I've seen too many cases where a second disk failed during the
rebuild to ever go with RAID5 for big systems again - restoring
several TB of data from tape is no fun.
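
A rough sketch of why that matters (the failure rate and rebuild time
are assumptions for illustration, not data from this thread):

# Chance that at least one more disk dies while a degraded array
# rebuilds. RAID5 loses the array on that second failure; RAID6
# survives it. Treats failures as independent, so this is a lower
# bound: real failures correlate (same batch, extra rebuild stress).
AFR = 0.05               # assumed annual failure rate per disk
REBUILD_HOURS = 24       # assumed rebuild time for a multi-TB array
HOURS_PER_YEAR = 8766

def p_second_failure(surviving_disks):
    p_disk = AFR * REBUILD_HOURS / HOURS_PER_YEAR   # per-disk risk in window
    return 1 - (1 - p_disk) ** surviving_disks

for n in (3, 8, 16):
    print(f"{n} surviving disks: ~{100 * p_second_failure(n):.2f}% risk "
          f"of a second failure during the rebuild")

The risk grows with the number of disks and the rebuild window, which
is exactly the big-array case described above.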

See also the RAID wiki for specific performance optimizations of such
RAID arrays.

Best regards,

Wolfgang Denk

-- 
DENX Software Engineering GmbH, MD: Wolfgang Denk  Detlev Zundel
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: w...@denx.de
Never put off until tomorrow what you can put off indefinitely.




Re: [yocto] Build time data

2012-04-13 Thread Richard Purdie
On Thu, 2012-04-12 at 07:34 -0700, Darren Hart wrote:
 
 On 04/12/2012 07:08 AM, Björn Stenberg wrote:
  Darren Hart wrote:
  /dev/md0/build  ext4 
  noauto,noatime,nodiratime,commit=6000
  
  A minor detail: 'nodiratime' is a subset of 'noatime', so there is no
  need to specify both.
 
 Excellent, thanks for the tip.

Note the key here is that for a system with large amounts of memory, you
can effectively keep the build in memory due to the long commit time.

All the tests I've done show we are not IO bound anyway.
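
To put rough numbers on that (my arithmetic, not Richard's):
commit=6000 tells ext4 to sync its data and metadata every 6000
seconds, i.e. every 100 minutes, so a ~36-minute core-image-minimal
build can finish inside a single commit interval. With enough RAM,
many intermediate files are created and deleted again without ever
being written out to disk.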


  Yet for all the combined horsepower, I am unable to match your time
  of 30 minutes for core-image-minimal. I clock in at around 37 minutes
  for a qemux86-64 build with ipk output:
  
  NOTE: Tasks Summary: Attempted 1363 tasks of which 290 didn't
  need to be rerun and all succeeded.
  
  real 36m32.118s  user 214m39.697s  sys 108m49.152s
  
  These numbers also show that my build is running less than 9x
  realtime, indicating that 80% of my cores sit idle most of the time.
 
 Yup, that sounds about right. The build has a linear component to it,
 and anything above about 12 just doesn't help. In fact the added
 scheduling overhead seems to hurt.

  This confirms what ps xf says during the builds: Only rarely is
  bitbake running more than a handful tasks at once, even with
  BB_NUMBER_THREADS at 64. And many of these tasks are in turn running
  sequential loops on a single core.
  
  I'm hoping to find time soon to look deeper into this issue and
  suggest remedies. It my distinct feeling that we should be able to
  build significantly faster on powerful machines.
  
 
 Reducing the dependency chains that result in the linear component of
 the build (forcing serialized execution) is one place we've focused, and
 could probably still use some attention. CC'ing RP as he's done a lot there.

The minimal build is about our worst case single threaded build as it is
highly dependency ordered. We've already done a lot of work looking at
the single thread of core dependencies, and this is for example why we
have gettext-minimal-native, which unlocked some of the core path
dependencies. When you look at what we build, there is a reason for most
of it, unfortunately. There are emails from me on the mailing list about
what I looked at and found; I tried to keep a record of it somewhere at
least. You can get some wins with things like ASSUME_PROVIDED +=
"git-native".

For something like a sato build you should see more parallelism. 

I do also have some small gains in some pending patches:

http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/t2&id=2023801e25d81e8cffb643eac259c18b9fecda0b
http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/t2&id=ecf5f5de8368fdcf90c3d38eafc689d6d265514b
http://git.yoctoproject.org/cgit.cgi/poky-contrib/commit/?h=rpurdie/t2&id=2190a51ffac71c9d19305601f8a3a46e467b745a

which look at speeding up do_package, do_package_write_rpm and do_rootfs
(with rpm). These were developed too late for 1.2 and are in some cases
only partially complete, but they show some ways we can squeeze some
extra performance out of the system.

There are undoubtedly ways we can improve performance, but I think we've
picked the low-hanging fruit and we need some fresh ideas.

Cheers,

Richard




Re: [yocto] Build time data

2012-04-13 Thread Björn Stenberg
Darren Hart wrote:
 One thing that comes to mind is the parallel settings, BB_NUMBER_THREADS
 and PARALLEL_MAKE. I noticed a negative impact if I increased these
 beyond 12 and 14 respectively. I tested this with bb-matrix
 (scripts/contrib/bb-perf/bb-matrix.sh). The script is a bit fickle, but
 can provide useful results and killer 3D surface plots of build time
 with BB and PM on the axis.

Very nice! I ran a batch overnight with permutations of 8,12,16,24,64 cores:

BB PM %e %S %U %P %c %w %R %F %M %x
8 8 2288.96 2611.37 10773.53 584% 810299 18460161 690464859 0 1715456 0
8 12 2198.40 2648.57 10846.28 613% 839750 18559413 690563187 0 1982864 0
8 16 2157.26 2672.79 10943.59 631% 898599 18487946 690761197 0 1715440 0
8 24 2125.15 2916.33 11199.27 664% 89 18412764 690856116 0 1715440 0
8 64 2189.14 7084.14 12906.95 913% 1491503 18646891 699897733 0 1715440 0
12 8 2277.66 2625.82 10805.21 589% 691752 18596208 690998433 0 1715440 0
12 12 2194.04 2664.01 10934.65 619% 714997 18717017 691199925 0 1715440 0
12 16 2183.95 2736.33 11162.30 636% 1090270 18359128 690559327 0 1715440 0
12 24 2120.46 2907.63 11229.50 666% 829783 18644293 690729638 0 1715312 0
12 64 2171.58 6767.09 12822.86 902% 1524683 18634668 690904549 0 1867456 0
16 8 2294.59 2691.74 10813.69 588% 771621 18637582 686712129 0 1715344 0
16 12 2201.51 2704.54 11017.23 623% 753662 18590533 699231236 0 1715424 0
16 16 2154.54 2692.31 11023.28 636% 809586 18557781 691014487 0 1715440 0
16 24 2130.33 2932.18 11259.09 666% 905669 18531776 691082307 0 2030992 0
16 64 2184.01 6954.71 12922.39 910% 1467774 18800203 701770099 0 1715440 0
24 8 2284.88 2645.88 10854.89 590% 833061 18523938 691067170 0 1715328 0
24 12 2203.72 2696.96 11033.10 623% 931443 18457749 691187723 0 2016368 0
24 16 2176.02 2727.94 3.33 636% 940044 18420200 690959670 0 1715440 0
24 24 2170.38 2938.80 11643.10 671% 1023328 18641215 686665448 15 1715440 0
24 64 2200.02 7188.60 12902.42 913% 1509158 18924772 690615091 66 1715440 0
64 8 2309.40 2702.33 10952.18 591% 753168 18687309 690927732 10 1867440 0
64 12 2230.80 2765.98 11131.22 622% 875495 18744802 691213524 28 1715216 0
64 16 2182.22 2786.22 11180.86 640% 881328 18724987 691020084 109 1768576 0
64 24 2136.20 3001.36 11238.81 666% 898320 18646384 691239254 46 1715312 0
64 64 2189.73 7154.10 12846.99 913% 1416830 18781801 690890798 41 1715424 0

What it shows is that BB_NUMBER_THREADS makes no difference at all in this 
range. As for PARALLEL_MAKE, it shows 24 is better than 16 but 64 is too high, 
incurring a massive scheduling penalty. I wonder if newer kernel versions have 
become more efficient. In hindsight, I should have included 32 and 48 cores in 
the test.
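
For anyone who wants to slice numbers like these, a minimal sketch
(the file name results.txt is hypothetical; the column layout follows
the GNU time header row above, with %e as the third column):

# Average wall-clock time (%e) grouped by the PARALLEL_MAKE value
# in the second column of the pasted table.
from collections import defaultdict

elapsed_by_pm = defaultdict(list)
with open("results.txt") as f:          # hypothetical file holding the table
    next(f)                             # skip the "BB PM %e ..." header row
    for line in f:
        cols = line.split()
        if len(cols) < 3:
            continue
        pm, elapsed = int(cols[1]), float(cols[2])
        elapsed_by_pm[pm].append(elapsed)

for pm in sorted(elapsed_by_pm):
    runs = elapsed_by_pm[pm]
    print(f"PM={pm:2d}: mean %e = {sum(runs)/len(runs):7.2f} s over {len(runs)} runs")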

Unfortunately I was unable to produce plots with bb-matrix-plot.sh. It gave me 
pretty png files, but missing any plotted data:

# ../../poky/scripts/contrib/bb-perf/bb-matrix-plot.sh
 line 0: Number of grid points must be in [2:1000] - not changed!

  Warning: Single isoline (scan) is not enough for a pm3d plot.
   Hint: Missing blank lines in the data file? See 'help pm3d' and FAQ.
  Warning: Single isoline (scan) is not enough for a pm3d plot.
   Hint: Missing blank lines in the data file? See 'help pm3d' and FAQ.
  Warning: Single isoline (scan) is not enough for a pm3d plot.
   Hint: Missing blank lines in the data file? See 'help pm3d' and FAQ.
  Warning: Single isoline (scan) is not enough for a pm3d plot.
   Hint: Missing blank lines in the data file? See 'help pm3d' and FAQ.

Result: http://imgur.com/mfgWb

-- 
Björn


Re: [yocto] When building an image using yocto, I was puzzled by such errors, who can help me? thx!

2012-04-13 Thread Gary Thomas

On 2012-04-13 03:55, 云鹤 wrote:

When building an image using Yocto, I was puzzled by these errors; who can help
me? Thx!

I am new to Yocto and I followed the Yocto Project Quick Start document to start
building an image. After typing $ bitbake -k core-image-sato, I get errors:
jackie@jackie-Lenovo:~/yocto/edison-6.0.1-build$ bitbake -k core-image-minimal
Loading cache: 100% |###| ETA: 00:00:00
Loaded 1041 entries from dependency cache.

OE Build Configuration:
BB_VERSION = 1.13.3
TARGET_ARCH = i586
TARGET_OS = linux
MACHINE = qemux86
DISTRO = poky
DISTRO_VERSION = 1.1.1
TUNE_FEATURES = m32 i586
TARGET_FPU = 
meta
meta-yocto = unknown:unknown

NOTE: Resolving any missing task queue dependencies
NOTE: Preparing runqueue
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
NOTE: Running task 327 of 1827 (ID: 1009, 
/home/jackie/yocto/poky-edison-6.0.1/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_2.6.37.2.bb,
 do_fetch)
NOTE: Running task 637 of 1827 (ID: 584, 
/home/jackie/yocto/poky-edison-6.0.1/meta/recipes-kernel/linux/linux-yocto_3.0.bb,
 do_fetch)
NOTE: package linux-libc-headers-2.6.37.2-r2: task do_fetch: Started
NOTE: package 
linux-yocto-3.0.18+git1+d386e09f316e03061c088d2b13a48605c20fb3a6_1+d7e81e7f975c57c581ce13446adf023f95d9fd9f-r3:
 task do_fetch: Started
ERROR: Function 'File: 
'/home/jackie/yocto/edison-6.0.1-build/downloads/linux-2.6.37.2.tar.bz2' has 
md5 checksum cdc3c12d007a0ff597a34b062907e8f7 when
89f681bc7c917a84aa7470da6eed5101 was expected (from URL: 
'http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.37.2.tar.bz2')' failed
ERROR: Logfile of failure stored in: 
/home/jackie/yocto/edison-6.0.1-build/tmp/work/i586-poky-linux/linux-libc-headers-2.6.37.2-r2/temp/log.do_fetch.3886
Log data follows:
| ERROR: Function 'File: 
'/home/jackie/yocto/edison-6.0.1-build/downloads/linux-2.6.37.2.tar.bz2' has 
md5 checksum cdc3c12d007a0ff597a34b062907e8f7 when
89f681bc7c917a84aa7470da6eed5101 was expected (from URL: 
'http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.37.2.tar.bz2')' failed
NOTE: package linux-libc-headers-2.6.37.2-r2: task do_fetch: Failed
ERROR: Task 1009 
(/home/jackie/yocto/poky-edison-6.0.1/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_2.6.37.2.bb,
 do_fetch) failed with exit code '1'
……
| ERROR: Function 'Fetcher failure for URL:
'git://git.yoctoproject.org/linux-yocto-3.0-1.1.x.git;protocol=git;nocheckout=1;branch=yocto/standard/common-pc/base,meta;name=machine,meta'.
 Unable to fetch URL
git://git.yoctoproject.org/linux-yocto-3.0-1.1.x.git;protocol=git;nocheckout=1;branch=yocto/standard/common-pc/base,meta;name=machine,meta
 from any source.' failed
|
NOTE: package 
linux-yocto-3.0.18+git1+d386e09f316e03061c088d2b13a48605c20fb3a6_1+d7e81e7f975c57c581ce13446adf023f95d9fd9f-r3:
 task do_fetch: Failed
ERROR: Task 584 
(/home/jackie/yocto/poky-edison-6.0.1/meta/recipes-kernel/linux/linux-yocto_3.0.bb,
 do_fetch) failed with exit code '1'
ERROR: 
'/home/jackie/yocto/poky-edison-6.0.1/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_2.6.37.2.bb'
 failed
ERROR: 
'/home/jackie/yocto/poky-edison-6.0.1/meta/recipes-kernel/linux/linux-yocto_3.0.bb'
 failed

who can help me? thanks all!


The file downloaded as
/home/jackie/yocto/edison-6.0.1-build/downloads/linux-2.6.37.2.tar.bz2
is corrupt. Try deleting it (and
/home/jackie/yocto/edison-6.0.1-build/downloads/linux-2.6.37.2.tar.bz2.done
as well) and start again.

--

Gary Thomas |  Consulting for the
MLB Associates  |Embedded world



Re: [yocto] Build time data

2012-04-13 Thread Koen Kooi

On 13 Apr 2012, at 11:56, Tomas Frydrych wrote:

 On 12/04/12 01:30, Darren Hart wrote:
 Next up is storage. 
 
 Indeed. In my experience by far the biggest limiting factor in the
 builds is getting io bound. If you are not running a dedicated build
 machine, it is well worth using a dedicated disk for the poky tmp dir;
 assuming you have cpu time left, this leaves the machine completely
 usable for other things.
 
 
 Now RAM, you will want about 2 GB of RAM per core, with a minimum of 4GB.
 
 My experience does not bear this out at all; building Yocto on a 6-core
 hyper-threaded desktop machine I have never seen system memory usage
 get significantly over the 2GB mark (out of 8GB available) when doing
 a Yocto build using 10 cores/threads.

Try building webkit or asio: the linker will use ~1.5GB per object, so for
asio you need PARALLEL_MAKE * 1.5 GB of RAM to avoid swapping to disk.
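
Worked out (my arithmetic, extrapolating Koen's figure): with
PARALLEL_MAKE = "-j 8" the worst case is 8 * 1.5 GB = 12 GB for the
link steps alone, which already exceeds the 8GB machine above; with
"-j 10" it is ~15 GB. That is consistent with the 2 GB-of-RAM-per-core
guideline mentioned earlier in the thread.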


[yocto] Rebuild of 1.2 rc3.

2012-04-13 Thread Flanagan, Elizabeth
We've rebased the 1.2_M4 branch and rebuilt the images. The new images are
available at:

http://autobuilder.pokylinux.org/nightly/20120412-2/

This build is based off the 1.2_M4.rc3.2 tag with fae1c7a5fdcea19 (flip
of distro.conf):
http://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/?h=1.2_M4&id=fae1c7a5fdcea199338b5f91efaafd7c72aec5dd

The adt for this build should be up shortly (within the hour).

-- 
Elizabeth Flanagan
Yocto Project
Build and Release


Re: [yocto] Build time data

2012-04-13 Thread Darren Hart


On 04/13/2012 01:47 AM, Björn Stenberg wrote:
 Darren Hart wrote:
 One thing that comes to mind is the parallel settings, BB_NUMBER_THREADS
 and PARALLEL_MAKE. I noticed a negative impact if I increased these
 beyond 12 and 14 respectively. I tested this with bb-matrix
 (scripts/contrib/bb-perf/bb-matrix.sh). The script is a bit fickle, but
 can provide useful results and killer 3D surface plots of build time
 with BB and PM on the axis.
 
 Very nice! I ran a batch overnight with permutations of 8,12,16,24,64 cores:
 
 [bb-matrix results table snipped; see Björn's mail above]
 
 What it shows is that BB_NUMBER_THREADS makes no difference at all in this 
 range. As for PARALLEL_MAKE, it shows 24 is better than 16 but 64 is too 
 high, incurring a massive scheduling penalty. I wonder if newer kernel 
 versions have become more efficient. In hindsight, I should have included 32 
 and 48 cores in the test.
 
 Unfortunately I was unable to produce plots with bb-matrix-plot.sh. It gave 
 me pretty png files, but missing any plotted data:


Right, gnuplot likes evenly spaced values of BB and PM. So you could
have done: 8,12,16,24,28,32 (anything above that is going to go down
anyway). Unfortunately, the gaps force the plot to generate spikes at
the interpolated points. I'm open to ideas on how to make it compatible
with arbitrary gaps and avoid the spikes.


Perhaps I should rewrite this with python matplotlib and scipy and use
the interpolate module. This is non-trivial, so not something I'll get
to quickly.
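
A minimal sketch of that idea (the data file name and column layout
are assumptions; scipy's griddata resamples the uneven BB/PM points
onto a regular grid so the surface has no artificial spikes):

import numpy as np
from scipy.interpolate import griddata
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, registers the 3d projection
import matplotlib.pyplot as plt

# hypothetical whitespace-separated file with columns: BB, PM, elapsed
data = np.loadtxt("bb-matrix.dat")
bb, pm, elapsed = data[:, 0], data[:, 1], data[:, 2]

# resample the irregular samples onto an even 50x50 grid
bb_g, pm_g = np.meshgrid(np.linspace(bb.min(), bb.max(), 50),
                         np.linspace(pm.min(), pm.max(), 50))
elapsed_g = griddata((bb, pm), elapsed, (bb_g, pm_g), method="cubic")

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(bb_g, pm_g, elapsed_g)
ax.set_xlabel("BB_NUMBER_THREADS")
ax.set_ylabel("PARALLEL_MAKE")
ax.set_zlabel("elapsed (s)")
plt.savefig("bb-matrix.png")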

 
 # ../../poky/scripts/contrib/bb-perf/bb-matrix-plot.sh
  line 0: Number of grid points must be in [2:1000] - not changed!
 
   Warning: Single isoline (scan) is not enough for a pm3d plot.
Hint: Missing blank lines in the data file? See 'help pm3d' and 
 FAQ.
   Warning: Single isoline (scan) is not enough for a pm3d plot.
Hint: Missing blank lines in the data file? See 'help pm3d' and 
 FAQ.
   Warning: Single isoline (scan) is not enough for a pm3d plot.
Hint: Missing blank lines in the data file? See 'help pm3d' and 
 FAQ.
   Warning: Single isoline (scan) is not enough for a pm3d plot.
Hint: Missing blank lines in the data file? See 'help pm3d' and 
 FAQ.
 
 Result: http://imgur.com/mfgWb
 

-- 
Darren Hart
Intel Open Source Technology Center
Yocto Project - Linux Kernel


[yocto] [PATCH 1/1] [meta-intel] Romley: Update SRCREV

2012-04-13 Thread kishore . k . bodke
From: Kishore Bodke kishore.k.bo...@intel.com

Update the SRCREV to include the
82580 Gigabit Ethernet driver from the meta branch
for the romley machine.

Signed-off-by: Kishore Bodke kishore.k.bo...@intel.com
---
 .../recipes-kernel/linux/linux-yocto_3.2.bbappend  |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/meta-romley/recipes-kernel/linux/linux-yocto_3.2.bbappend 
b/meta-romley/recipes-kernel/linux/linux-yocto_3.2.bbappend
index 803e772..19d96bf 100644
--- a/meta-romley/recipes-kernel/linux/linux-yocto_3.2.bbappend
+++ b/meta-romley/recipes-kernel/linux/linux-yocto_3.2.bbappend
@@ -5,5 +5,5 @@ COMPATIBLE_MACHINE_romley = "romley"
 KMACHINE_romley  = "romley"
 KBRANCH_romley  = "standard/default/common-pc-64/romley"
 
-SRCREV_machine_pn-linux-yocto_romley ?= "5c7f1c53b5b367858ae6a86c1d4de36d8c71bedb"
+SRCREV_machine_pn-linux-yocto_romley ?= "135c75bf9615334b5b8bb9108d612fe7dfbdb901"
 SRCREV_meta_pn-linux-yocto_romley ?= "59f350ec3794e19fa806c1b73749d851f8ebf364"
-- 
1.7.5.4



[yocto] [PATCH 0/1] [meta-intel] Romley: Update SRCREV

2012-04-13 Thread kishore . k . bodke
From: Kishore Bodke kishore.k.bo...@intel.com

Updating the SRCREV in the linux 3.2 bbappend file
to include the 82580 Gigabit Ethernet drivers
from the meta branch for the romley machine.

Please pull into meta-intel/master.

Thanks
Kishore.

The following changes since commit 15860ffb2164629c27f4bf49614efad5177441c4:

  Cedartrail: Update the README. (2012-04-12 09:09:07 -0500)

are available in the git repository at:
  git://git.pokylinux.org/meta-intel-contrib kishore/romley
  http://git.pokylinux.org/cgit.cgi/meta-intel-contrib/log/?h=kishore/romley

Kishore Bodke (1):
  Romley: Update SRCREV

 .../recipes-kernel/linux/linux-yocto_3.2.bbappend  |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

-- 
1.7.5.4



[yocto] [PATCH 1/1] [meta-intel] Romley: Update SRCREV

2012-04-13 Thread kishore . k . bodke
From: Kishore Bodke kishore.k.bo...@intel.com

Update the SRCREV to include the
82580 Gigabit Ethernet driver from the meta branch
for the romley machine.

Signed-off-by: Kishore Bodke kishore.k.bo...@intel.com
---
 .../recipes-kernel/linux/linux-yocto_3.2.bbappend  |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/meta-romley/recipes-kernel/linux/linux-yocto_3.2.bbappend 
b/meta-romley/recipes-kernel/linux/linux-yocto_3.2.bbappend
index 803e772..517ab50 100644
--- a/meta-romley/recipes-kernel/linux/linux-yocto_3.2.bbappend
+++ b/meta-romley/recipes-kernel/linux/linux-yocto_3.2.bbappend
@@ -6,4 +6,4 @@ KMACHINE_romley  = "romley"
 KBRANCH_romley  = "standard/default/common-pc-64/romley"
 
 SRCREV_machine_pn-linux-yocto_romley ?= "5c7f1c53b5b367858ae6a86c1d4de36d8c71bedb"
-SRCREV_meta_pn-linux-yocto_romley ?= "59f350ec3794e19fa806c1b73749d851f8ebf364"
+SRCREV_meta_pn-linux-yocto_romley ?= "135c75bf9615334b5b8bb9108d612fe7dfbdb901"
-- 
1.7.5.4



Re: [yocto] native recipe and sysroot-destdir troubles

2012-04-13 Thread Philip Tricca
Worked out a solution to this issue.  For the sake of brevity it can be
found here:

http://twobit.us/blog/2012/04/openembedded-yocto-native-hello-world/

Cheers,
- Philip

On 04/12/2012 07:55 PM, Philip Tricca wrote:
 More / better info:
 
 On 04/12/2012 10:44 AM, Philip Tricca wrote:
 I'm working on two new recipes and both are working quite well.  Now I
 need native variants, and online sources indicate this should be done
 through BBCLASSEXTEND = "native".  For one of my recipes this works
 fine; for the other, not so much.

 The error I'm seeing seems to be in the staging of the sysroot-destdir,
 which ends up being empty even though the source code builds fine (the
 image directory has everything expected). do_populate_sysroot seems to
 assume there's a directory structure present, which ends up being empty,
 causing an error when it tries to tar the directory up.  If I create the
 directories do_populate_sysroot expects, the recipe runs to completion
 but sysroot-destdir still ends up being empty and no packages are built.
 
 The image directory is populated as expected (has lib, usr/include etc.
 with the expected files).  The build is failing on populate_sysroot:
 
 CalledProcessError: Command 'tar -cf - -C
 /home/build/poky-edison-6.0/build/tmp/work/i686-linux/libmylib-native-2.1.4-r0/sysroot-destdir///home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux
 -ps . | tar -xf - -C
 /home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux' returned
 non-zero exit status 2 with output tar:
 /home/build/poky-edison-6.0/build/tmp/work/i686-linux/libmylib-native-2.1.4-r0/sysroot-destdir///home/build/poky-edison-6.0/build/tmp/sysroots/i686-linux:
 Cannot chdir: No such file or directory
 tar: Error is not recoverable: exiting now
 
 The failure is obviously due to sysroot-destdir being empty.  The
 question is why this directory is populated for libmylib but not for
 libmylib-native ... they're built from the same recipe.
 
 Stack trace:
 
 ERROR: The stack trace of python calls that resulted in this
 exception/failure was:
 ERROR:   File "sstate_task_postfunc", line 10, in <module>
 ERROR:
 ERROR:   File sstate_task_postfunc, line 4, in sstate_task_postfunc
 ERROR:
 ERROR:   File sstate.bbclass, line 19, in sstate_install
 ERROR:
 ERROR:   File /home/build/poky-edison-6.0/meta/lib/oe/path.py, line
 59, in copytree
 ERROR: check_output(cmd, shell=True, stderr=subprocess.STDOUT)
 ERROR:
 ERROR:   File /home/build/poky-edison-6.0/meta/lib/oe/path.py, line
 121, in check_output
 ERROR: raise CalledProcessError(retcode, cmd, output=output)
 ERROR:
 ERROR: The code that was being executed was:
 ERROR:  0006:bb.build.exec_func(intercept, d)
 ERROR:  0007:sstate_package(shared_state, d)
 ERROR:  0008:
 ERROR:  0009:
 ERROR:  *** 0010:sstate_task_postfunc(d)
 ERROR:  0011:
 ERROR: (file: 'sstate_task_postfunc', lineno: 10, function: <module>)
 ERROR:  0001:
 ERROR:  0002:def sstate_task_postfunc(d):
 ERROR:  0003:shared_state = sstate_state_fromvars(d)
 ERROR:  *** 0004:sstate_install(shared_state, d)
 ERROR:  0005:for intercept in shared_state['interceptfuncs']:
 ERROR:  0006:bb.build.exec_func(intercept, d)
 ERROR:  0007:sstate_package(shared_state, d)
 ERROR:  0008:
 ERROR: (file: 'sstate_task_postfunc', lineno: 4, function:
 sstate_task_postfunc)
 ERROR: Function 'sstate_task_postfunc' failed
 
 Thanks,
 - Philip
 



[yocto] error: runqemu command not found, when $ runqemu qemux86 bzImage-qemux86.bin core-image-lsb-qt3-qemux86.ext3 Thx!

2012-04-13 Thread jack

Hi all! I want to use a pre-built image and run it in the QEMU emulator.
I have downloaded and installed
poky-eglibc-i686-i586-toolchain-gmae-1.1.1.tar.bz2, then I downloaded
bzImage-qemux86.bin and core-image-lsb-qt3-qemux86.ext3.

But when I run:
$ runqemu qemux86 bzImage-qemux86.bin core-image-lsb-qt3-qemux86.ext3
I get an error like below:
error: runqemu command not found
