Re: Issues with Android toolchain build

2011-05-09 Thread Zach Pfeffer
Paul, thanks for getting some builds going.

On 8 May 2011 18:40, Paul Sokolovsky paul.sokolov...@linaro.org wrote:
 (sorry, first time sent from wrong email, don't know if that'll get
 thru)

 Hello Android team,

 I was working on making the Android toolchain buildable using the
 Android build service, and finally I was able to do successful and
 reproducible builds - of the pristine AOSP bare-metal toolchain so far
 (https://android-build.linaro.org/builds/~pfalcon/aosp-toolchain/).
 There were a few issues which needed to be investigated and resolved,
 and which I would like to discuss here:

 1. make -jN breakage

 Android build service builds on EC2 XLARGE instances with 16
 concurrent make jobs (-j16). This invariably leads to a build failure
 sooner or later (the exact location depends on other options and may
 be non-deterministic altogether). The failure is "error: Link tests
 are not allowed after GCC_NO_EXECUTABLES.", which sends issue-hunting
 down the wrong trail (sysroot issues, etc.), but after some
 experiments I reduced it to the -j level: with -j1 the build went
 past the usual failure points reliably.

 2. Lack of DESTDIR support

 There's a standard GNU autotools variable, DESTDIR, for installing a
 package into a directory other than $prefix. It is supported by gcc,
 binutils, etc., but not by Android's own toolchain/build project. The
 usual local-use trick is to pass a different prefix just for the make
 install target, and that's what toolchain/build's README suggests. My
 only concern is the cleanroom-ness of the results - suppose make
 install suddenly wants to rebuild something (libtool used to have
 such a habit); then the new prefix may be embedded into some
 executable and hit beyond the usual usage pattern (e.g. with a
 non-English locale). Still, this is more of a theoretical risk, which
 I as a build engineer should note, but not something to worry much
 about.
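The GNU convention Paul describes can be sketched with a toy Makefile (the paths and the fake tool name are illustrative, not taken from the actual toolchain/build makefiles): DESTDIR prepends a staging root at install time, while the configured $prefix stays baked into the build.

```shell
set -e
demo=$(mktemp -d)
cd "$demo"
# Toy install target following the GNU convention: paths are composed
# as $(DESTDIR)$(prefix), so DESTDIR relocates the install tree without
# changing the prefix recorded at configure/build time.
printf 'prefix = /usr/local\ninstall:\n\tinstall -d $(DESTDIR)$(prefix)/bin\n\ttouch $(DESTDIR)$(prefix)/bin/arm-eabi-gcc\n' > Makefile
make -s install DESTDIR="$demo/stage"
ls "$demo/stage/usr/local/bin"
```

By contrast, overriding the prefix itself (`make install prefix=/tmp/stage`) changes the path seen by any install-time rebuild, which is exactly the cleanroom risk described above.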

 3. libstdc++v3 build

 toolchain/build's README says that the default is not to build
 libstdc++v3 and that it must be enabled explicitly. But in the
 current master I found that not to be the case - it gets enabled by
 default. And its build requires a sysroot, so I had to disable it
 explicitly for the bare-metal build.

Sounds like out-of-date documentation.

 4. sysroot source

 So, to build a full-fledged toolchain, we need to supply a sysroot.
 What should be the source of it? The Android build service script I
 started from extracts it from an official Android SDK release. Is
 that a good enough source? I guess we'd miss some unreleased
 optimizations that way (the byteswap ARMv6 optimizations come to
 mind). Otherwise, what should be used instead? The obvious choice is
 to build the Android target images, then build the toolchain against
 that tree. But that would be too long and expensive. Should we
 prepare and tar a sysroot and provide it as an extra build artifact
 of the Android target builds? Also, can there be any
 machine-specificity, or is it for sure one generic sysroot per
 Android version (with arch-specific things handled via #ifdefs)?
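For reference, the way a sysroot usually enters the picture is GCC's configure-time --with-sysroot switch. A rough sketch, purely illustrative (the paths are made up and this is not runnable without the gcc sources and a sysroot tree in place):

```shell
# Illustrative only: configure a cross gcc against a pre-extracted
# sysroot (e.g. unpacked from an SDK/NDK release). Paths and layout
# are assumptions, not taken from the actual build scripts.
SYSROOT=$HOME/sysroots/android-arm
../gcc-src/configure \
    --target=arm-linux-androideabi \
    --prefix=/opt/android-toolchain \
    --with-sysroot="$SYSROOT"
make -j1        # see issue 1: parallel make is unreliable here
make install
```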


 Of these 4, the first 3 are upstream-related bug reports with known
 workarounds. I wanted to write them down, but I'm not sure more can
 be done about them - if you think it would be useful to submit them
 as bugs at lp/linaro-android (or directly upstream?), I can do it.
 The sysroot issue of course requires input/discussion - in mail or at
 an LDS session.

I think bug reports would be useful; would you file them? We'll chat at LDS.



 Thanks,
  Paul                          mailto:pmis...@gmail.com


 --
 Best Regards,
 Paul


___
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev


Re: [PATCH v3 00/12] mmc: use nonblock mmc requests to minimize latency

2011-05-09 Thread Philip Rakity

Hi Per,

We noticed on some of our systems that using ADMA, or SDMA with a
bounce buffer, is significantly faster than plain SDMA.

I believe ADMA will do large transfers.  Another data point.

Philip

On May 7, 2011, at 12:14 PM, Per Forlin wrote:

 How significant is the cache maintenance overhead?
 It depends: eMMC is much faster now than a few years ago, and cache
 maintenance costs more due to multiple cache levels and speculative
 cache pre-fetch. In relative terms, the cost of handling the caches
 has increased and is now a bottleneck when dealing with fast eMMC
 together with DMA.
 
 The intention of introducing non-blocking mmc requests is to minimize
 the time between one mmc request ending and the next one starting. In
 the current implementation the MMC controller is idle while
 dma_map_sg and dma_unmap_sg are processing. Introducing non-blocking
 mmc requests makes it possible to prepare the caches for the next job
 in parallel with an active mmc request.
 
 This is done by making issue_rw_rq() non-blocking.
 The increase in throughput is proportional to the time it takes to
 prepare a request (the major part of the preparation is dma_map_sg
 and dma_unmap_sg) and to how fast the memory is. The faster the
 MMC/SD is, the more significant the prepare time becomes.
 Measurements on U5500 and Panda, on eMMC and SD, show a significant
 performance gain for large reads when running in DMA mode. In the PIO
 case the performance is unchanged.
 
 There are two optional hooks, pre_req() and post_req(), that the host
 driver may implement in order to move work to before and after the
 actual mmc_request function is called. In the DMA case pre_req() may
 do dma_map_sg() and prepare the dma descriptor, and post_req() runs
 dma_unmap_sg().
 
 Details on measurements from IOZone and mmc_test:
 https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
 
 Under consideration:
 * Make pre_req and post_req private to core.c.
 * Generalize implementation and make it available for SDIO. 
 
 Changes since v2:
 * Fix compile warnings in core.c and block.c
 * Simplify max transfer size in mmc_test
 * set TASK_RUNNING in queue.c before issue_req()
 
 Per Forlin (12):
  mmc: add none blocking mmc request function
  mmc: mmc_test: add debugfs file to list all tests
  mmc: mmc_test: add test for none blocking transfers
  mmc: add member in mmc queue struct to hold request data
  mmc: add a block request prepare function
  mmc: move error code in mmc_block_issue_rw_rq to a separate function.
  mmc: add a second mmc queue request member
  mmc: add handling for two parallel block requests in issue_rw_rq
  mmc: test: add random fault injection in core.c
  omap_hsmmc: use original sg_len for dma_unmap_sg
  omap_hsmmc: add support for pre_req and post_req
  mmci: implement pre_req() and post_req()
 
 drivers/mmc/card/block.c  |  493 +++--
 drivers/mmc/card/mmc_test.c   |  340 +++-
 drivers/mmc/card/queue.c  |  180 ++--
 drivers/mmc/card/queue.h  |   31 ++-
 drivers/mmc/core/core.c   |  132 ++-
 drivers/mmc/core/debugfs.c|5 +
 drivers/mmc/host/mmci.c   |  146 +++-
 drivers/mmc/host/mmci.h   |8 +
 drivers/mmc/host/omap_hsmmc.c |   90 +++-
 include/linux/mmc/core.h  |9 +-
 include/linux/mmc/host.h  |   13 +-
 lib/Kconfig.debug |   11 +
 12 files changed, 1174 insertions(+), 284 deletions(-)
 
 -- 
 1.7.4.1
 
 --
 To unsubscribe from this list: send the line unsubscribe linux-mmc in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html




LDS

2011-05-09 Thread David Rusling

All,
for those of you who are here, welcome (especially any newbies);
hope you have a good week! For those of you not here, you can join
in, as all of the sessions are streamed. You can also ask questions
via IRC. A good link to the Linaro sessions is here:


http://summit.linaro.org/uds-o/

Dave

--
David Rusling, CTO
http://www.linaro.org

Linaro
Lockton House
Clarendon Rd
Cambridge
CB2 8FH



Re: [PATCH v3 00/12] mmc: use nonblock mmc requests to minimize latency

2011-05-09 Thread Per Forlin
On 9 May 2011 04:05, Philip Rakity prak...@marvell.com wrote:

 Hi Per,

 We noticed on some of our systems that using ADMA, or SDMA with a
 bounce buffer, is significantly faster than plain SDMA.

I have not done any work with ADMA or SDMA. Where should I look to
read more about it?
Are these the right places? DMA: iop-dma.c and imx-sdma.c; MMC: sdhci.c.

 I believe ADMA will do large transfers.  Another data point.

 Philip
Thanks,
Per


 On May 7, 2011, at 12:14 PM, Per Forlin wrote:

 [cover letter snipped]





Re: [PATCH v3 00/12] mmc: use nonblock mmc requests to minimize latency

2011-05-09 Thread Philip Rakity

On May 9, 2011, at 5:34 AM, Per Forlin wrote:

 On 9 May 2011 04:05, Philip Rakity prak...@marvell.com wrote:
 
 Hi Per,
 
  We noticed on some of our systems that using ADMA, or SDMA with a
  bounce buffer, is significantly faster than plain SDMA.
 
  I have not done any work with ADMA or SDMA. Where should I look to
  read more about it?
  Are these the right places? DMA: iop-dma.c and imx-sdma.c; MMC: sdhci.c.

sdhci.c for ADMA and SDMA.

The spec is at
http://www.sdcard.org/developers/tech/sdcard/pls/simplified_specs/

Version 3 discusses ADMA.

 
 I believe ADMA will do large transfers.  Another data point.
 
 Philip
 Thanks,
 Per
 
 
 On May 7, 2011, at 12:14 PM, Per Forlin wrote:
 
  [cover letter snipped]
 
 




Re: Issues with Android toolchain build

2011-05-09 Thread Paul Sokolovsky
Hello Jim,

On Mon, 9 May 2011 19:01:43 +0800
Jim Huang jim.hu...@linaro.org wrote:

 On 9 May 2011 00:40, Paul Sokolovsky paul.sokolov...@linaro.org
 wrote:
  (sorry, first time sent from wrong email, don't know if that'll get
  thru)
 
  Hello Android team,
 
 hi Paul,
 
 It is my pleasure to discuss with you.
 
  I was working on making the Android toolchain buildable using the
  Android build service, and finally I was able to do successful and
  reproducible builds - of the pristine AOSP bare-metal toolchain so
  far (https://android-build.linaro.org/builds/~pfalcon/aosp-toolchain/).
  There were a few issues which needed to be investigated and resolved,
  and which I would like to discuss here:
 
 
 In my humble opinion, the AOSP toolchain ought to be kept the same as
 Google's binary delivery, since we have no official documentation
 about Google's build environment and no detailed instructions.

I agree. But this is more about performing reliable builds of an
Android toolchain taken from any source, of which AOSP is one (a known
good one, apparently). It just happens that a pristine AOSP toolchain
build is already here (a Linaro-optimized one will follow, of course).

 Linaro wiki contains one comprehensive page:
 https://wiki.linaro.org/Platform/Android/UpstreamToolchain

Well, doing continuous builds in the cloud uncovered a bunch of issues
not discussed there ;-).
 
 Although we can build it on our own, it is unofficial and not fully
 the same as Google's.
 
  1. make -jN breakage
 
  Android build service builds on EC2 XLARGE instances with 16
  concurrent make jobs (-j16). This invariably leads to a build
  failure sooner or later (the exact location depends on other options
  and may be non-deterministic altogether). The failure is "error:
  Link tests are not allowed after GCC_NO_EXECUTABLES.", which sends
  issue-hunting down the wrong trail (sysroot issues, etc.), but
  after some experiments I reduced it to the -j level: with -j1 the
  build went past the usual failure points reliably.
 
 Agree.  Using -j1 is safe.

Ok, is it a well-known fact that using -jN with N > 1 is unsafe? If
not, should we do something about it?

  2. Lack of DESTDIR support
 
  There's a standard GNU autotools variable, DESTDIR, for installing a
  package into a directory other than $prefix. It is supported by gcc,
  binutils, etc., but not by Android's own toolchain/build project.
  The usual local-use trick is to pass a different prefix just for the
  make install target, and that's what toolchain/build's README
  suggests. My only concern is the cleanroom-ness of the results -
  suppose make install suddenly wants to rebuild something (libtool
  used to have such a habit); then the new prefix may be embedded into
  some executable and hit beyond the usual usage pattern (e.g. with a
  non-English locale). Still, this is more of a theoretical risk,
  which I as a build engineer should note, but not something to worry
  much about.
 
 If you were looking at recent changes in AOSP, more targets are going
 to be installed, for unknown reasons.
 
 I don't think it is a problem, though, once we have agreed
 conventions.
 
  3. libstdc++v3 build
 
  toolchain/build's README says that the default is not to build
  libstdc++v3 and that it must be enabled explicitly. But in the
  current master I found that not to be the case - it gets enabled by
  default. And its build requires a sysroot, so I had to disable it
  explicitly for the bare-metal build.
 
  4. sysroot source
 
 
 Yes, you need to supply a sysroot while building the toolchain.
 
 You can get it from either the NDK or a built root file system.

Are there any differences between using one vs. the other? Are there
any risks in using the NDK's?


There are sessions dedicated to Android continuous builds and/or
toolchains; let's discuss these questions within their scope:

https://blueprints.launchpad.net/linaro-android/+spec/linaro-android-o-continuous-integration
https://blueprints.launchpad.net/linaro-android/+spec/linaro-android-o-toolchain-distribution

 
  So, to build a full-fledged toolchain, we need to supply a sysroot.
  What should be the source of it?
 
 Thanks,
 -jserv



-- 
Best Regards,
Paul



Linaro Technical Showcase

2011-05-09 Thread Stephen Doel
Hi All,

 

As a reminder, please make sure you head to the Linaro Technical Showcase in
the Grand Ballroom on Tuesday from 7pm. 

 

Here are 3 great reasons to be there:

1.   Opportunity to see and chat about at least 19 Linaro based demos
(https://wiki.linaro.org/Events/2011-05-LDS/Showcase#Demo_list)

. BTW - if you'd still like to do a demo, it's not too late. Just let
me, Arwen or Michael Opdenacker know before 4pm Tuesday

2.   Vote for your favourite demo - the first 50 voters get a $50
discount voucher to purchase a Freescale Quick Start board

3.   See our illustrious VP Engineering in a tuxedo!

 

Have a great evening.

 

Thx

 

Stephen Doel

Chief Operating Officer

Linaro Ltd

+44 77 66 014 247

 



Re: Linaro Technical Showcase

2011-05-09 Thread Genesi USA
Hi Stephen, how about the first 50 voters get a $100 discount toward
the purchase of an Efika MX 3G Smartbook. :)

www.genesi-usa.com/store/details/16

Best wishes for a successful Linaro@UDS. :)

Best regards,
Raquel and Bill
Genesi


 On Mon, May 9, 2011 at 2:47 PM, Stephen Doel stephen.d...@linaro.org wrote:

 [quoted message snipped]




-- 
http://bbrv.blogspot.com/