You can stay on cdh5-trunk.

On Fri, Mar 25, 2016 at 9:43 AM, Nishidha Panpaliya <[email protected]>
wrote:

> Yes, I've started rebasing on cdh5-trunk. As expected, there are a lot of
> changes and conflicts! Is there a release tag or version on cdh5-trunk
> that I should update to?
>
> I will update you on the build status after the merge.
>
> Thanks & Regards,
> Nishidha
>
> From: Tim Armstrong <[email protected]>
> To: Jim Apple <[email protected]>
> Cc: Nishidha Panpaliya/Austin/Contr/IBM@IBMUS, David
> Clissold/Austin/IBM@IBMUS, [email protected], Manish
> Patil/Austin/Contr/IBM@IBMUS, Sudarshan Jagadale/Austin/Contr/IBM@IBMUS
> Date: 03/24/2016 08:08 AM
> Subject: Re: Fw: Debugging Impala code
> ------------------------------
>
>
>
> If you haven't already, I'd suggest rebasing on cdh5-trunk and taking a
> look at my patch here that fixes those tests for later versions of LLVM
> (on x86): http://gerrit.cloudera.org/#/c/2486/ . There were a lot of
> subtle issues, so it will save you a lot of time.
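>
> A minimal sketch of the rebase workflow, assuming the remote is named
> "origin" and your port lives on a local branch (the branch name here is
> hypothetical):
>
>    # replay local work on top of the latest cdh5-trunk
>    git fetch origin
>    git checkout llvm-upgrade-ppc64le   # hypothetical feature branch
>    git rebase origin/cdh5-trunk
>    # fix conflicts, then: git add <files> && git rebase --continue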
>
> - Tim
>
> On Wed, Mar 23, 2016 at 9:45 AM, Jim Apple <[email protected]> wrote:
>
>    You might try looking for your broken tests in the bug tracker. For
>    instance, looking for expr-test leads to
>    https://issues.cloudera.org/browse/IMPALA-2995.
>
>    On Wed, Mar 23, 2016 at 2:33 AM, Nishidha Panpaliya <
>    [email protected]> wrote:
>    Hi Tim and Jim,
>
>    Once again I thank you for your quick help.
>
>    I ran run-backend-tests.sh, and here are the results:
>       89% tests passed, 8 tests failed out of 71
>
>          Total Test time (real) = 979.08 sec
>
>          The following tests FAILED:
>          1 - llvm-codegen-test (SEGFAULT)
>          13 - expr-test (OTHER_FAULT)
>          14 - expr-codegen-test (OTHER_FAULT)
>          19 - data-stream-test (Failed)
>          22 - buffered-block-mgr-test (Failed)
>          32 - tmp-file-mgr-test (Failed)
>          33 - row-batch-serialize-test (SEGFAULT)
>          68 - filesystem-util-test (Failed)
>
>    PFA the full log (see attached file: LastTest.log).
>
>    I also investigated some of the failures:
>       - In filesystem-util-test: the cause of this failure is that the
>         tests are being run as root. I could not run these tests as a
>         non-root user, but I tested a sample application that does exactly
>         the same thing as the CreateDirectory test of filesystem-util-test,
>         and it passed as a non-root user. I also verified this with the
>         Linux commands mkdir, chmod, and rmdir as both root and non-root
>         users.
>       - In tmp-file-mgr-test: this failure looks the same, as the test
>         also does a chmod and then tries allocating space. Since I'm
>         running these tests as root, I would not get a failure accessing
>         the dir/file even after removing write permissions.
>       - In llvm-codegen-test: I tried debugging this test and found a
>         crash in llvm::Type::getVoidTy(...).
>    I'll keep investigating the other crashes and segmentation faults, but
>    if any of the failures look familiar or exist on x86 platforms, please
>    let us know. (A small reproduction of the root-permission issue is
>    sketched below.)
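>
>    A minimal shell sketch of the root-permission behaviour described above
>    (the scratch directory name is hypothetical): as root, chmod 000 does
>    not block access, so a test that expects "Permission denied" fails.
>
>       mkdir /tmp/perm-check        # hypothetical scratch directory
>       chmod 000 /tmp/perm-check
>       touch /tmp/perm-check/f      # non-root: Permission denied;
>                                    # root: succeeds, so the test fails
>       rm -rf /tmp/perm-check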
>
>    Other news: we have been given approval to share our patches. Soon I'll
>    be uploading a patch with the LLVM upgrade work.
>
>    Thanks,
>    Nishidha
>
>
>    From: Tim Armstrong <[email protected]>
>    To: Jim Apple <[email protected]>
>    Cc: [email protected],
>    Sudarshan Jagadale/Austin/Contr/IBM@IBMUS, Manish
>    Patil/Austin/Contr/IBM@IBMUS, David Clissold/Austin/IBM@IBMUS,
>    Nishidha Panpaliya/Austin/Contr/IBM@IBMUS
>    Date: 03/21/2016 09:10 PM
>    Subject: Re: Fw: Debugging Impala code
>    ------------------------------
>
>
>
>    We generally run the full test suite on machines with at least 32GB of
>    memory: it's pretty memory hungry because you have 3 Impalads running
>    side-by-side. I believe we tend to run the full data load on machines with
>    even more memory. You can start the test cluster with a single impalad
>    before running tests (./bin/start-impala-cluster -s1 &&
>    ./tests/run-tests.py). Some tests will fail since they assume 3 Impalads
>    but most should work ok.
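>
>    For reference, that reduced-memory run as two separate steps (flags as
>    given above; -s1 starts a single impalad):
>
>       ./bin/start-impala-cluster -s1   # one impalad instead of three
>       ./tests/run-tests.py             # expect some 3-impalad tests to fail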
>
>    Starting with the backend tests sounds like a good idea - they do
>    exercise some of the codegen and other architecture-dependent parts that
>    will likely be tricky.
>
>    - Tim
>
>    On Mon, Mar 21, 2016 at 5:09 AM, Jim Apple <*[email protected]*
>    <[email protected]>> wrote:
>       I think you should be able to run the backend tests without data
>          loading:
>
>          ./bin/run-backend-tests.sh
>          # or
>          ctest
>
>          As in the frontend tests, you can specify which test you want to
>          run:
>
>          ctest --output-on-failure -R expr-test # also shows what broke,
>          if anything
>
>          To build only the backend tests, run:
>
>          make be-test
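>
>          If a test segfaults, you can also run the binary directly under
>          gdb to get a backtrace; the path below is an assumption based on
>          a typical Impala debug build tree:
>
>             gdb --args be/build/debug/exprs/expr-test
>             # (gdb) run, then bt after the crash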
>
>          On Mon, Mar 21, 2016 at 4:12 AM, Nishidha Panpaliya <
>          [email protected]> wrote:
>          Thanks Jim and Tim for your replies. Really appreciate your
>          co-operation and promptness.
>
>          I've a few more queries -
>
>          1. What is the memory requirement for Impala to run all the
>          tests? Currently, I see that test data creation and loading
>          consumes almost 7GB of RAM, after which it stops with a bad_alloc
>          exception. I've already requested an increase in my VM's RAM, but
>          I just wanted to know whether 16GB will suffice.
>
>          2. Can we skip data loading at this stage and simply run basic
>          unit tests first? Or is there any setting by which we can lower
>          the volume of test data being generated/loaded? Once the basic
>          tests are working, we can focus on the data-load testing.
>
>          Also, we wish to have a call with you to discuss all this. We
>          are located in India.
>
>          Thanks,
>          Nishidha
>
>
>          From: Sudarshan Jagadale/Austin/Contr/IBM
>          To: Nishidha Panpaliya/Austin/Contr/IBM@IBMUS
>          Cc: *[email protected]* <[email protected]>, Manish
>          Patil/Austin/Contr/IBM@IBMUS
>          Date: 03/18/2016 11:04 AM
>          Subject: Fw: Debugging Impala code
>          ------------------------------
>
>
>
>          Thanks and Regards
>          Sudarshan Jagadale
>          Power Open Source Solutions
>          ----- Forwarded by Sudarshan Jagadale/Austin/Contr/IBM on
>          03/18/2016 11:04 AM -----
>
>          From: Tim Armstrong <[email protected]>
>          To: [email protected]
>          Cc: Sudarshan Jagadale/Austin/Contr/IBM@IBMUS
>          Date: 03/17/2016 10:39 PM
>          Subject: Re: Debugging Impala code
>          ------------------------------
>
>
>
>          Was it the impalad process that crashed? If so, there are a few
>          places you can check:
>             - Look in /tmp/impalad.ERROR, /tmp/impalad_node1.ERROR and
>               /tmp/impalad_node2.ERROR for error messages. If it hit an
>               assertion, you will get the message in there.
>             - Look in the equivalent INFO logs for other error messages
>               (for some crashes, there is info sent to INFO but not
>               ERROR).
>             - Look for hs_err_pid*.log files in the directory you ran
>               Impala from. These are crash reports from the embedded JVM
>               in the impalad process.
>             - Get Impala to produce a core dump (make sure you have
>               ulimit -c unlimited set when starting the cluster; I have it
>               set in my .bashrc file), then debug it with gdb.
>
>
>          On Thu, Mar 17, 2016 at 8:59 AM, Jim Apple <
>          [email protected]> wrote:
>             I believe Hive is sometimes used for data loading, though I'm
>             not sure.
>
>             I haven't debugged impala during data loading, but when I do
>             need to debug the backend, I often do
>
>             sudo gdb -p $(ps -C impalad -o pid | tail -1 | awk '{print $1}')
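>
>             For reference: ps -C impalad -o pid lists the PIDs of the
>             impalad processes, tail -1 takes the last one, and gdb -p
>             attaches gdb to that PID. A shorter equivalent, assuming
>             pgrep is available on the box:
>
>                sudo gdb -p $(pgrep -n impalad)   # -n: newest impalad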
>
>
>             On Thu, Mar 17, 2016 at 8:50 AM, Nishidha Panpaliya <
>             [email protected]> wrote:
>
>                      > Hi All,
>                      >
>                      > I'm able to build Impala on Ubuntu ppc64le, but I'm
>                      > getting crashes while loading test data.
>                      >
>                      > I wanted to know how you normally debug Impala code
>                      > while loading test data, before running unit tests.
>                      > Other than a core dump, what are the other ways to
>                      > find the causes of crashes in Impala?
>                      >
>                      > Thanks,
>                      > Nishidha
>                      >
>
>
>
>
