2a.
Exhaustive is a superset of core. We run the core tests pre-commit on
CentOS 6 + HDFS and the full exhaustive tests post-commit on a wider range
of configurations. We don't release Impala unless all exhaustive tests
pass on all configurations we test (if there's a valid reason why
something doesn't work on a given platform, we skip that test).

2b.
Exhaustive is a superset of core, so if exhaustive passes then core should
pass too. The exhaustive build takes much longer than core, so it makes
sense to run it less frequently (e.g. we run it nightly for some
configurations and weekly for others).

2c.
Confusingly, the core/exhaustive data load doesn't map to core/exhaustive
tests. We actually use the same data load for all test configurations. See
testdata/bin/create-load-data.sh for how the core/exhaustive data load is
invoked. E.g. we load the functional data with exhaustive (i.e. all
supported file formats) but the larger tpc-h/tpc-ds data sets with only a
subset of file formats.


On Wed, Jul 20, 2016 at 9:39 PM, Valencia Serrao <vser...@us.ibm.com> wrote:

> Hi Tim,
>
> Thank you for the insight on the issues.
>
> 1. *BE test -issue: benchmark-test hangs*
> As you suggested, I increased the "batch_size" value up to 125000000;
> however, sw.ElapsedTime() still does not increase inside the while loop and
> it again gets caught in an infinite loop. The optimization level seems to
> cause this behavior. I am still working on this.
>
> 2. *Custom cluster tests:* skipping some tests in test_spilling
> I found in the logs that the "test_spilling" test was skipped as the
> exploration strategy was set to "core" on our Impala setup.
>
> Some questions here:
> a. From an Impala release perspective, how significant are these strategies
> (core, exhaustive, etc.)?
> b. Do we have to test with all combinations (core|release mode build and
> exhaustive|release mode build)?
> c. Does the exploration strategy selection also affect the test data
> loaded? (Is the data loaded different in each exploration strategy?)
>
> Please let me know your comments.
>
> Regards,
> Valencia
>
>
> From: Tim Armstrong <tarmstr...@cloudera.com>
> To: Valencia Serrao/Austin/Contr/IBM@IBMUS
> Cc: dev@impala.incubator.apache.org, Manish Patil/Austin/Contr/IBM@IBMUS,
> Nishidha Panpaliya/Austin/Contr/IBM@IBMUS, Sudarshan
> Jagadale/Austin/Contr/IBM@IBMUS
> Date: 07/19/2016 09:11 PM
>
> Subject: Re: Issues with tests in Release-mode Impala build
> ------------------------------
>
>
>
> With 2, it's a little strange that test_spilling is being skipped - I
> think that one should be run.
>
> On Tue, Jul 19, 2016 at 8:39 AM, Tim Armstrong <*tarmstr...@cloudera.com*
> <tarmstr...@cloudera.com>> wrote:
>
>    It looks like the benchmark-test issue is something to do with the
>    granularity of the clock. It can get stuck in an infinite loop if the
>    function call below always takes less than the smallest measurable unit of
>    time (i.e. Start() and Stop() are called in the same time quantum).
>
>      while (sw.ElapsedTime() < target_cycles) {
>        sw.Start();
>        function(batch_size, args);
>        sw.Stop();
>        iters += batch_size;
>      }
>
>    We use Intel's rdtsc instruction for a timer here, so I guess whatever
>    PPC alternative you used may work a little differently. This is probably
>    ok, but it's possible that it could affect timers elsewhere in Impala.
>
>    One solution would be to increase the default batch size.
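>
>    To make that concrete, here is a minimal standalone sketch. The
>    CoarseStopWatch, its 10 ms tick size and the target below are all made
>    up for illustration (it is not Impala's real StopWatch); it just fakes a
>    clock that only advances in coarse ticks, so a Start()/Stop() pair that
>    completes within one tick accumulates zero elapsed time and the
>    surrounding loop never makes progress towards its target:
>
>      #include <chrono>
>      #include <cstdint>
>      #include <iostream>
>
>      // Stand-in for a stopwatch built on a coarse cycle counter: every
>      // read is truncated down to a 10 ms "tick".
>      struct CoarseStopWatch {
>        static constexpr int64_t kTickNanos = 10 * 1000 * 1000;  // 10 ms
>        int64_t total = 0, start = 0;
>        static int64_t Now() {
>          int64_t ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
>              std::chrono::steady_clock::now().time_since_epoch()).count();
>          return (ns / kTickNanos) * kTickNanos;  // truncate to last tick
>        }
>        void Start() { start = Now(); }
>        void Stop() { total += Now() - start; }
>        int64_t ElapsedTime() const { return total; }
>      };
>
>      int main() {
>        CoarseStopWatch sw;
>        const int64_t target = 500 * 1000 * 1000;  // pretend 0.5 s target
>        volatile int64_t sink = 0;
>        int64_t iters = 0, batch_size = 10;
>        // Same shape as the Benchmark loop above: with a small batch_size
>        // every Start()/Stop() pair lands inside a single tick, so
>        // ElapsedTime() stays at 0 and only the safety cap on iters lets
>        // this demo terminate. A batch_size large enough for each call to
>        // span at least one tick is what makes the loop finish.
>        while (sw.ElapsedTime() < target && iters < 1000000) {
>          sw.Start();
>          for (int64_t i = 0; i < batch_size; ++i) sink = sink + i;
>          sw.Stop();
>          iters += batch_size;
>        }
>        std::cout << "iters=" << iters
>                  << " elapsed=" << sw.ElapsedTime() << " ns" << std::endl;
>        return 0;
>      }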
>
>    On Tue, Jul 19, 2016 at 5:29 AM, Valencia Serrao <*vser...@us.ibm.com*
>    <vser...@us.ibm.com>> wrote:
>    Hi Tim,
>
>    Following are some observations:
>
>    1. *BE test -issue: benchmark-test hangs*
>    Putting trace logs like below in benchmark.cc:
>
>      while (sw.ElapsedTime() < target_cycles) {
>        LOG(INFO) << " in while(sw.ElapsedTime() < target_cycles)";
>        sw.Start();
>        function(batch_size, args);
>        sw.Stop();
>        iters += batch_size;
>        LOG(INFO) << " In while:::::::: sw.ElapsedTime() " << sw.ElapsedTime();
>        LOG(INFO) << " In while:::::::: iters = " << iters;
>      }
>
>    In Release mode, I observed that *sw.ElapsedTime()* remains constant and
>    never increases, so the code is caught in an infinite loop and
>    benchmark-test hangs. In Debug mode, *sw.ElapsedTime()* keeps increasing,
>    so the code comes out of the while loop and benchmark-test doesn't hang.
>    I'm working on this issue; however, if you could give any pointers
>    about it, that would be really great.
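>
>    A quick standalone check for whether the underlying counter itself is at
>    fault (this is only a hypothetical sketch, not part of the Impala tree,
>    and it assumes the PPC port reads the Time Base register via GCC's
>    __builtin_ppc_get_timebase or something close to it) would be to build
>    the snippet below with the same -O3 release flags and confirm the two
>    reads differ; if the delta is ever zero, StopWatch would also see zero
>    elapsed time:
>
>      #include <cstdint>
>      #include <cstdio>
>
>      int main() {
>        uint64_t t0 = __builtin_ppc_get_timebase();  // read PPC Time Base
>        volatile uint64_t sink = 0;
>        // Busy work the optimizer cannot drop, since sink is volatile.
>        for (int i = 0; i < 1000000; ++i) sink = sink + i;
>        uint64_t t1 = __builtin_ppc_get_timebase();
>        printf("timebase delta over 1M iterations: %llu\n",
>               (unsigned long long)(t1 - t0));
>        return t1 > t0 ? 0 : 1;
>      }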
>
>    2. *Custom cluster tests: *I have included the code changes in my
>    branch and many of the earlier 36 skipped tests have now executed and they
>    pass, but with the following exception (when compared to the output in the
>    *https://issues.cloudera.org/browse/IMPALA-3614*
>    <https://issues.cloudera.org/browse/IMPALA-3614> ):
>    custom_cluster/test_spilling.py sss.
>
> * Current CC test stats:* 34 passed, 7 skipped, 3 warnings.
>
>    3.* End-to-End tests:* I couldn't dive into the EE tests. I will
>    surely let you know more about them as soon as I'm done with them.
>
>    Regards,
>    Valencia
>
>
>    From: Valencia Serrao/Austin/Contr/IBM
>    To: Tim Armstrong <*tarmstr...@cloudera.com* <tarmstr...@cloudera.com>>
>    Cc: *dev@impala.incubator.apache.org* <dev@impala.incubator.apache.org>,
>    Manish Patil/Austin/Contr/IBM@IBMUS, Nishidha
>    Panpaliya/Austin/Contr/IBM@IBMUS, Sudarshan
>    Jagadale/Austin/Contr/IBM@IBMUS
>    Date: 07/19/2016 10:26 AM
>    Subject: Re: Issues with tests in Release-mode Impala build
>    ------------------------------
>
>
>    Hi Tim,
>
>    Thank you for the information.
>
>    I am working on the pointers you have given and also on the fix for
>    Custom cluster (skipped) tests. I will inform you on the findings.
>
>    Regards,
>    Valencia
>
>
>
>
>    From: Tim Armstrong <*tarmstr...@cloudera.com*
>    <tarmstr...@cloudera.com>>
>    To: *dev@impala.incubator.apache.org* <dev@impala.incubator.apache.org>
>    Cc: Valencia Serrao/Austin/Contr/IBM@IBMUS, Nishidha
>    Panpaliya/Austin/Contr/IBM@IBMUS, Sudarshan
>    Jagadale/Austin/Contr/IBM@IBMUS, Manish Patil/Austin/Contr/IBM@IBMUS
>    Date: 07/18/2016 09:19 PM
>    Subject: Re: Issues with tests in Release-mode Impala build
>    ------------------------------
>
>
>
>    Hi Valencia,
>
>    1. We run tests in release mode nightly and it doesn't look like we've
>    seen this hang. I'd suggest you attach a debugger to the benchmark-test
>    process and see what it's doing. It could either be an actual hang, or an
>    infinite/very long loop. That test is only testing our benchmarking
>    utilities, not Impala itself, but IMO it's always good to understand why
>    something like that is happening in case there's a more general problem.
>    2. Sounds like *https://issues.cloudera.org/browse/IMPALA-3614*
>    <https://issues.cloudera.org/browse/IMPALA-3614> . Have you got the
>    fix for that in your branch?
>    3. Look forward to hearing more.
>
>    Cheers,
>    Tim
>
>    On Mon, Jul 18, 2016 at 2:49 AM, Valencia Serrao <*vser...@us.ibm.com*
>    <vser...@us.ibm.com>> wrote:
>
>          Hi All,
>
>          I have built Impala in Release mode. I executed the tests;
>          following are
>          some observations:
>
>          1. BE test: The test execution hangs at the "benchmark-test".
>          There are no
>          errors shown and it hangs at this test. Earlier, when running the
>          BE tests in
>          debug mode, this issue did not occur.
>          2. Custom Cluster test: 5 tests passed and 36 tests skipped. All
>          of the
>          skipped cases give the message: "INSERT not implemented for S3"
>          3. EE tests: I've also seen some failures here (yet to check the
>          details)
>
>          As for FE and JDBC tests, everything works fine; the release mode
>          test output
>          is the same as the debug mode test output.
>
>          Is the "benchmark-test" test known to fail in Release mode, or
>          am I missing
>          some configuration? Also, I want to understand the
>          significance of
>          this test, in case we could ignore it and move ahead.
>
>
>
>          Regards,
>          Valencia
>
>
>
>
>
>
>
>
