[jira] [Updated] (IMPALA-7582) Generate junit style symptoms for issues during cluster startup (minicluster and impala)

2018-09-19 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7582:
---
Summary: Generate junit style symptoms for issues during cluster startup 
(minicluster and impala)  (was: Generate symptoms for issues during cluster 
startup (minicluster and impala))

> Generate junit style symptoms for issues during cluster startup (minicluster 
> and impala)
> 
>
> Key: IMPALA-7582
> URL: https://issues.apache.org/jira/browse/IMPALA-7582
> Project: IMPALA
>  Issue Type: Task
>  Components: Infrastructure
>Reporter: nithya
>Assignee: nithya
>Priority: Major
>
> Generate symptoms for steps in cluster startup (see testdata/bin/run-all.sh)
> As a sub-task of https://issues.apache.org/jira/browse/IMPALA-7399, the
> run-all.sh script will be updated to generate JUnit-style output that can be
> used to produce test reports. These reports can then be used to triage any
> failures/errors that happened during "Starting mini cluster" via the
> run-all.sh script.
>  
> {noformat}
> <testsuite
>      name="generate_junitxml.Run_sentry_service.run_sentry" skipped="0" tests="1"
>      time="0" timestamp="2018-09-17 18:06:34+00:00" url="None">
>     <testcase
>          name="run_sentry">
>     </testcase>
> </testsuite>
> {noformat}
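
The XML above is what a JUnit consumer (e.g. Jenkins) picks up. A minimal
sketch of emitting such a file from Python follows; the helper name
report_step and its arguments are hypothetical, not the actual
generate_junitxml API in the Impala repo.

{code:python}
# Hypothetical sketch: write a one-test JUnit XML "symptom" for a cluster
# startup step so CI can render it in a test report.
import xml.etree.ElementTree as ET
from datetime import datetime

def report_step(suite_name, case_name, error_msg=None):
    suite = ET.Element("testsuite", {
        "name": suite_name,
        "tests": "1",
        "skipped": "0",
        "errors": "1" if error_msg else "0",
        "failures": "0",
        "time": "0",
        "timestamp": datetime.utcnow().isoformat(),
    })
    case = ET.SubElement(suite, "testcase", {"name": case_name})
    if error_msg:
        # An <error> child marks the startup step as broken in the report.
        ET.SubElement(case, "error", {"message": error_msg})
    ET.ElementTree(suite).write("TEST-%s.xml" % suite_name,
                                encoding="utf-8", xml_declaration=True)

report_step("generate_junitxml.Run_sentry_service.run_sentry", "run_sentry")
{code}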






[jira] [Updated] (IMPALA-7582) Generate symptoms for issues during cluster startup (minicluster and impala)

2018-09-19 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7582:
---
Description: 
Generate symptoms for steps in cluster startup (see testdata/bin/run-all.sh)

As a sub-task of https://issues.apache.org/jira/browse/IMPALA-7399, the
run-all.sh script will be updated to generate JUnit-style output that can be
used to produce test reports. These reports can then be used to triage any
failures/errors that happened during "Starting mini cluster" via the
run-all.sh script.

 
{noformat}
<testsuite
     name="generate_junitxml.Run_sentry_service.run_sentry" skipped="0" tests="1"
     time="0" timestamp="2018-09-17 18:06:34+00:00" url="None">
    <testcase
         name="run_sentry">
    </testcase>
</testsuite>
{noformat}

  was:Generate symptoms for steps in cluster startup (see 
testdata/bin/run-all.sh)


> Generate symptoms for issues during cluster startup (minicluster and impala)
> 
>
> Key: IMPALA-7582
> URL: https://issues.apache.org/jira/browse/IMPALA-7582
> Project: IMPALA
>  Issue Type: Task
>  Components: Infrastructure
>Reporter: nithya
>Assignee: nithya
>Priority: Major
>
> Generate symptoms for steps in cluster startup (see testdata/bin/run-all.sh)
> As a sub-task of https://issues.apache.org/jira/browse/IMPALA-7399, the
> run-all.sh script will be updated to generate JUnit-style output that can be
> used to produce test reports. These reports can then be used to triage any
> failures/errors that happened during "Starting mini cluster" via the
> run-all.sh script.
>  
> {noformat}
> <testsuite
>      name="generate_junitxml.Run_sentry_service.run_sentry" skipped="0" tests="1"
>      time="0" timestamp="2018-09-17 18:06:34+00:00" url="None">
>     <testcase
>          name="run_sentry">
>     </testcase>
> </testsuite>
> {noformat}






[jira] [Updated] (IMPALA-7582) Generate symptoms for issues during cluster startup (minicluster and impala)

2018-09-17 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7582:
---
Epic Color:   (was: ghx-label-3)

> Generate symptoms for issues during cluster startup (minicluster and impala)
> 
>
> Key: IMPALA-7582
> URL: https://issues.apache.org/jira/browse/IMPALA-7582
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Reporter: nithya
>Assignee: nithya
>Priority: Major
>
> Generate symptoms for steps in cluster startup (see testdata/bin/run-all.sh)






[jira] [Updated] (IMPALA-7582) Generate symptoms for issues during cluster startup (minicluster and impala)

2018-09-17 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7582:
---
Issue Type: Task  (was: Bug)

> Generate symptoms for issues during cluster startup (minicluster and impala)
> 
>
> Key: IMPALA-7582
> URL: https://issues.apache.org/jira/browse/IMPALA-7582
> Project: IMPALA
>  Issue Type: Task
>  Components: Infrastructure
>Reporter: nithya
>Assignee: nithya
>Priority: Major
>
> Generate symptoms for steps in cluster startup (see testdata/bin/run-all.sh)






[jira] [Created] (IMPALA-7582) Generate symptoms for issues during cluster startup (minicluster and impala)

2018-09-17 Thread nithya (JIRA)
nithya created IMPALA-7582:
--

 Summary: Generate symptoms for issues during cluster startup 
(minicluster and impala)
 Key: IMPALA-7582
 URL: https://issues.apache.org/jira/browse/IMPALA-7582
 Project: IMPALA
  Issue Type: Bug
  Components: Infrastructure
Reporter: nithya
Assignee: nithya


Generate symptoms for steps in cluster startup (see testdata/bin/run-all.sh)






[jira] [Updated] (IMPALA-6923) Update/Cleanup $IMPALA_HOME/tests/benchmark folder

2018-08-01 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-6923:
---
Description: 
Update/Cleanup scripts in $IMPALA_HOME/tests/benchmark to address two items:

Item-1:

Of the 3 scripts in the benchmark folder (report_benchmark_results.py,
create_database.py and perf_result_datastore.py), only
report_benchmark_results.py is currently being used upstream, to generate a
report comparing performance benchmark numbers between two given runs of the
performance tests. The other two scripts contain code that inserts metrics
from a given performance test run into a database on a specified Impala
instance. But these scripts depend on internal resources to generate a
meaningful interpretation of these metrics, and those resources are not
available to the external Apache community. Hence these scripts are being
removed. While removing them, report_benchmark_results.py needs to be cleaned
up to remove any code pointing to these scripts.

Item-2:

Add checks to the report_benchmark_results.py script for metadata queries
that don't have summaries, and for the command-line option hive_results.

 

  was:
Update/Cleanup scripts in $IMPALA_HOME/tests/benchmark to address two items:

Item-1:

Of these 3 scripts in the benchmark folder, only report_benchmark_results.py
is currently being used upstream, to generate a report comparing performance
benchmark numbers between two given runs of the performance tests. The other
two scripts contain code that inserts metrics from a given performance test
run into a database on a specified Impala instance. But these scripts depend
on internal resources to generate a meaningful interpretation of these
metrics, and those resources are not available to the external Apache
community. Hence these scripts are being removed. While removing them,
report_benchmark_results.py needs to be cleaned up to remove any code
pointing to these scripts.

Item-2:

Add checks to the report_benchmark_results.py script for metadata queries
that don't have summaries, and for the command-line option hive_results.

 


> Update/Cleanup  $IMPALA_HOME/tests/benchmark folder
> ---
>
> Key: IMPALA-6923
> URL: https://issues.apache.org/jira/browse/IMPALA-6923
> Project: IMPALA
>  Issue Type: Task
>Reporter: nithya
>Assignee: nithya
>Priority: Major
>
> Update/Cleanup scripts in $IMPALA_HOME/tests/benchmark to address two items:
> Item-1:
> Of the 3 scripts in the benchmark folder (report_benchmark_results.py,
> create_database.py and perf_result_datastore.py), only
> report_benchmark_results.py is currently being used upstream, to generate a
> report comparing performance benchmark numbers between two given runs of the
> performance tests. The other two scripts contain code that inserts metrics
> from a given performance test run into a database on a specified Impala
> instance. But these scripts depend on internal resources to generate a
> meaningful interpretation of these metrics, and those resources are not
> available to the external Apache community. Hence these scripts are being
> removed. While removing them, report_benchmark_results.py needs to be
> cleaned up to remove any code pointing to these scripts.
>  
> Item-2:
> Add checks to the report_benchmark_results.py script for metadata queries
> that don't have summaries, and for the command-line option hive_results.
>  
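
For reference, a minimal sketch of the kind of two-run comparison
report_benchmark_results.py produces is below. The input format, the function
name compare_runs, and the 5% threshold are all hypothetical; the real
script's CLI and data model may differ.

{code:python}
# Hypothetical sketch: flag queries whose average runtime moved by more
# than a threshold between a baseline run and the current run.
def compare_runs(baseline, current, threshold_pct=5.0):
    """baseline/current map query name -> avg runtime in seconds."""
    rows = []
    for query, base_s in sorted(baseline.items()):
        cur_s = current.get(query)
        if cur_s is None or base_s == 0:
            continue  # query missing from current run, or no usable baseline
        pct = (cur_s - base_s) / base_s * 100.0
        if abs(pct) > threshold_pct:
            rows.append((query, base_s, cur_s, pct))
    return rows

if __name__ == "__main__":
    base = {"TPCH-Q1": 4.2, "TPCH-Q2": 1.1}
    cur = {"TPCH-Q1": 5.0, "TPCH-Q2": 1.1}
    for q, b, c, pct in compare_runs(base, cur):
        print("%s: %.2fs -> %.2fs (%+.1f%%)" % (q, b, c, pct))
{code}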






[jira] [Assigned] (IMPALA-6923) Update/Cleanup $IMPALA_HOME/tests/benchmark folder

2018-08-01 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya reassigned IMPALA-6923:
--

Assignee: nithya

> Update/Cleanup  $IMPALA_HOME/tests/benchmark folder
> ---
>
> Key: IMPALA-6923
> URL: https://issues.apache.org/jira/browse/IMPALA-6923
> Project: IMPALA
>  Issue Type: Task
>Reporter: nithya
>Assignee: nithya
>Priority: Major
>
> Update/Cleanup scripts in $IMPALA_HOME/tests/benchmark to address two items:
> Item-1:
> Of these 3 scripts in the benchmark folder, only report_benchmark_results.py
> is currently being used upstream, to generate a report comparing performance
> benchmark numbers between two given runs of the performance tests. The other
> two scripts contain code that inserts metrics from a given performance test
> run into a database on a specified Impala instance. But these scripts depend
> on internal resources to generate a meaningful interpretation of these
> metrics, and those resources are not available to the external Apache
> community. Hence these scripts are being removed. While removing them,
> report_benchmark_results.py needs to be cleaned up to remove any code
> pointing to these scripts.
>  
> Item-2:
> Add checks to the report_benchmark_results.py script for metadata queries
> that don't have summaries, and for the command-line option hive_results.
>  






[jira] [Updated] (IMPALA-6923) Update/Cleanup $IMPALA_HOME/tests/benchmark folder

2018-08-01 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-6923:
---
Description: 
Update/Cleanup scripts in $IMPALA_HOME/tests/benchmark to address two items:

Item-1:

Of these 3 scripts in the benchmark folder, only report_benchmark_results.py
is currently being used upstream, to generate a report comparing performance
benchmark numbers between two given runs of the performance tests. The other
two scripts contain code that inserts metrics from a given performance test
run into a database on a specified Impala instance. But these scripts depend
on internal resources to generate a meaningful interpretation of these
metrics, and those resources are not available to the external Apache
community. Hence these scripts are being removed. While removing them,
report_benchmark_results.py needs to be cleaned up to remove any code
pointing to these scripts.

Item-2:

Add checks to the report_benchmark_results.py script for metadata queries
that don't have summaries, and for the command-line option hive_results.

 

  was:
Update scripts in $IMPALA_HOME/tests/benchmark to address two items:

Item-1:

Of these 3 scripts in the benchmark folder, only report_benchmark_results.py
is currently being used upstream, to generate a report comparing performance
benchmark numbers between two given runs of the performance tests. The other
two scripts contain code that inserts metrics from a given performance test
run into a database on a specified Impala instance. But these scripts depend
on internal resources to generate a meaningful interpretation of these
metrics, and those resources are not available to the external Apache
community. Hence these scripts are being removed. While removing them,
report_benchmark_results.py needs to be cleaned up to remove any code
pointing to these scripts.

Item-2:

Add checks to the report_benchmark_results.py script for metadata queries
that don't have summaries, and for the command-line option hive_results.

 


> Update/Cleanup  $IMPALA_HOME/tests/benchmark folder
> ---
>
> Key: IMPALA-6923
> URL: https://issues.apache.org/jira/browse/IMPALA-6923
> Project: IMPALA
>  Issue Type: Task
>Reporter: nithya
>Priority: Major
>
> Update/Cleanup scripts in $IMPALA_HOME/tests/benchmark to address two items:
> Item-1:
> Of these 3 scripts in the benchmark folder, only report_benchmark_results.py
> is currently being used upstream, to generate a report comparing performance
> benchmark numbers between two given runs of the performance tests. The other
> two scripts contain code that inserts metrics from a given performance test
> run into a database on a specified Impala instance. But these scripts depend
> on internal resources to generate a meaningful interpretation of these
> metrics, and those resources are not available to the external Apache
> community. Hence these scripts are being removed. While removing them,
> report_benchmark_results.py needs to be cleaned up to remove any code
> pointing to these scripts.
>  
> Item-2:
> Add checks to the report_benchmark_results.py script for metadata queries
> that don't have summaries, and for the command-line option hive_results.
>  






[jira] [Updated] (IMPALA-6923) Update/Cleanup $IMPALA_HOME/tests/benchmark folder

2018-08-01 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-6923:
---
Summary: Update/Cleanup  $IMPALA_HOME/tests/benchmark folder  (was: Update 
scripts in $IMPALA_HOME/tests/benchmark)

> Update/Cleanup  $IMPALA_HOME/tests/benchmark folder
> ---
>
> Key: IMPALA-6923
> URL: https://issues.apache.org/jira/browse/IMPALA-6923
> Project: IMPALA
>  Issue Type: Task
>Reporter: nithya
>Priority: Major
>
> Update scripts in $IMPALA_HOME/tests/benchmark to address two items:
> Item-1:
> Of these 3 scripts in the benchmark folder, only report_benchmark_results.py
> is currently being used upstream, to generate a report comparing performance
> benchmark numbers between two given runs of the performance tests. The other
> two scripts contain code that inserts metrics from a given performance test
> run into a database on a specified Impala instance. But these scripts depend
> on internal resources to generate a meaningful interpretation of these
> metrics, and those resources are not available to the external Apache
> community. Hence these scripts are being removed. While removing them,
> report_benchmark_results.py needs to be cleaned up to remove any code
> pointing to these scripts.
>  
> Item-2:
> Add checks to the report_benchmark_results.py script for metadata queries
> that don't have summaries, and for the command-line option hive_results.
>  






[jira] [Updated] (IMPALA-6923) Update scripts in $IMPALA_HOME/tests/benchmark

2018-08-01 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-6923:
---
Description: 
Update scripts in $IMPALA_HOME/tests/benchmark to address two items:

Item-1:

Of these 3 scripts in the benchmark folder, only report_benchmark_results.py
is currently being used upstream, to generate a report comparing performance
benchmark numbers between two given runs of the performance tests. The other
two scripts contain code that inserts metrics from a given performance test
run into a database on a specified Impala instance. But these scripts depend
on internal resources to generate a meaningful interpretation of these
metrics, and those resources are not available to the external Apache
community. Hence these scripts are being removed. While removing them,
report_benchmark_results.py needs to be cleaned up to remove any code
pointing to these scripts.

Item-2:

Add checks to the report_benchmark_results.py script for metadata queries
that don't have summaries, and for the command-line option hive_results.

 

  was:
 

Update scripts in 

Of these 3 scripts, only report_benchmark_results.py is currently being used
upstream, to generate a report comparing performance benchmark numbers
between two given runs of the performance tests. The other two scripts
contain code that inserts metrics from a given performance test run into a
database on a specified Impala instance. But these scripts depend on internal
resources to generate a meaningful interpretation of these metrics, and those
resources are not available to the external Apache community. Hence these
scripts are being removed. While removing them, report_benchmark_results.py
needs to be cleaned up to remove any code pointing to these scripts.

 

 


> Update scripts in $IMPALA_HOME/tests/benchmark
> --
>
> Key: IMPALA-6923
> URL: https://issues.apache.org/jira/browse/IMPALA-6923
> Project: IMPALA
>  Issue Type: Task
>Reporter: nithya
>Priority: Major
>
> Update scripts in $IMPALA_HOME/tests/benchmark to address two items:
> Item-1:
> Of these 3 scripts in the benchmark folder, only report_benchmark_results.py
> is currently being used upstream, to generate a report comparing performance
> benchmark numbers between two given runs of the performance tests. The other
> two scripts contain code that inserts metrics from a given performance test
> run into a database on a specified Impala instance. But these scripts depend
> on internal resources to generate a meaningful interpretation of these
> metrics, and those resources are not available to the external Apache
> community. Hence these scripts are being removed. While removing them,
> report_benchmark_results.py needs to be cleaned up to remove any code
> pointing to these scripts.
>  
> Item-2:
> Add checks to the report_benchmark_results.py script for metadata queries
> that don't have summaries, and for the command-line option hive_results.
>  






[jira] [Updated] (IMPALA-6923) Update scripts in $IMPALA_HOME/tests/benchmark

2018-08-01 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-6923:
---
Description: 
 

Update scripts in 

Of these 3 scripts, only report_benchmark_results.py is currently being used
upstream, to generate a report comparing performance benchmark numbers
between two given runs of the performance tests. The other two scripts
contain code that inserts metrics from a given performance test run into a
database on a specified Impala instance. But these scripts depend on internal
resources to generate a meaningful interpretation of these metrics, and those
resources are not available to the external Apache community. Hence these
scripts are being removed. While removing them, report_benchmark_results.py
needs to be cleaned up to remove any code pointing to these scripts.

 

 

  was:
The $IMPALA_HOME/tests/benchmark folder has scripts that can be used to
collect useful information about test execution.

 


> Update scripts in $IMPALA_HOME/tests/benchmark
> --
>
> Key: IMPALA-6923
> URL: https://issues.apache.org/jira/browse/IMPALA-6923
> Project: IMPALA
>  Issue Type: Task
>Reporter: nithya
>Priority: Major
>
>  
> Update scripts in 
> Of these 3 scripts, only report_benchmark_results.py is currently being used
> upstream, to generate a report comparing performance benchmark numbers
> between two given runs of the performance tests. The other two scripts
> contain code that inserts metrics from a given performance test run into a
> database on a specified Impala instance. But these scripts depend on
> internal resources to generate a meaningful interpretation of these metrics,
> and those resources are not available to the external Apache community.
> Hence these scripts are being removed. While removing them,
> report_benchmark_results.py needs to be cleaned up to remove any code
> pointing to these scripts.
>  
>  






[jira] [Updated] (IMPALA-6923) Update scripts in $IMPALA_HOME/tests/benchmark

2018-08-01 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-6923:
---
Description: 
The $IMPALA_HOME/tests/benchmark folder has scripts that can be used to
collect useful information about test execution.

 

  was:
The $IMPALA_HOME/tests/benchmark folder has scripts that can be used to
collect useful information about test execution.

Updating these scripts to capture:
 * Test run-time execution information
 * Workload data
 * Options to mark a test run as official and to not save profiles.


> Update scripts in $IMPALA_HOME/tests/benchmark
> --
>
> Key: IMPALA-6923
> URL: https://issues.apache.org/jira/browse/IMPALA-6923
> Project: IMPALA
>  Issue Type: Task
>Reporter: nithya
>Priority: Major
>
> The $IMPALA_HOME/tests/benchmark folder has scripts that can be used to
> collect useful information about test execution.
>  






[jira] [Commented] (IMPALA-7336) Build failure: Backing channel '' is disconnected

2018-07-27 Thread nithya (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559764#comment-16559764
 ] 

nithya commented on IMPALA-7336:


The issue is not happening in recent builds.

> Build failure: Backing channel '' is disconnected
> ---
>
> Key: IMPALA-7336
> URL: https://issues.apache.org/jira/browse/IMPALA-7336
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 3.1.0
>Reporter: nithya
>Assignee: Tim Armstrong
>Priority: Major
>
> Impala build failures
> {code:java}
> FATAL: command execution failed*00:39:12* java.io.EOFException*00:39:12*  
> at 
> java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2624)*00:39:12*
>at 
> java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3099)*00:39:12*
>   at 
> java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:853)*00:39:12*
>  at 
> java.io.ObjectInputStream.<init>(ObjectInputStream.java:349)*00:39:12*
> at 
> hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:48)*00:39:12*
> at 
> hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)*00:39:12*
> at 
> hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:59)*00:39:12*
>  Caused: java.io.IOException: Unexpected termination of the channel*00:39:12* 
> at 
> hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:73)*00:39:12*
>  Caused: java.io.IOException: Backing channel '' is 
> disconnected.*00:39:12* at 
> hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:192)*00:39:12*
> at 
> hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:257)*00:39:12*
>at com.sun.proxy.$Proxy79.isAlive(Unknown Source)*00:39:12* at 
> hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1138)*00:39:12* 
>at 
> hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1130)*00:39:12*   
> at 
> hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)*00:39:12*  
> at 
> hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)*00:39:12*
>at 
> hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)*00:39:12* 
>at 
> hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)*00:39:12*  
> at 
> hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:736)*00:39:12*
>   at hudson.model.Build$BuildExecution.build(Build.java:206)*00:39:12*at 
> hudson.model.Build$BuildExecution.doRun(Build.java:163)*00:39:12*at 
> hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:496)*00:39:12*
>   at hudson.model.Run.execute(Run.java:1737)*00:39:12*at 
> hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)*00:39:12*at 
> hudson.model.ResourceController.execute(ResourceController.java:97)*00:39:12* 
>at hudson.model.Executor.run(Executor.java:419)*00:39:12* Build step 
> 'Execute shell' marked build as failure
> {code}






[jira] [Assigned] (IMPALA-7328) Errors in HdfsScanner::Open() get swallowed up

2018-07-27 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya reassigned IMPALA-7328:
--

Assignee: Sailesh Mukil

> Errors in HdfsScanner::Open() get swallowed up
> -
>
> Key: IMPALA-7328
> URL: https://issues.apache.org/jira/browse/IMPALA-7328
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.1.0
>Reporter: Michael Ho
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: broken-build
> Attachments: IMPALA-7328.tar.gz
>
>
> [https://jenkins.impala.io/job/parallel-all-tests/3826/|https://jenkins.impala.io/job/parallel-all-tests/3826/failed]
>  failed at test_udfs.py:
> {noformat}
> 03:50:23 ] FAIL 
> query_test/test_udfs.py::TestUdfExecution::()::test_udf_errors[exec_option: 
> {'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'exec_single_node_rows_threshold': 100, 'enable_expr_rewrites': True} | 
> table_format: text/none]
> 03:50:23 ] === FAILURES 
> ===
> 03:50:23 ]  TestUdfExecution.test_udf_errors[exec_option: 
> {'disable_codegen_rows_threshold': 0, 'disable_codegen': True, 
> 'exec_single_node_rows_threshold': 100, 'enable_expr_rewrites': True} | 
> table_format: text/none] 
> 03:50:23 ] [gw13] linux2 -- Python 2.7.12 
> /home/ubuntu/Impala/bin/../infra/python/env/bin/python
> 03:50:23 ] query_test/test_udfs.py:415: in test_udf_errors
> 03:50:23 ] self.run_test_case('QueryTest/udf-errors', vector, 
> use_db=unique_database)
> 03:50:23 ] common/impala_test_suite.py:408: in run_test_case
> 03:50:23 ] self.__verify_exceptions(test_section['CATCH'], str(e), use_db)
> 03:50:23 ] common/impala_test_suite.py:286: in __verify_exceptions
> 03:50:23 ] (expected_str, actual_str)
> 03:50:23 ] E   AssertionError: Unexpected exception string. Expected: 
> BadExpr2 prepare error
> 03:50:23 ] E   Not found in actual: ImpalaBeeswaxException: Query 
> aborted:Cancelled
> 03:50:23 ]  Captured stderr setup 
> -
> {noformat}
> Digging through the log, the query which triggered the failure is 
> {{774db10632a21589:5a62e772}}
> It appears that the error which this test intends to fault at isn't shown at 
> the coordinator:
> {noformat}
> ExecState: query id=774db10632a21589:5a62e772 
> finstance=774db10632a21589:5a62e7720003 on host=ip-172-31-0-127:22001 
> (EXECUTING -> ERROR) status=Cancelled
> {noformat}
> In particular, the test aims to trigger a failure in {{HdfsScanner::Open()}} 
> when scalar expr evaluator is cloned:
> {noformat}
> // This prepare function always fails for cloned evaluators to exercise 
> IMPALA-6184.
> // It does so by detecting whether the caller is a cloned evaluator and 
> inserts an error
> // in FunctionContext if that's the case.
> void BadExpr2Prepare(FunctionContext* context,
> FunctionContext::FunctionStateScope scope) {
>   if (scope == FunctionContext::FRAGMENT_LOCAL) {
> int32_t* state =
> reinterpret_cast<int32_t*>(context->Allocate(sizeof(int32_t)));
> *state = 0xf001cafe;
> context->SetFunctionState(scope, state);
> // Set the thread local state too to differentiate from cloned evaluators.
> context->SetFunctionState(FunctionContext::THREAD_LOCAL, state);
>   } else {
> if (context->GetFunctionState(FunctionContext::THREAD_LOCAL) == nullptr) {
>   context->SetError("BadExpr2 prepare error");
> }
>   }
> }
> {noformat}
> However, for some reason, the actual failure was not propagated; the
> cancellation status was propagated instead. Staring at the code in
> {{HdfsScanNode}}, it's not immediately clear where the race is.
> For the reference, the following is the expected error message:
> {noformat}
> ExecState: query id=64404101d8857592:173298a7 
> finstance=64404101d8857592:173298a70002 on host=ip-172-31-0-127:22002 
> (EXECUTING -> ERROR) status=BadExpr2 prepare error
> {noformat}






[jira] [Commented] (IMPALA-6910) Multiple tests failing on S3 build: error reading from HDFS file

2018-07-26 Thread nithya (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558678#comment-16558678
 ] 

nithya commented on IMPALA-6910:


This issue is happening again in another test:

query_test/test_tpcds_queries.py - test_tpcds_q67a

> Multiple tests failing on S3 build: error reading from HDFS file
> 
>
> Key: IMPALA-6910
> URL: https://issues.apache.org/jira/browse/IMPALA-6910
> Project: IMPALA
>  Issue Type: Bug
>  Components: Backend
>Affects Versions: Impala 3.0
>Reporter: David Knupp
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: broken-build, flaky, s3
>
> Stacktrace
> {noformat}
> query_test/test_compressed_formats.py:149: in test_seq_writer
> self.run_test_case('QueryTest/seq-writer', vector, unique_database)
> common/impala_test_suite.py:397: in run_test_case
> result = self.__execute_query(target_impalad_client, query, user=user)
> common/impala_test_suite.py:612: in __execute_query
> return impalad_client.execute(query, user=user)
> common/impala_connection.py:160: in execute
> return self.__beeswax_client.execute(sql_stmt, user=user)
> beeswax/impala_beeswax.py:173: in execute
> handle = self.__execute_query(query_string.strip(), user=user)
> beeswax/impala_beeswax.py:341: in __execute_query
> self.wait_for_completion(handle)
> beeswax/impala_beeswax.py:361: in wait_for_completion
> raise ImpalaBeeswaxException("Query aborted:" + error_log, None)
> E   ImpalaBeeswaxException: ImpalaBeeswaxException:
> EQuery aborted:Disk I/O error: Error reading from HDFS file: 
> s3a://impala-cdh5-s3-test/test-warehouse/tpcds.store_sales_parquet/ss_sold_date_sk=2452585/a5482dcb946b6c98-7543e0dd0004_95929617_data.0.parq
> E   Error(255): Unknown error 255
> E   Root cause: SdkClientException: Data read has a different length than the 
> expected: dataLength=8576; expectedLength=17785; includeSkipped=true; 
> in.getClass()=class com.amazonaws.services.s3.AmazonS3Client$2; 
> markedSupported=false; marked=0; resetSinceLastMarked=false; markCount=0; 
> resetCount=0
> {noformat}






[jira] [Updated] (IMPALA-7361) test_heterogeneous_proc_mem_limit - Assertion Failure

2018-07-26 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7361:
---
Description: 
test_heterogeneous_proc_mem_limit fails with the following assertion error

 
{code:java}
AssertionError: ImpalaBeeswaxException:
   Query aborted:Admission for query exceeded timeout 200ms in pool
default-pool. Queued reason: Not enough memory available on host
:22001.Needed 2.00 GB but only 1.00 GB out of 3.00 GB was available.

assert None
 +  where None = ('Queued reason: Not enough memory available on host
\\S+.Needed 2.00 GB but only 1.00 GB out of 2.00 GB was available.',
'ImpalaBeeswaxException:\n Query aborted:Admission for query exceeded timeout
200ms in pool default-pool. Queued reaso...:22001.Needed 2.00 GB but
only 1.00 GB out of 3.00 GB was available.\n\n')
 +    where  = re.search
 +    and   'ImpalaBeeswaxException:\n Query aborted:Admission for query
exceeded timeout 200ms in pool default-pool. Queued
reaso...:22001.Needed 2.00 GB but only 1.00 GB out of 3.00 GB was
available.\n\n' = str(ImpalaBeeswaxException())
{code}
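
The failing check boils down to an re.search over str(ImpalaBeeswaxException).
A minimal sketch of that pattern is below; the actual string is a hypothetical
stand-in, trimmed from the log above.

{code:python}
# Sketch of the assertion this test makes. Note the mismatch that produced
# this JIRA: the pattern expects "out of 2.00 GB" while the daemon actually
# reported "out of 3.00 GB", so re.search returns None and the assert fails.
import re

actual = ("ImpalaBeeswaxException:\n Query aborted:Admission for query "
          "exceeded timeout 200ms in pool default-pool. Queued reason: "
          "Not enough memory available on host h1:22001.Needed 2.00 GB "
          "but only 1.00 GB out of 3.00 GB was available.")

pattern = (r"Queued reason: Not enough memory available on host \S+.Needed "
           r"2.00 GB but only 1.00 GB out of 2.00 GB was available.")

assert re.search(pattern, actual) is None  # 2.00 GB vs 3.00 GB mismatch
{code}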
 

Stack trace:
{code:java}
*Stacktrace*

custom_cluster/test_admission_controller.py:514: in 
test_heterogeneous_proc_mem_limit

    assert re.search("Queued reason: Not enough memory available on host 
\S+.Needed "

E   AssertionError: ImpalaBeeswaxException:

E      Query aborted:Admission for query exceeded timeout 200ms in pool 
default-pool. Queued reason: Not enough memory available on host 
:22001.Needed 2.00 GB but only 1.00 GB out of 3.00 GB was available.

E     

E     

E   assert None

E    +  where None = ('Queued reason: Not 
enough memory available on host \\S+.Needed 2.00 GB but only 1.00 GB out of 
2.00 GB was available.', 'ImpalaBeeswaxException:\n Query aborted:Admission for 
query exceeded timeout 200ms in pool default-pool. Queued 
reaso...:22001.Needed 2.00 GB but only 1.00 GB out of 3.00 GB was 
available.\n\n')

E    +    where  = re.search

E    +    and   'ImpalaBeeswaxException:\n Query aborted:Admission for query 
exceeded timeout 200ms in pool default-pool. Queued 
reaso...:22001.Needed 2.00 GB but only 1.00 GB out of 3.00 GB was 
available.\n\n' = str(ImpalaBeeswaxException())

*Standard Error*

08:55:51 MainThread: Starting State Store logging to 
/data/jenkins/workspace/impala-asf-master-core-s3/repos/Impala/logs/custom_cluster_tests/statestored.INFO

08:55:52 MainThread: Starting Catalog Service logging to 
/data/jenkins/workspace/impala-asf-master-core-s3/repos/Impala/logs/custom_cluster_tests/catalogd.INFO

08:55:53 MainThread: Starting Impala Daemon logging to 
/data/jenkins/workspace/impala-asf-master-core-s3/repos/Impala/logs/custom_cluster_tests/impalad.INFO

08:55:54 MainThread: Starting Impala Daemon logging to 
/data/jenkins/workspace/impala-asf-master-core-s3/repos/Impala/logs/custom_cluster_tests/impalad_node1.INFO

08:55:55 MainThread: Starting Impala Daemon logging to 
/data/jenkins/workspace/impala-asf-master-core-s3/repos/Impala/logs/custom_cluster_tests/impalad_node2.INFO

08:55:58 MainThread: Found 3 impalad/1 statestored/1 catalogd process(es)

08:55:58 MainThread: Getting num_known_live_backends from :25000

08:55:58 MainThread: Waiting for num_known_live_backends=3. Current value: 0

08:55:59 MainThread: Getting num_known_live_backends from:25000

08:55:59 MainThread: Waiting for num_known_live_backends=3. Current value: 1

08:56:00 MainThread: Getting num_known_live_backends from :25000

08:56:00 MainThread: Waiting for num_known_live_backends=3. Current value: 2

08:56:01 MainThread: Getting num_known_live_backends from :25000

08:56:01 MainThread: num_known_live_backends has reached value: 3

08:56:01 MainThread: Getting num_known_live_backends from :25001

08:56:01 MainThread: num_known_live_backends has reached value: 3

08:56:01 MainThread: Getting num_known_live_backends from :25002

08:56:01 MainThread: num_known_live_backends has reached value: 3

08:56:01 MainThread: Impala Cluster Running with 3 nodes (3 coordinators, 3 
executors).

MainThread: Found 3 impalad/1 statestored/1 catalogd process(es)

MainThread: Getting metric: statestore.live-backends from :25010

MainThread: Metric 'statestore.live-backends' has reached desired value: 4

MainThread: Getting num_known_live_backends from :25000

MainThread: num_known_live_backends has reached value: 3

MainThread: Getting num_known_live_backends from :25001

MainThread: num_known_live_backends has reached value: 3

MainThread: Getting num_known_live_backends from :25002

MainThread: num_known_live_backends has reached value: 3

-- connecting to: localhost:21000

-- executing against localhost:21000

use default;

 

SET sync_ddl=1;

-- executing against localhost:21000

drop database if exists `hs2_db` cascade;

 

SET disable_codegen_rows_threshold=5000;

SET disable_codegen=False;

SET abort_on_error=1;

SET exec_single_node_rows_threshold=0;


[jira] [Updated] (IMPALA-7348) PlannerTest.testKuduSelectivity failing due to missing Cardinality information

2018-07-25 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7348:
---
Description: 
PlannerTest.testKuduSelectivity failed in the recent run. It is an assertion
failure due to unavailable cardinality information.

Assertion failure as follows
{code:java}
Actual does not match expected result:
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     kudu predicates: id = '860US00601'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable
^

Expected:
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     kudu predicates: id = '860US00601'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=124B cardinality=1 {code}
Verbose plan
{code:java}
Verbose plan:
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     kudu predicates: id = '860US00601'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable

Section DISTRIBUTEDPLAN of query:
select * from functional_kudu.zipcode_incomes where id = '860US00601'

Actual does not match expected result:
F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  01:EXCHANGE [UNPARTITIONED]
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable
^

F00:PLAN FRAGMENT [RANDOM] hosts=3 instances=3
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=01, UNPARTITIONED]
  |  mem-estimate=0B mem-reservation=0B
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     kudu predicates: id = '860US00601'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable

Expected:
F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  01:EXCHANGE [UNPARTITIONED]
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=124B cardinality=1

F00:PLAN FRAGMENT [RANDOM] hosts=3 instances=3
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=01, UNPARTITIONED]
  |  mem-estimate=0B mem-reservation=0B
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     kudu predicates: id = '860US00601'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=124B cardinality=1

Verbose plan:
F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  01:EXCHANGE [UNPARTITIONED]
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable

F00:PLAN FRAGMENT [RANDOM] hosts=3 instances=3
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=01, UNPARTITIONED]
  |  mem-estimate=0B mem-reservation=0B
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     kudu predicates: id = '860US00601'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable

Section PLAN of query:
select * from functional_kudu.zipcode_incomes where id != '1' and zip = '2'

Actual does not match expected result:
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     predicates: id != '1'
     kudu predicates: zip = '2'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable
^

Expected:
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     predicates: id != '1'
     kudu predicates: zip = '2'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=124B cardinality=1

Verbose plan:
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  00:SCAN KUDU 

[jira] [Updated] (IMPALA-7348) PlannerTest.testKuduSelectivity failing due to missing Cardinality information

2018-07-25 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7348:
---
Epic Color:   (was: ghx-label-5)

> PlannerTest.testKuduSelectivity failing due to missing Cardinality information
> --
>
> Key: IMPALA-7348
> URL: https://issues.apache.org/jira/browse/IMPALA-7348
> Project: IMPALA
>  Issue Type: Bug
>Reporter: nithya
>Priority: Blocker
>
> PlannerTest.testKuduSelectivity failed in the recent run. It is an assertion
> failure due to unavailable cardinality information.
> Assertion failure as follows
> {code}
> Actual does not match expected result:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> Per-Host Resources: mem-estimate=0B mem-reservation=0B
>   PLAN-ROOT SINK
>   |  mem-estimate=0B mem-reservation=0B
>   |
>   00:SCAN KUDU [functional_kudu.zipcode_incomes]
>      kudu predicates: id = '860US00601'
>      mem-estimate=0B mem-reservation=0B
>      tuple-ids=0 row-size=68B cardinality=unavailable
> ^
>
> Expected:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> Per-Host Resources: mem-estimate=0B mem-reservation=0B
>   PLAN-ROOT SINK
>   |  mem-estimate=0B mem-reservation=0B
>   |
>   00:SCAN KUDU [functional_kudu.zipcode_incomes]
>      kudu predicates: id = '860US00601'
>      mem-estimate=0B mem-reservation=0B
>      tuple-ids=0 row-size=124B cardinality=1
> {code}
>  
> Verbose plan
> {code}
> Verbose plan:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> Per-Host Resources: mem-estimate=0B mem-reservation=0B
>   PLAN-ROOT SINK
>   |  mem-estimate=0B mem-reservation=0B
>   |
>   00:SCAN KUDU [functional_kudu.zipcode_incomes]
>      kudu predicates: id = '860US00601'
>      mem-estimate=0B mem-reservation=0B
>      tuple-ids=0 row-size=68B cardinality=unavailable
>
> Section DISTRIBUTEDPLAN of query:
> select * from functional_kudu.zipcode_incomes where id = '860US00601'
>
> Actual does not match expected result:
> F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> Per-Host Resources: mem-estimate=0B mem-reservation=0B
>   PLAN-ROOT SINK
>   |  mem-estimate=0B mem-reservation=0B
>   |
>   01:EXCHANGE [UNPARTITIONED]
>      mem-estimate=0B mem-reservation=0B
>      tuple-ids=0 row-size=68B cardinality=unavailable
> ^
>
> F00:PLAN FRAGMENT [RANDOM] hosts=3 instances=3
> Per-Host Resources: mem-estimate=0B mem-reservation=0B
>   DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=01, UNPARTITIONED]
>   |  mem-estimate=0B mem-reservation=0B
>   00:SCAN KUDU [functional_kudu.zipcode_incomes]
>      kudu predicates: id = '860US00601'
>      mem-estimate=0B mem-reservation=0B
>      tuple-ids=0 row-size=68B cardinality=unavailable
>
> Expected:
> F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> Per-Host Resources: mem-estimate=0B mem-reservation=0B
>   PLAN-ROOT SINK
>   |  mem-estimate=0B mem-reservation=0B
>   |
>   01:EXCHANGE [UNPARTITIONED]
>      mem-estimate=0B mem-reservation=0B
>      tuple-ids=0 row-size=124B cardinality=1
>
> F00:PLAN FRAGMENT [RANDOM] hosts=3 instances=3
> Per-Host Resources: mem-estimate=0B mem-reservation=0B
>   DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=01, UNPARTITIONED]
>   |  mem-estimate=0B mem-reservation=0B
>   00:SCAN KUDU [functional_kudu.zipcode_incomes]
>      kudu predicates: id = '860US00601'
>      mem-estimate=0B mem-reservation=0B
>      tuple-ids=0 row-size=124B cardinality=1
>
> Verbose plan:
> F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> Per-Host Resources: mem-estimate=0B mem-reservation=0B
>   PLAN-ROOT SINK
>   |  mem-estimate=0B mem-reservation=0B
>   |
>   01:EXCHANGE [UNPARTITIONED]
>      mem-estimate=0B mem-reservation=0B
>      tuple-ids=0 row-size=68B cardinality=unavailable
>
> F00:PLAN FRAGMENT [RANDOM] hosts=3 instances=3
> Per-Host Resources: mem-estimate=0B mem-reservation=0B
>   DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=01, UNPARTITIONED]
>   |  mem-estimate=0B mem-reservation=0B
>   00:SCAN KUDU [functional_kudu.zipcode_incomes]
>      kudu predicates: id = '860US00601'
>      mem-estimate=0B mem-reservation=0B
>      tuple-ids=0 row-size=68B cardinality=unavailable
>
> Section PLAN of query:
> select * from functional_kudu.zipcode_incomes where id != '1' and zip = '2'
>
> Actual does not match expected result:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> Per-Host Resources: mem-estimate=0B mem-reservation=0B
>   PLAN-ROOT SINK
>   |  mem-estimate=0B mem-reservation=0B
>   |
>   00:SCAN KUDU [functional_kudu.zipcode_incomes]
>      predicates: id != '1'
>      kudu predicates: zip = '2'
>      mem-estimate=0B mem-reservation=0B
>      tuple-ids=0 row-size=68B cardinality=unavailable
> ^
>
> Expected:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> Per-Host Resources: mem-estimate=0B mem-reservation=0B
>   PLAN-ROOT SINK
>   |  mem-estimate=0B mem-reservation=0B
>   |
>   00:SCAN KUDU [functional_kudu.zipcode_incomes] 

[jira] [Updated] (IMPALA-6402) PlannerTest.testFkPkJoinDetection failure due to missing partition

2018-07-25 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-6402:
---
Epic Color:   (was: ghx-label-9)

> PlannerTest.testFkPkJoinDetection failure due to missing partition
> --
>
> Key: IMPALA-6402
> URL: https://issues.apache.org/jira/browse/IMPALA-6402
> Project: IMPALA
>  Issue Type: Bug
>  Components: Infrastructure
>Affects Versions: Impala 2.11.0
> Environment: PlannerTest.testFkPkJoinDetection
>Reporter: Sailesh Mukil
>Assignee: Alexander Behm
>Priority: Blocker
>  Labels: broken-build
>
>  
> PlannerTest.testFkPkJoinDetection failed in a recent test run. From the output
> below, it looks like the test expects 1823 partitions and files to be
> present; however, there is one extra file erroneously present. This may be
> due to filesystem flakiness (a file that was supposed to be deleted wasn't,
> etc.), but we're not certain yet.
>  
> {code:java}
> ---
> Test set: org.apache.impala.planner.PlannerTest
> ---
> Tests run: 64, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 62.354 sec 
> <<< FAILURE! - in org.apache.impala.planner.PlannerTest
> testFkPkJoinDetection(org.apache.impala.planner.PlannerTest) Time elapsed: 
> 4.821 sec <<< FAILURE!
> java.lang.AssertionError: 
> Section PLAN of query:
> select /* +straight_join */ 1 from
> tpcds_seq_snap.store_sales inner join tpcds.customer
> on ss_customer_sk = c_customer_sk
> Actual does not match expected result:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> | Per-Host Resources: mem-estimate=177.94MB mem-reservation=1.94MB
> PLAN-ROOT SINK
> | mem-estimate=0B mem-reservation=0B
> |
> 02:HASH JOIN [INNER JOIN]
> | hash predicates: ss_customer_sk = c_customer_sk
> | fk/pk conjuncts: assumed fk/pk
> | runtime filters: RF000[bloom] <- c_customer_sk
> | mem-estimate=1.94MB mem-reservation=1.94MB spill-buffer=64.00KB
> | tuple-ids=0,1 row-size=8B cardinality=unavailable
> |
> |--01:SCAN HDFS [tpcds.customer]
> | partitions=1/1 files=1 size=12.60MB
> | stored statistics:
> | table: rows=10 size=12.60MB
> | columns: all
> | extrapolated-rows=disabled
> | mem-estimate=48.00MB mem-reservation=0B
> | tuple-ids=1 row-size=4B cardinality=10
> |
> 00:SCAN HDFS [tpcds_seq_snap.store_sales]
>  partitions=1823/1823 files=1823 size=207.90MB
> 
>  runtime filters: RF000[bloom] -> ss_customer_sk
>  stored statistics:
>  table: rows=unavailable size=unavailable
>  partitions: 0/1823 rows=unavailable
>  columns: unavailable
>  extrapolated-rows=disabled
>  mem-estimate=128.00MB mem-reservation=0B
>  tuple-ids=0 row-size=4B cardinality=unavailable
> Expected:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> | Per-Host Resources: mem-estimate=177.94MB mem-reservation=1.94MB
> PLAN-ROOT SINK
> | mem-estimate=0B mem-reservation=0B
> |
> 02:HASH JOIN [INNER JOIN]
> | hash predicates: ss_customer_sk = c_customer_sk
> | fk/pk conjuncts: assumed fk/pk
> | runtime filters: RF000[bloom] <- c_customer_sk
> | mem-estimate=1.94MB mem-reservation=1.94MB spill-buffer=64.00KB
> | tuple-ids=0,1 row-size=8B cardinality=unavailable
> |
> |--01:SCAN HDFS [tpcds.customer]
> | partitions=1/1 files=1 size=12.60MB
> | stored statistics:
> | table: rows=10 size=12.60MB
> | columns: all
> | extrapolated-rows=disabled
> | mem-estimate=48.00MB mem-reservation=0B
> | tuple-ids=1 row-size=4B cardinality=10
> |
> 00:SCAN HDFS [tpcds_seq_snap.store_sales]
>  partitions=1824/1824 files=1824 size=207.90MB
>  runtime filters: RF000[bloom] -> ss_customer_sk
>  stored statistics:
>  table: rows=unavailable size=unavailable
>  partitions: 0/1824 rows=unavailable
>  columns: unavailable
>  extrapolated-rows=disabled
>  mem-estimate=128.00MB mem-reservation=0B
>  tuple-ids=0 row-size=4B cardinality=unavailable
> Verbose plan:
> F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
> Per-Host Resources: mem-estimate=177.94MB mem-reservation=1.94MB
>  PLAN-ROOT SINK
>  | mem-estimate=0B mem-reservation=0B
>  |
>  02:HASH JOIN [INNER JOIN]
>  | hash predicates: ss_customer_sk = c_customer_sk
>  | fk/pk conjuncts: assumed fk/pk
>  | runtime filters: RF000[bloom] <- c_customer_sk
>  | mem-estimate=1.94MB mem-reservation=1.94MB spill-buffer=64.00KB
>  | tuple-ids=0,1 row-size=8B cardinality=unavailable
>  |
>  |--01:SCAN HDFS [tpcds.customer]
>  | partitions=1/1 files=1 size=12.60MB
>  | stored statistics:
>  | table: rows=10 size=12.60MB
>  | columns: all
>  | extrapolated-rows=disabled
>  | mem-estimate=48.00MB mem-reservation=0B
>  | tuple-ids=1 row-size=4B cardinality=10
>  

[jira] [Created] (IMPALA-7348) PlannerTest.testKuduSelectivity failing due to missing Cardinality information

2018-07-25 Thread nithya (JIRA)
nithya created IMPALA-7348:
--

 Summary: PlannerTest.testKuduSelectivity failing due to missing 
Cardinality information
 Key: IMPALA-7348
 URL: https://issues.apache.org/jira/browse/IMPALA-7348
 Project: IMPALA
  Issue Type: Bug
Reporter: nithya


PlannerTest.testKuduSelectivity failed in the recent run. It is an assertion
failure due to unavailable cardinality information.

Assertion failure as follows

{code}
Actual does not match expected result:
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     kudu predicates: id = '860US00601'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable
^

Expected:
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     kudu predicates: id = '860US00601'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=124B cardinality=1
{code}

 

Verbose plan

{code}
Verbose plan:
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     kudu predicates: id = '860US00601'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable

Section DISTRIBUTEDPLAN of query:
select * from functional_kudu.zipcode_incomes where id = '860US00601'

Actual does not match expected result:
F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  01:EXCHANGE [UNPARTITIONED]
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable
^

F00:PLAN FRAGMENT [RANDOM] hosts=3 instances=3
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=01, UNPARTITIONED]
  |  mem-estimate=0B mem-reservation=0B
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     kudu predicates: id = '860US00601'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable

Expected:
F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  01:EXCHANGE [UNPARTITIONED]
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=124B cardinality=1

F00:PLAN FRAGMENT [RANDOM] hosts=3 instances=3
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=01, UNPARTITIONED]
  |  mem-estimate=0B mem-reservation=0B
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     kudu predicates: id = '860US00601'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=124B cardinality=1

Verbose plan:
F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  01:EXCHANGE [UNPARTITIONED]
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable

F00:PLAN FRAGMENT [RANDOM] hosts=3 instances=3
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=01, UNPARTITIONED]
  |  mem-estimate=0B mem-reservation=0B
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     kudu predicates: id = '860US00601'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable

Section PLAN of query:
select * from functional_kudu.zipcode_incomes where id != '1' and zip = '2'

Actual does not match expected result:
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     predicates: id != '1'
     kudu predicates: zip = '2'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=68B cardinality=unavailable
^

Expected:
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     predicates: id != '1'
     kudu predicates: zip = '2'
     mem-estimate=0B mem-reservation=0B
     tuple-ids=0 row-size=124B cardinality=1

Verbose plan:
F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
Per-Host Resources: mem-estimate=0B mem-reservation=0B
  PLAN-ROOT SINK
  |  mem-estimate=0B mem-reservation=0B
  |
  00:SCAN KUDU [functional_kudu.zipcode_incomes]
     predicates: id != '1'
     kudu predicates: 

[jira] [Updated] (IMPALA-7347) Assertion Failure - test_show_create_table

2018-07-25 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7347:
---
Labels: build-failure  (was: )

> Assertion Failure - test_show_create_table 
> ---
>
> Key: IMPALA-7347
> URL: https://issues.apache.org/jira/browse/IMPALA-7347
> Project: IMPALA
>  Issue Type: Test
>Reporter: nithya
>Priority: Major
>  Labels: build-failure
>
> test_show_create_table in metadata/test_show_create_table.py is failing with 
> the following assertion error
> {code}
> metadata/test_show_create_table.py:58: in test_show_create_table
>     unique_database)
> metadata/test_show_create_table.py:106: in __run_show_create_table_test_case
>     self.__compare_result(expected_result, create_table_result)
> metadata/test_show_create_table.py:134: in __compare_result
>     assert expected_tbl_props == actual_tbl_props
> E   assert {} == {'numFilesErasureCoded': '0'}
> E   Right contains more items:
> E   {'numFilesErasureCoded': '0'}
> E   Use -v to get the full diff
> {code}
>  
> It appears that the table property "numFilesErasureCoded" is now showing up
> in the table properties.
> Either the test needs updating, or this is a bug.
>  
> {code}
> h3. Error Message
> metadata/test_show_create_table.py:58: in test_show_create_table
>     unique_database)
> metadata/test_show_create_table.py:106: in __run_show_create_table_test_case
>     self.__compare_result(expected_result, create_table_result)
> metadata/test_show_create_table.py:134: in __compare_result
>     assert expected_tbl_props == actual_tbl_props
> E   assert {} == {'numFilesErasureCoded': '0'}
> E   Right contains more items:
> E   {'numFilesErasureCoded': '0'}
> E   Use -v to get the full diff
> {code}
>  
> ---
> {code}
> h3. Standard Error
> -- connecting to: localhost:21000
> SET sync_ddl=False;
> -- executing against localhost:21000
> DROP DATABASE IF EXISTS `test_show_create_table_f1598d0b` CASCADE;
> SET sync_ddl=False;
> -- executing against localhost:21000
> CREATE DATABASE `test_show_create_table_f1598d0b`;
> MainThread: Created database "test_show_create_table_f1598d0b" for test ID
> "metadata/test_show_create_table.py::TestShowCreateTable::()::test_show_create_table[table_format: text/none]"
> -- executing against localhost:21000
> CREATE TABLE test_show_create_table_f1598d0b.test1 ( id INT ) STORED AS TEXTFILE;
> -- executing against localhost:21000
> show create table test_show_create_table_f1598d0b.test1;
> -- executing against localhost:21000
> drop table test_show_create_table_f1598d0b.test1;
> -- executing against localhost:21000
> CREATE TABLE test_show_create_table_f1598d0b.test1 (id INT) STORED AS TEXTFILE LOCATION
> 'hdfs://localhost:20500/test-warehouse/test_show_create_table_f1598d0b.db/test1';
> -- executing against localhost:21000
> show create table test_show_create_table_f1598d0b.test1;
> -- executing against localhost:21000
> drop table test_show_create_table_f1598d0b.test1;
> -- executing against localhost:21000
> CREATE TABLE test_show_create_table_f1598d0b.test2 ( year INT, month INT,
> id INT COMMENT 'Add a comment', bool_col BOOLEAN, tinyint_col TINYINT,
> smallint_col SMALLINT, int_col INT, bigint_col BIGINT, float_col FLOAT,
> double_col DOUBLE, date_string_col STRING, string_col STRING,
> timestamp_col TIMESTAMP ) STORED AS TEXTFILE;
> -- executing against localhost:21000
> show create table test_show_create_table_f1598d0b.test2;
> -- executing against localhost:21000
> drop table test_show_create_table_f1598d0b.test2;
> -- executing against localhost:21000
> CREATE TABLE test_show_create_table_f1598d0b.test2 (year INT, month INT,
> id INT COMMENT 'Add a comment', bool_col BOOLEAN, tinyint_col TINYINT,
> smallint_col SMALLINT, int_col INT, bigint_col BIGINT, float_col FLOAT,
> double_col DOUBLE, date_string_col STRING, string_col STRING,
> timestamp_col TIMESTAMP) STORED AS TEXTFILE LOCATION
> 'hdfs://localhost:20500/test-warehouse/test_show_create_table_f1598d0b.db/test2';
> -- executing against localhost:21000
> show create table test_show_create_table_f1598d0b.test2;
> -- executing against localhost:21000
> drop table test_show_create_table_f1598d0b.test2;
> -- executing against localhost:21000
> CREATE TABLE test_show_create_table_f1598d0b.test3 ( year INT, month INT,
> id INT COMMENT 'Add a comment', bool_col BOOLEAN, tinyint_col TINYINT,
> smallint_col SMALLINT, int_col INT, bigint_col BIGINT, float_col FLOAT,
> double_col DOUBLE, date_string_col STRING, string_col STRING,
> timestamp_col TIMESTAMP ) PARTITIONED BY ( x INT, y INT, a BOOLEAN )
> COMMENT 'This is a test' STORED AS TEXTFILE;
> -- executing against localhost:21000
> show create table test_show_create_table_f1598d0b.test3;
> -- executing against localhost:21000
> drop table test_show_create_table_f1598d0b.test3;
> -- executing against localhost:21000
> CREATE

[jira] [Created] (IMPALA-7347) Assertion Failure - test_show_create_table

2018-07-25 Thread nithya (JIRA)
nithya created IMPALA-7347:
--

 Summary: Assertion Failure - test_show_create_table 
 Key: IMPALA-7347
 URL: https://issues.apache.org/jira/browse/IMPALA-7347
 Project: IMPALA
  Issue Type: Test
Reporter: nithya


test_show_create_table in metadata/test_show_create_table.py is failing with 
the following assertion error

{code}

metadata/test_show_create_table.py:58: in test_show_create_table
    unique_database)
metadata/test_show_create_table.py:106: in __run_show_create_table_test_case
    self.__compare_result(expected_result, create_table_result)
metadata/test_show_create_table.py:134: in __compare_result
    assert expected_tbl_props == actual_tbl_props
E   assert {} == {'numFilesErasureCoded': '0'}
E   Right contains more items:
E   {'numFilesErasureCoded': '0'}
E   Use -v to get the full diff

{code}

 

It appears that the table property "numFilesErasureCoded" is now showing up in
the table properties.

Either the test needs updating, or this is a bug.
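If the fix lands on the test side, one plausible shape is to drop
environment-dependent bookkeeping properties before comparing; a minimal
sketch (the ignore-set and helper below are assumptions, not the actual test
code):

{code:python}
# Hypothetical sketch: strip bookkeeping properties whose presence or value
# depends on the environment (file counts, sizes, DDL timestamps) before the
# comparison in __compare_result.
IGNORED_TBL_PROPS = {'numFiles', 'numFilesErasureCoded', 'totalSize',
                     'transient_lastDdlTime'}

def filter_tbl_props(tbl_props):
    """Return a copy of tbl_props without environment-dependent entries."""
    return {k: v for k, v in tbl_props.items() if k not in IGNORED_TBL_PROPS}

# The failing comparison would then pass:
assert filter_tbl_props({}) == filter_tbl_props({'numFilesErasureCoded': '0'})
{code}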

 

{code}
h3. Error Message

metadata/test_show_create_table.py:58: in test_show_create_table
    unique_database)
metadata/test_show_create_table.py:106: in __run_show_create_table_test_case
    self.__compare_result(expected_result, create_table_result)
metadata/test_show_create_table.py:134: in __compare_result
    assert expected_tbl_props == actual_tbl_props
E   assert {} == {'numFilesErasureCoded': '0'}
E   Right contains more items:
E   {'numFilesErasureCoded': '0'}
E   Use -v to get the full diff

{code}

 

---

{code}
h3. Standard Error

-- connecting to: localhost:21000
SET sync_ddl=False;
-- executing against localhost:21000
DROP DATABASE IF EXISTS `test_show_create_table_f1598d0b` CASCADE;
SET sync_ddl=False;
-- executing against localhost:21000
CREATE DATABASE `test_show_create_table_f1598d0b`;
MainThread: Created database "test_show_create_table_f1598d0b" for test ID
"metadata/test_show_create_table.py::TestShowCreateTable::()::test_show_create_table[table_format: text/none]"
-- executing against localhost:21000
CREATE TABLE test_show_create_table_f1598d0b.test1 ( id INT ) STORED AS TEXTFILE;
-- executing against localhost:21000
show create table test_show_create_table_f1598d0b.test1;
-- executing against localhost:21000
drop table test_show_create_table_f1598d0b.test1;
-- executing against localhost:21000
CREATE TABLE test_show_create_table_f1598d0b.test1 (id INT) STORED AS TEXTFILE LOCATION
'hdfs://localhost:20500/test-warehouse/test_show_create_table_f1598d0b.db/test1';
-- executing against localhost:21000
show create table test_show_create_table_f1598d0b.test1;
-- executing against localhost:21000
drop table test_show_create_table_f1598d0b.test1;
-- executing against localhost:21000
CREATE TABLE test_show_create_table_f1598d0b.test2 ( year INT, month INT,
id INT COMMENT 'Add a comment', bool_col BOOLEAN, tinyint_col TINYINT,
smallint_col SMALLINT, int_col INT, bigint_col BIGINT, float_col FLOAT,
double_col DOUBLE, date_string_col STRING, string_col STRING,
timestamp_col TIMESTAMP ) STORED AS TEXTFILE;
-- executing against localhost:21000
show create table test_show_create_table_f1598d0b.test2;
-- executing against localhost:21000
drop table test_show_create_table_f1598d0b.test2;
-- executing against localhost:21000
CREATE TABLE test_show_create_table_f1598d0b.test2 (year INT, month INT,
id INT COMMENT 'Add a comment', bool_col BOOLEAN, tinyint_col TINYINT,
smallint_col SMALLINT, int_col INT, bigint_col BIGINT, float_col FLOAT,
double_col DOUBLE, date_string_col STRING, string_col STRING,
timestamp_col TIMESTAMP) STORED AS TEXTFILE LOCATION
'hdfs://localhost:20500/test-warehouse/test_show_create_table_f1598d0b.db/test2';
-- executing against localhost:21000
show create table test_show_create_table_f1598d0b.test2;
-- executing against localhost:21000
drop table test_show_create_table_f1598d0b.test2;
-- executing against localhost:21000
CREATE TABLE test_show_create_table_f1598d0b.test3 ( year INT, month INT,
id INT COMMENT 'Add a comment', bool_col BOOLEAN, tinyint_col TINYINT,
smallint_col SMALLINT, int_col INT, bigint_col BIGINT, float_col FLOAT,
double_col DOUBLE, date_string_col STRING, string_col STRING,
timestamp_col TIMESTAMP ) PARTITIONED BY ( x INT, y INT, a BOOLEAN )
COMMENT 'This is a test' STORED AS TEXTFILE;
-- executing against localhost:21000
show create table test_show_create_table_f1598d0b.test3;
-- executing against localhost:21000
drop table test_show_create_table_f1598d0b.test3;
-- executing against localhost:21000
CREATE TABLE test_show_create_table_f1598d0b.test3 (year INT, month INT,
id INT COMMENT 'Add a comment', bool_col BOOLEAN, tinyint_col TINYINT,
smallint_col SMALLINT, int_col INT, bigint_col BIGINT, float_col FLOAT,
double_col DOUBLE, date_string_col STRING, string_col STRING,
timestamp_col TIMESTAMP) PARTITIONED BY (x INT, y INT, a BOOLEAN)
COMMENT 'This is a test' STORED AS TEXTFILE LOCATION

[jira] [Updated] (IMPALA-7347) Assertion Failure - test_show_create_table

2018-07-25 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7347:
---
Priority: Critical  (was: Major)

> Assertion Failure - test_show_create_table 
> ---
>
> Key: IMPALA-7347
> URL: https://issues.apache.org/jira/browse/IMPALA-7347
> Project: IMPALA
>  Issue Type: Test
>Reporter: nithya
>Priority: Critical
>  Labels: build-failure
>
> test_show_create_table in metadata/test_show_create_table.py is failing with 
> the following assertion error
> {code}
> metadata/test_show_create_table.py:58: in test_show_create_table
>     unique_database)
> metadata/test_show_create_table.py:106: in __run_show_create_table_test_case
>     self.__compare_result(expected_result, create_table_result)
> metadata/test_show_create_table.py:134: in __compare_result
>     assert expected_tbl_props == actual_tbl_props
> E   assert {} == {'numFilesErasureCoded': '0'}
> E   Right contains more items:
> E   {'numFilesErasureCoded': '0'}
> E   Use -v to get the full diff
> {code}
>  
> It appears that the table property "numFilesErasureCoded" is now showing up
> in the table properties.
> Either the test needs updating, or this is a bug.
>  
> {code}
> h3. Error Message
> metadata/test_show_create_table.py:58: in test_show_create_table
>     unique_database)
> metadata/test_show_create_table.py:106: in __run_show_create_table_test_case
>     self.__compare_result(expected_result, create_table_result)
> metadata/test_show_create_table.py:134: in __compare_result
>     assert expected_tbl_props == actual_tbl_props
> E   assert {} == {'numFilesErasureCoded': '0'}
> E   Right contains more items:
> E   {'numFilesErasureCoded': '0'}
> E   Use -v to get the full diff
> {code}
>  
> ---
> {code}
> h3. Standard Error
> -- connecting to: localhost:21000
> SET sync_ddl=False;
> -- executing against localhost:21000
> DROP DATABASE IF EXISTS `test_show_create_table_f1598d0b` CASCADE;
> SET sync_ddl=False;
> -- executing against localhost:21000
> CREATE DATABASE `test_show_create_table_f1598d0b`;
> MainThread: Created database "test_show_create_table_f1598d0b" for test ID
> "metadata/test_show_create_table.py::TestShowCreateTable::()::test_show_create_table[table_format: text/none]"
> -- executing against localhost:21000
> CREATE TABLE test_show_create_table_f1598d0b.test1 ( id INT ) STORED AS TEXTFILE;
> -- executing against localhost:21000
> show create table test_show_create_table_f1598d0b.test1;
> -- executing against localhost:21000
> drop table test_show_create_table_f1598d0b.test1;
> -- executing against localhost:21000
> CREATE TABLE test_show_create_table_f1598d0b.test1 (id INT) STORED AS TEXTFILE LOCATION
> 'hdfs://localhost:20500/test-warehouse/test_show_create_table_f1598d0b.db/test1';
> -- executing against localhost:21000
> show create table test_show_create_table_f1598d0b.test1;
> -- executing against localhost:21000
> drop table test_show_create_table_f1598d0b.test1;
> -- executing against localhost:21000
> CREATE TABLE test_show_create_table_f1598d0b.test2 ( year INT, month INT,
> id INT COMMENT 'Add a comment', bool_col BOOLEAN, tinyint_col TINYINT,
> smallint_col SMALLINT, int_col INT, bigint_col BIGINT, float_col FLOAT,
> double_col DOUBLE, date_string_col STRING, string_col STRING,
> timestamp_col TIMESTAMP ) STORED AS TEXTFILE;
> -- executing against localhost:21000
> show create table test_show_create_table_f1598d0b.test2;
> -- executing against localhost:21000
> drop table test_show_create_table_f1598d0b.test2;
> -- executing against localhost:21000
> CREATE TABLE test_show_create_table_f1598d0b.test2 (year INT, month INT,
> id INT COMMENT 'Add a comment', bool_col BOOLEAN, tinyint_col TINYINT,
> smallint_col SMALLINT, int_col INT, bigint_col BIGINT, float_col FLOAT,
> double_col DOUBLE, date_string_col STRING, string_col STRING,
> timestamp_col TIMESTAMP) STORED AS TEXTFILE LOCATION
> 'hdfs://localhost:20500/test-warehouse/test_show_create_table_f1598d0b.db/test2';
> -- executing against localhost:21000
> show create table test_show_create_table_f1598d0b.test2;
> -- executing against localhost:21000
> drop table test_show_create_table_f1598d0b.test2;
> -- executing against localhost:21000
> CREATE TABLE test_show_create_table_f1598d0b.test3 ( year INT, month INT,
> id INT COMMENT 'Add a comment', bool_col BOOLEAN, tinyint_col TINYINT,
> smallint_col SMALLINT, int_col INT, bigint_col BIGINT, float_col FLOAT,
> double_col DOUBLE, date_string_col STRING, string_col STRING,
> timestamp_col TIMESTAMP ) PARTITIONED BY ( x INT, y INT, a BOOLEAN )
> COMMENT 'This is a test' STORED AS TEXTFILE;
> -- executing against localhost:21000
> show create table test_show_create_table_f1598d0b.test3;
> -- executing against localhost:21000
> drop table test_show_create_table_f1598d0b.test3;
> -- executing against localhost:21000

[jira] [Updated] (IMPALA-7336) Build failure: Backing channel '' is disconnected

2018-07-23 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7336:
---
Summary: Build failure: Backing channel '' is disconnected  (was: 
Build failure: Backing channel '..' is disconnected)

> Build failure: Backing channel '' is disconnected
> ---
>
> Key: IMPALA-7336
> URL: https://issues.apache.org/jira/browse/IMPALA-7336
> Project: IMPALA
>  Issue Type: Bug
>Affects Versions: Impala 3.1.0
>Reporter: nithya
>Priority: Major
>
> The Impala build failed with the following error:
> {code:java}
> FATAL: command execution failed
> java.io.EOFException
>     at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2624)
>     at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3099)
>     at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:853)
>     at java.io.ObjectInputStream.<init>(ObjectInputStream.java:349)
>     at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:48)
>     at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:35)
>     at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:59)
> Caused: java.io.IOException: Unexpected termination of the channel
>     at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:73)
> Caused: java.io.IOException: Backing channel '' is disconnected.
>     at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:192)
>     at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:257)
>     at com.sun.proxy.$Proxy79.isAlive(Unknown Source)
>     at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1138)
>     at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1130)
>     at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)
>     at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)
>     at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
>     at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
>     at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:736)
>     at hudson.model.Build$BuildExecution.build(Build.java:206)
>     at hudson.model.Build$BuildExecution.doRun(Build.java:163)
>     at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:496)
>     at hudson.model.Run.execute(Run.java:1737)
>     at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
>     at hudson.model.ResourceController.execute(ResourceController.java:97)
>     at hudson.model.Executor.run(Executor.java:419)
> Build step 'Execute shell' marked build as failure
> {code}






[jira] [Updated] (IMPALA-7335) Assertion Failure - test_corrupt_files

2018-07-23 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7335:
---
Description: 
test_corrupt_files fails 

 

query_test.test_scanners.TestParquet.test_corrupt_files[exec_option:
{'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0,
'disable_codegen': False, 'abort_on_error': 1, 'debug_action': None,
'exec_single_node_rows_threshold': 0} | table_format: parquet/none] (from
pytest)

 
{code:java}
Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.

STACKTRACE

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 
values, but read 10 values from column id.


Standard Error

-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
SELECT * from bad_parquet_strings_negative_len;

-- executing against localhost:21000
SELECT * from bad_parquet_strings_out_of_bounds;

-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=1;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

{code}
 

 

  was:
test_corrupt_files fails 

 

query_test.test_scanners.TestParquet.test_corrupt_files[exec_option:
{'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0,
'disable_codegen': False, 'abort_on_error': 1, 'debug_action': None,
'exec_single_node_rows_threshold': 0} | table_format: parquet/none] (from
pytest)

 
{code:java}
Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
Stacktrace

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 
values, but read 10 values from column id.




-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing 

[jira] [Updated] (IMPALA-7335) Assertion Failure - test_corrupt_files

2018-07-23 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7335:
---
Description: 
test_corrupt_files fails 

 

query_test.test_scanners.TestParquet.test_corrupt_files[exec_option:
{'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0,
'disable_codegen': False, 'abort_on_error': 1, 'debug_action': None,
'exec_single_node_rows_threshold': 0} | table_format: parquet/none] (from
pytest)

 
{code:java}
Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
Stacktrace

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 
values, but read 10 values from column id.




-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
SELECT * from bad_parquet_strings_negative_len;

-- executing against localhost:21000
SELECT * from bad_parquet_strings_out_of_bounds;

-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=1;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

{code}
 

 

  was:
test_corrupt_files fails 

{code}

query_test.test_scanners.TestParquet.test_corrupt_files[exec_option:
{'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0,
'disable_codegen': False, 'abort_on_error': 1, 'debug_action': None,
'exec_single_node_rows_threshold': 0} | table_format: parquet/none] (from
pytest)

{code}

 

 

 
{code:java}
Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
Stacktrace

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 
values, but read 10 values from column id.




-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- 

[jira] [Updated] (IMPALA-7335) Assertion Failure - test_corrupt_files

2018-07-23 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7335:
---
Description: 
test_corrupt_files fails 

{code}

query_test.test_scanners.TestParquet.test_corrupt_files[exec_option:
{'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0,
'disable_codegen': False, 'abort_on_error': 1, 'debug_action': None,
'exec_single_node_rows_threshold': 0} | table_format: parquet/none] (from
pytest)

{code}

 

 

 
{code:java}
Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
Stacktrace

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 
values, but read 10 values from column id.




-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
SELECT * from bad_parquet_strings_negative_len;

-- executing against localhost:21000
SELECT * from bad_parquet_strings_out_of_bounds;

-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=1;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

{code}
 

 

  was:
test_corrupt_files fails 

{code}

query_test.test_scanners.TestParquet.test_corrupt_files[exec_option:
{'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0,
'disable_codegen': False, 'abort_on_error': 1, 'debug_action': None,
'exec_single_node_rows_threshold': 0} | table_format: parquet/none] (from
pytest)

{code}
{code:java}
Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
Stacktrace

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 
values, but read 10 values from column id.




-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";


[jira] [Updated] (IMPALA-7335) Assertion Failure - test_corrupt_files

2018-07-23 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7335:
---
Description: 
test_corrupt_files fails 

{code}

query_test.test_scanners.TestParquet.test_corrupt_files[exec_option:
{'batch_size': 0, 'num_nodes': 0, 'disable_codegen_rows_threshold': 0,
'disable_codegen': False, 'abort_on_error': 1, 'debug_action': None,
'exec_single_node_rows_threshold': 0} | table_format: parquet/none] (from
pytest)

{code}
{code:java}
Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
Stacktrace

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 
values, but read 10 values from column id.




-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
SELECT * from bad_parquet_strings_negative_len;

-- executing against localhost:21000
SELECT * from bad_parquet_strings_out_of_bounds;

-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=1;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

{code}
 

 

  was:
test_corrupt_files fails 

 
{code:java}
Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
Stacktrace

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 
values, but read 10 values from column id.




-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";


[jira] [Updated] (IMPALA-7335) Assertion Failure - test_corrupt_files

2018-07-23 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7335:
---
Description: 
test_corrupt_files fails 

 
{code:java}
Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
Stacktrace

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 
values, but read 10 values from column id.




-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
SELECT * from bad_parquet_strings_negative_len;

-- executing against localhost:21000
SELECT * from bad_parquet_strings_out_of_bounds;

-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=1;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

{code}
 

 

  was:
test_corrupt_files fails 

ERROR Details

 
{code:java}
Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
Stacktrace

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 
values, but read 10 values from column id.




-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
SELECT * from bad_parquet_strings_negative_len;

-- executing against localhost:21000
SELECT * from bad_parquet_strings_out_of_bounds;

-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET 

[jira] [Updated] (IMPALA-7335) Assertion Failure - test_corrupt_files

2018-07-23 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya updated IMPALA-7335:
---
Description: 
test_corrupt_files fails 

ERROR Details

 
{code:java}
Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
Stacktrace

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 
values, but read 10 values from column id.




-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
SELECT * from bad_parquet_strings_negative_len;

-- executing against localhost:21000
SELECT * from bad_parquet_strings_out_of_bounds;

-- executing against localhost:21000
use functional_parquet;

SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=1;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id, cnt from bad_column_metadata t, (select count(*) cnt from 
t.int_array) v;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

-- executing against localhost:21000
set num_nodes=1;

-- executing against localhost:21000

set num_scanner_threads=1;

-- executing against localhost:21000

select id from bad_column_metadata;

-- executing against localhost:21000
SET NUM_NODES="0";

-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

{code}
 

 

  was:
test_corrupt_files fails 

ERROR Details

 

{code}
h3. Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
h3. Stacktrace

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
h3. Standard Error

-- executing against localhost:21000
use functional_parquet;
SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;
-- executing against localhost:21000
set num_scanner_threads=1;
-- executing against localhost:21000
select id, cnt from bad_column_metadata t, (select count(*) cnt from t.int_array) v;
-- executing against localhost:21000
SET NUM_NODES="0";
-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";
-- executing against localhost:21000
set num_nodes=1;
-- executing against localhost:21000
set num_scanner_threads=1;
-- executing against localhost:21000
select id from bad_column_metadata;
-- executing against localhost:21000
SET NUM_NODES="0";
-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";
-- executing against localhost:21000
SELECT * from bad_parquet_strings_negative_len;
-- executing against localhost:21000
SELECT * from bad_parquet_strings_out_of_bounds;
-- executing against localhost:21000
use functional_parquet;
SET batch_size=0;
SET num_nodes=0;
SET

[jira] [Created] (IMPALA-7335) Assertion Failure - test_corrupt_files

2018-07-23 Thread nithya (JIRA)
nithya created IMPALA-7335:
--

 Summary: Assertion Failure - test_corrupt_files
 Key: IMPALA-7335
 URL: https://issues.apache.org/jira/browse/IMPALA-7335
 Project: IMPALA
  Issue Type: Task
Affects Versions: Impala 3.1.0
Reporter: nithya


test_corrupt_files fails 

ERROR Details

 

{code}
h3. Error Message

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
h3. Stacktrace

query_test/test_scanners.py:300: in test_corrupt_files
    self.run_test_case('QueryTest/parquet-abort-on-error', vector)
common/impala_test_suite.py:420: in run_test_case
    assert False, "Expected exception: %s" % expected_str
E   AssertionError: Expected exception: Column metadata states there are 11 values, but read 10 values from column id.
h3. Standard Error

-- executing against localhost:21000
use functional_parquet;
SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=0;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;
-- executing against localhost:21000
set num_scanner_threads=1;
-- executing against localhost:21000
select id, cnt from bad_column_metadata t, (select count(*) cnt from t.int_array) v;
-- executing against localhost:21000
SET NUM_NODES="0";
-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";
-- executing against localhost:21000
set num_nodes=1;
-- executing against localhost:21000
set num_scanner_threads=1;
-- executing against localhost:21000
select id from bad_column_metadata;
-- executing against localhost:21000
SET NUM_NODES="0";
-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";
-- executing against localhost:21000
SELECT * from bad_parquet_strings_negative_len;
-- executing against localhost:21000
SELECT * from bad_parquet_strings_out_of_bounds;
-- executing against localhost:21000
use functional_parquet;
SET batch_size=0;
SET num_nodes=0;
SET disable_codegen_rows_threshold=0;
SET disable_codegen=False;
SET abort_on_error=1;
SET exec_single_node_rows_threshold=0;
-- executing against localhost:21000
set num_nodes=1;
-- executing against localhost:21000
set num_scanner_threads=1;
-- executing against localhost:21000
select id, cnt from bad_column_metadata t, (select count(*) cnt from t.int_array) v;
-- executing against localhost:21000
SET NUM_NODES="0";
-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";
-- executing against localhost:21000
set num_nodes=1;
-- executing against localhost:21000
set num_scanner_threads=1;
-- executing against localhost:21000
select id from bad_column_metadata;
-- executing against localhost:21000
SET NUM_NODES="0";
-- executing against localhost:21000
SET NUM_SCANNER_THREADS="0";

{code}
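The failing path is the harness's expected-exception check: with
abort_on_error=1 it expects the scan to raise the corrupt-metadata error, and
asserts when the query instead succeeds. A simplified sketch of that pattern
(names reduced for illustration; the real logic is run_test_case in
common/impala_test_suite.py):

{code:python}
# Simplified sketch of the expected-exception path in run_test_case; names
# are reduced for illustration.
def check_expected_exception(execute, query, expected_str):
    try:
        execute(query)
    except Exception as e:
        # A different error than the test expects also fails the test.
        assert expected_str in str(e), str(e)
    else:
        # This report's failure mode: the query succeeded even though
        # abort_on_error=1 should have surfaced the corrupt-metadata error.
        assert False, "Expected exception: %s" % expected_str
{code}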

 

 






[jira] [Closed] (IMPALA-7228) Add tpcds-unmodified to single-node-perf-run

2018-07-13 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya closed IMPALA-7228.
--

> Add tpcds-unmodified to single-node-perf-run
> 
>
> Key: IMPALA-7228
> URL: https://issues.apache.org/jira/browse/IMPALA-7228
> Project: IMPALA
>  Issue Type: Task
>  Components: Perf Investigation
>Affects Versions: Impala 3.1.0
>Reporter: Jim Apple
>Assignee: nithya
>Priority: Minor
>  Labels: newbie
>
> IMPALA-6819 added the tpcds-unmodified workload. This doesn't work with 
> single-node-perf-run yet:
> {noformat}
> Traceback (most recent call last):
>   File "./bin/single_node_perf_run.py", line 334, in <module>
>     main()
>   File "./bin/single_node_perf_run.py", line 324, in main
> perf_ab_test(options, args)
>   File "./bin/single_node_perf_run.py", line 231, in perf_ab_test
> datasets = set([WORKLOAD_TO_DATASET[workload] for workload in workloads])
> KeyError: 'tpcds-unmodified'
> {noformat}
> cc: [~njanarthanan]






[jira] [Resolved] (IMPALA-7228) Add tpcds-unmodified to single-node-perf-run

2018-07-13 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya resolved IMPALA-7228.

Resolution: Fixed

> Add tpcds-unmodified to single-node-perf-run
> 
>
> Key: IMPALA-7228
> URL: https://issues.apache.org/jira/browse/IMPALA-7228
> Project: IMPALA
>  Issue Type: Task
>  Components: Perf Investigation
>Affects Versions: Impala 3.1.0
>Reporter: Jim Apple
>Assignee: nithya
>Priority: Minor
>  Labels: newbie
>
> IMPALA-6819 added the tpcds-unmodified workload. This doesn't work with 
> single-node-perf-run yet:
> {noformat}
> Traceback (most recent call last):
>   File "./bin/single_node_perf_run.py", line 334, in <module>
>     main()
>   File "./bin/single_node_perf_run.py", line 324, in main
> perf_ab_test(options, args)
>   File "./bin/single_node_perf_run.py", line 231, in perf_ab_test
> datasets = set([WORKLOAD_TO_DATASET[workload] for workload in workloads])
> KeyError: 'tpcds-unmodified'
> {noformat}
> cc: [~njanarthanan]






[jira] [Assigned] (IMPALA-7228) Add tpcds-unmodified to single-node-perf-run

2018-07-02 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-7228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya reassigned IMPALA-7228:
--

Assignee: nithya

> Add tpcds-unmodified to single-node-perf-run
> 
>
> Key: IMPALA-7228
> URL: https://issues.apache.org/jira/browse/IMPALA-7228
> Project: IMPALA
>  Issue Type: Task
>  Components: Perf Investigation
>Affects Versions: Impala 3.1.0
>Reporter: Jim Apple
>Assignee: nithya
>Priority: Minor
>  Labels: newbie
>
> IMPALA-6819 added the tpcds-unmodified workload. This doesn't work with 
> single-node-perf-run yet:
> {noformat}
> Traceback (most recent call last):
>   File "./bin/single_node_perf_run.py", line 334, in <module>
>     main()
>   File "./bin/single_node_perf_run.py", line 324, in main
> perf_ab_test(options, args)
>   File "./bin/single_node_perf_run.py", line 231, in perf_ab_test
> datasets = set([WORKLOAD_TO_DATASET[workload] for workload in workloads])
> KeyError: 'tpcds-unmodified'
> {noformat}
> cc: [~njanarthanan]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Commented] (IMPALA-7228) Add tpcds-unmodified to single-node-perf-run

2018-06-29 Thread nithya (JIRA)


[ 
https://issues.apache.org/jira/browse/IMPALA-7228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528367#comment-16528367
 ] 

nithya commented on IMPALA-7228:


[~jbapple] - I think the new workload needs to be added to the 
WORKLOAD_TO_DATASET mapping here before the single-node perf tests can run 
it: 
[https://github.com/apache/impala/blob/master/bin/single_node_perf_run.py#L230]

I can update it if you want to use this workload as part of the single-node 
perf tests.
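
For reference, the failure in the traceback is a plain dict-lookup miss: the 
script maps each workload name to the dataset it needs, and 
"tpcds-unmodified" has no entry yet. A minimal sketch of the shape of the 
fix, assuming the mapping keeps its current form (the existing entries and 
the "tpcds" dataset name for the new workload are assumptions, not the 
actual source; only the failing key is taken from the traceback):

{code:python}
# Sketch only -- mirrors the lookup that raises the KeyError above.
# The dict contents are assumed for illustration.
WORKLOAD_TO_DATASET = {
    "tpch": "tpch",
    "targeted-perf": "tpch",
    "tpcds": "tpcds",
    "tpcds-unmodified": "tpcds",  # proposed new entry (IMPALA-7228)
}

def datasets_for(workloads):
    # Same comprehension as in perf_ab_test(); any workload missing
    # from the mapping raises KeyError.
    return set(WORKLOAD_TO_DATASET[workload] for workload in workloads)

print(datasets_for(["tpcds-unmodified"]))  # -> set(['tpcds'])
{code}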

> Add tpcds-unmodified to single-node-perf-run
> 
>
> Key: IMPALA-7228
> URL: https://issues.apache.org/jira/browse/IMPALA-7228
> Project: IMPALA
>  Issue Type: Task
>  Components: Perf Investigation
>Affects Versions: Impala 3.1.0
>Reporter: Jim Apple
>Priority: Minor
>  Labels: newbie
>
> IMPALA-6819 added the tpcds-unmodified workload. This doesn't work with 
> single-node-perf-run yet:
> {noformat}
> Traceback (most recent call last):
>   File "./bin/single_node_perf_run.py", line 334, in <module>
>     main()
>   File "./bin/single_node_perf_run.py", line 324, in main
>     perf_ab_test(options, args)
>   File "./bin/single_node_perf_run.py", line 231, in perf_ab_test
>     datasets = set([WORKLOAD_TO_DATASET[workload] for workload in workloads])
> KeyError: 'tpcds-unmodified'
> {noformat}
> cc: [~njanarthanan]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Closed] (IMPALA-6976) Script to Parse Query Profiles

2018-06-28 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya closed IMPALA-6976.
--
Resolution: Won't Do

> Script to Parse Query Profiles
> --
>
> Key: IMPALA-6976
> URL: https://issues.apache.org/jira/browse/IMPALA-6976
> Project: IMPALA
>  Issue Type: Task
>Reporter: nithya
>Assignee: nithya
>Priority: Major
>
> Script to parse query profiles and generate a one-line summary for each 
> profile; these summaries can then be used to generate reports.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Resolved] (IMPALA-6819) Add new performance test workloads

2018-06-28 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya resolved IMPALA-6819.

Resolution: Fixed

[https://gerrit.cloudera.org/#/c/9973/]

[https://gerrit.cloudera.org/#/c/9979/]

> Add new performance test workloads 
> ---
>
> Key: IMPALA-6819
> URL: https://issues.apache.org/jira/browse/IMPALA-6819
> Project: IMPALA
>  Issue Type: Task
>  Components: Infrastructure
>Reporter: nithya
>Assignee: nithya
>Priority: Major
>
> Add additional workloads to the impala-asf repo.
> Workloads that will be added:
> {code:java}
> [targeted-perf]
> [tpcds-unmodified]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Closed] (IMPALA-6819) Add new performance test workloads

2018-06-28 Thread nithya (JIRA)


 [ 
https://issues.apache.org/jira/browse/IMPALA-6819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nithya closed IMPALA-6819.
--

> Add new performance test workloads 
> ---
>
> Key: IMPALA-6819
> URL: https://issues.apache.org/jira/browse/IMPALA-6819
> Project: IMPALA
>  Issue Type: Task
>  Components: Infrastructure
>Reporter: nithya
>Assignee: nithya
>Priority: Major
>
> Add additional workloads to the impala-asf repo.
> Workloads that will be added:
> {code:java}
> [targeted-perf]
> [tpcds-unmodified]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org



[jira] [Created] (IMPALA-6976) Script to Parse Query Profiles

2018-05-04 Thread nithya (JIRA)
nithya created IMPALA-6976:
--

 Summary: Script to Parse Query Profiles
 Key: IMPALA-6976
 URL: https://issues.apache.org/jira/browse/IMPALA-6976
 Project: IMPALA
  Issue Type: Task
Reporter: nithya
Assignee: nithya


Script to parse query profiles and generate a one-line summary for each 
profile; these summaries can then be used to generate reports.
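
A minimal sketch of what such a summarizer could look like; the profile 
markers matched below ("Query (id=...)", "Sql Statement:", "Duration:") and 
the tab-separated output format are hypothetical, since this issue does not 
pin down the profile layout:

{code:python}
import re
import sys

# Hypothetical sketch: reduce each query profile in a text file to one
# tab-separated summary line (query id, statement, duration). The
# regexes assume markers of the form "Query (id=...)", "Sql Statement:"
# and "Duration:"; adjust them to the real profile format.
def summarize_profile(text):
    query_id = re.search(r"Query \(id=([0-9a-f:]+)\)", text)
    stmt = re.search(r"Sql Statement: (.+)", text)
    duration = re.search(r"Duration: (\S+)", text)
    fields = [m.group(1).strip() if m else "unknown"
              for m in (query_id, stmt, duration)]
    return "\t".join(fields)

if __name__ == "__main__":
    # Usage: python summarize_profiles.py profile1.txt profile2.txt ...
    for path in sys.argv[1:]:
        with open(path) as f:
            print(summarize_profile(f.read()))
{code}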

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-all-unsubscr...@impala.apache.org
For additional commands, e-mail: issues-all-h...@impala.apache.org