[jira] [Updated] (HIVE-15708) Upgrade calcite version to 1.12

2017-03-03 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-15708:

Attachment: HIVE-15708.15.patch

.15.patch is identical to .14.patch; re-uploading it to trigger another test run 
(the last failure was infra running out of disk).

> Upgrade calcite version to 1.12
> ---
>
> Key: HIVE-15708
> URL: https://issues.apache.org/jira/browse/HIVE-15708
> Project: Hive
>  Issue Type: Task
>  Components: CBO, Logical Optimizer
>Affects Versions: 2.2.0
>Reporter: Ashutosh Chauhan
>Assignee: Remus Rusanu
> Attachments: HIVE-15708.01.patch, HIVE-15708.02.patch, 
> HIVE-15708.03.patch, HIVE-15708.04.patch, HIVE-15708.05.patch, 
> HIVE-15708.06.patch, HIVE-15708.07.patch, HIVE-15708.08.patch, 
> HIVE-15708.09.patch, HIVE-15708.10.patch, HIVE-15708.11.patch, 
> HIVE-15708.12.patch, HIVE-15708.13.patch, HIVE-15708.14.patch, 
> HIVE-15708.15.patch
>
>
> Currently we are on 1.10. Need to upgrade the Calcite version to 1.12.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16104) LLAP: preemption may be too aggressive if the pre-empted task doesn't die immediately

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895542#comment-15895542
 ] 

Hive QA commented on HIVE-16104:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855954/HIVE-16104.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 26 failed/errored test(s), 10285 tests 
executed
*Failed tests:*
{noformat}
TestCommandProcessorFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=272)
TestDbTxnManager - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
TestDummyTxnManager - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
TestHiveInputSplitComparator - did not produce a TEST-*.xml file (likely timed 
out) (batchId=272)
TestIndexType - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
TestSplitFilter - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=229)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_table]
 (batchId=147)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=140)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vector_between_in] 
(batchId=119)
org.apache.hive.beeline.TestSchemaTool.testNestedScriptsForDerby (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testNestedScriptsForMySQL (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testNestedScriptsForOracle (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testPostgresFilter (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testSchemaInit (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testSchemaInitDryRun (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgradeDryRun (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testScriptMultiRowComment (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testScriptWithDelimiter (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testScripts (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateLocations (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateNullValues (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateSchemaTables (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateSchemaVersions (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateSequences (batchId=212)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3934/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3934/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3934/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 26 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12855954 - PreCommit-HIVE-Build

> LLAP: preemption may be too aggressive if the pre-empted task doesn't die 
> immediately
> -
>
> Key: HIVE-16104
> URL: https://issues.apache.org/jira/browse/HIVE-16104
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16104.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-1010) Implement INFORMATION_SCHEMA in Hive

2017-03-03 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-1010:
-
Attachment: HIVE-1010.9.patch

> Implement INFORMATION_SCHEMA in Hive
> 
>
> Key: HIVE-1010
> URL: https://issues.apache.org/jira/browse/HIVE-1010
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Query Processor, Server Infrastructure
>Reporter: Jeff Hammerbacher
>Assignee: Gunther Hagleitner
> Attachments: HIVE-1010.7.patch, HIVE-1010.8.patch, HIVE-1010.9.patch
>
>
> INFORMATION_SCHEMA is part of the SQL92 standard and would be useful to 
> implement using our metastore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-1010) Implement INFORMATION_SCHEMA in Hive

2017-03-03 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-1010:
-
Status: Patch Available  (was: Open)

> Implement INFORMATION_SCHEMA in Hive
> 
>
> Key: HIVE-1010
> URL: https://issues.apache.org/jira/browse/HIVE-1010
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Query Processor, Server Infrastructure
>Reporter: Jeff Hammerbacher
>Assignee: Gunther Hagleitner
> Attachments: HIVE-1010.7.patch, HIVE-1010.8.patch, HIVE-1010.9.patch
>
>
> INFORMATION_SCHEMA is part of the SQL92 standard and would be useful to 
> implement using our metastore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-1010) Implement INFORMATION_SCHEMA in Hive

2017-03-03 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-1010:
-
Attachment: (was: HIVE-1010.6.patch)

> Implement INFORMATION_SCHEMA in Hive
> 
>
> Key: HIVE-1010
> URL: https://issues.apache.org/jira/browse/HIVE-1010
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Query Processor, Server Infrastructure
>Reporter: Jeff Hammerbacher
>Assignee: Gunther Hagleitner
> Attachments: HIVE-1010.7.patch, HIVE-1010.8.patch, HIVE-1010.9.patch
>
>
> INFORMATION_SCHEMA is part of the SQL92 standard and would be useful to 
> implement using our metastore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-1010) Implement INFORMATION_SCHEMA in Hive

2017-03-03 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-1010:
-
Attachment: (was: HIVE-1010.5.patch)

> Implement INFORMATION_SCHEMA in Hive
> 
>
> Key: HIVE-1010
> URL: https://issues.apache.org/jira/browse/HIVE-1010
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Query Processor, Server Infrastructure
>Reporter: Jeff Hammerbacher
>Assignee: Gunther Hagleitner
> Attachments: HIVE-1010.7.patch, HIVE-1010.8.patch, HIVE-1010.9.patch
>
>
> INFORMATION_SCHEMA is part of the SQL92 standard and would be useful to 
> implement using our metastore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-1010) Implement INFORMATION_SCHEMA in Hive

2017-03-03 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-1010:
-
Status: Open  (was: Patch Available)

> Implement INFORMATION_SCHEMA in Hive
> 
>
> Key: HIVE-1010
> URL: https://issues.apache.org/jira/browse/HIVE-1010
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Query Processor, Server Infrastructure
>Reporter: Jeff Hammerbacher
>Assignee: Gunther Hagleitner
> Attachments: HIVE-1010.5.patch, HIVE-1010.6.patch, HIVE-1010.7.patch, 
> HIVE-1010.8.patch
>
>
> INFORMATION_SCHEMA is part of the SQL92 standard and would be useful to 
> implement using our metastore.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-15857) Vectorization: Add string conversion case for UDFToInteger, etc

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895520#comment-15895520
 ] 

Hive QA commented on HIVE-15857:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855959/HIVE-15857.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3933/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3933/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3933/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Tests exited with: ExecutionException: java.util.concurrent.ExecutionException: 
java.io.IOException: Could not create 
/data/hiveptest/logs/PreCommit-HIVE-Build-3933/succeeded/28-TestCliDriver-input11_limit.q-nonreserved_keywords_input37.q-partition_char.q-and-27-more
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12855959 - PreCommit-HIVE-Build

> Vectorization: Add string conversion case for UDFToInteger, etc
> ---
>
> Key: HIVE-15857
> URL: https://issues.apache.org/jira/browse/HIVE-15857
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-15857.01.patch, HIVE-15857.02.patch, 
> HIVE-15857.03.patch
>
>
> Otherwise, VectorUDFAdaptor is used to convert a column from String to Int, 
> etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-15947) Enhance Templeton service job operations reliability

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895516#comment-15895516
 ] 

Hive QA commented on HIVE-15947:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855957/HIVE-15947.4.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 27 failed/errored test(s), 10300 tests 
executed
*Failed tests:*
{noformat}
TestCommandProcessorFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=272)
TestDbTxnManager - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
TestDummyTxnManager - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
TestHiveInputSplitComparator - did not produce a TEST-*.xml file (likely timed 
out) (batchId=272)
TestIndexType - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
TestSplitFilter - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=229)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_table]
 (batchId=147)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] 
(batchId=224)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] 
(batchId=224)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vector_between_in] 
(batchId=119)
org.apache.hive.beeline.TestSchemaTool.testNestedScriptsForDerby (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testNestedScriptsForMySQL (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testNestedScriptsForOracle (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testPostgresFilter (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testSchemaInit (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testSchemaInitDryRun (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgradeDryRun (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testScriptMultiRowComment (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testScriptWithDelimiter (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testScripts (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateLocations (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateNullValues (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateSchemaTables (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateSchemaVersions (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateSequences (batchId=212)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3932/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3932/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3932/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 27 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12855957 - PreCommit-HIVE-Build

> Enhance Templeton service job operations reliability
> 
>
> Key: HIVE-15947
> URL: https://issues.apache.org/jira/browse/HIVE-15947
> Project: Hive
>  Issue Type: Bug
>Reporter: Subramanyam Pattipaka
>Assignee: Subramanyam Pattipaka
> Attachments: HIVE-15947.2.patch, HIVE-15947.3.patch, 
> HIVE-15947.4.patch, HIVE-15947.patch
>
>
> Currently the Templeton service doesn't restrict the number of job operation 
> requests; it simply accepts and tries to run all of them. If many concurrent 
> job submit requests come in, the time to complete job operations can increase 
> significantly. Templeton uses HDFS to store the staging file for a job. If 
> HDFS can't respond to a large number of requests and throttles, job 
> submission can take very long, on the order of minutes.
> This behavior may not be suitable for all applications; client applications 
> may expect a predictable, low-latency response for a successful request, or a 
> throttle response telling them to wait before re-requesting the job 
> operation.
> In this JIRA, I am trying to address the following job operations: 
> 1) Submit new Job
> 2) Get Job Status
> 3) List jobs
> These three operations have different complexities due to their varying use 
> of cluster resources such as YARN and HDFS.
> The idea is to introduce a new config, templeton.job.submit.exec.max-procs, 
> which controls the maximum number of concurrent active job submissions within 
> Templeton, and use 
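The cap described in the (truncated) proposal above can be sketched with a counting semaphore. This is an illustrative sketch under stated assumptions, not Templeton's actual implementation; the class and method names here are invented:

```java
import java.util.concurrent.Semaphore;

// Sketch only: a counting semaphore caps the number of concurrently running
// submissions, mirroring the proposed templeton.job.submit.exec.max-procs
// limit. A request that finds no free slot fails fast so the caller can send
// a throttle response instead of queueing against an overloaded HDFS.
public class SubmitThrottle {
    private final Semaphore slots;

    public SubmitThrottle(int maxProcs) {
        this.slots = new Semaphore(maxProcs);
    }

    /** Runs the job if a slot is free; returns false to signal "retry later". */
    public boolean trySubmit(Runnable job) {
        if (!slots.tryAcquire()) {
            return false;        // over the cap: tell the client to back off
        }
        try {
            job.run();
            return true;
        } finally {
            slots.release();     // free the slot for the next submission
        }
    }

    public static void main(String[] args) {
        SubmitThrottle throttle = new SubmitThrottle(2);
        System.out.println(throttle.trySubmit(() -> {}));  // true: slot available
    }
}
```

A real server would hold the permit for the lifetime of the asynchronous submission rather than a synchronous `run()`, but the admission-control idea is the same.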

[jira] [Commented] (HIVE-16078) improve abort checking in Tez/LLAP

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895483#comment-15895483
 ] 

Hive QA commented on HIVE-16078:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855952/HIVE-16078.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3931/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3931/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3931/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Tests exited with: ExecutionException: java.util.concurrent.ExecutionException: 
org.apache.hive.ptest.execution.ssh.SSHExecutionException: RSyncResult 
[localFile=/data/hiveptest/logs/PreCommit-HIVE-Build-3931/failed/272_UTBatch_ql_10_tests,
 remoteFile=/home/hiveptest/104.154.128.27-hiveptest-0/logs/, getExitCode()=11, 
getException()=null, getUser()=hiveptest, getHost()=104.154.128.27, 
getInstance()=0]: 'Warning: Permanently added '104.154.128.27' (ECDSA) to the 
list of known hosts.
receiving incremental file list
./
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.TestTxnCommands.xml

  0   0%0.00kB/s0:00:00  
  8,238 100%7.86MB/s0:00:00 (xfr#1, to-chk=9/11)
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.lockmgr.TestEmbeddedLockManager.xml

  0   0%0.00kB/s0:00:00  
  5,265 100%5.02MB/s0:00:00 (xfr#2, to-chk=8/11)
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.lockmgr.TestHiveLockObject.xml

  0   0%0.00kB/s0:00:00  
  5,265 100%5.02MB/s0:00:00 (xfr#3, to-chk=7/11)
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.lockmgr.zookeeper.TestZookeeperLockManager.xml

  0   0%0.00kB/s0:00:00  
  5,558 100%5.30MB/s0:00:00 (xfr#4, to-chk=6/11)
maven-test.txt

  0   0%0.00kB/s0:00:00  
  6,509 100%6.21MB/s0:00:00 (xfr#5, to-chk=5/11)
logs/
logs/derby.log

  0   0%0.00kB/s0:00:00  
974 100%  951.17kB/s0:00:00 (xfr#6, to-chk=2/11)
logs/hive.log

  [... rsync progress output for logs/hive.log trimmed: the transfer was still 
below 10% complete, at roughly 50-55 MB/s, when the message was truncated ...]

[jira] [Resolved] (HIVE-16112) Sync TestHiveMetastoreChecker unit tests with the configuration change in HIVE-16014

2017-03-03 Thread Kiran Kumar Kolli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Kolli resolved HIVE-16112.
--
Resolution: Duplicate

> Sync TestHiveMetastoreChecker unit tests with the configuration change in 
> HIVE-16014
> 
>
> Key: HIVE-16112
> URL: https://issues.apache.org/jira/browse/HIVE-16112
> Project: Hive
>  Issue Type: Bug
>Reporter: Kiran Kumar Kolli
>Assignee: Kiran Kumar Kolli
> Fix For: 2.2.0
>
> Attachments: HIVE-16112.01.patch
>
>
> HIVE-16014 changed the name of the Hive configuration setting that defines 
> the pool size. The change is not reflected in the unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (HIVE-16112) Sync TestHiveMetastoreChecker unit tests with the configuration change in HIVE-16014

2017-03-03 Thread Kiran Kumar Kolli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Kolli reopened HIVE-16112:
--

It's showing as resolved; re-opening it to mark it as a duplicate.

> Sync TestHiveMetastoreChecker unit tests with the configuration change in 
> HIVE-16014
> 
>
> Key: HIVE-16112
> URL: https://issues.apache.org/jira/browse/HIVE-16112
> Project: Hive
>  Issue Type: Bug
>Reporter: Kiran Kumar Kolli
>Assignee: Kiran Kumar Kolli
> Fix For: 2.2.0
>
> Attachments: HIVE-16112.01.patch
>
>
> HIVE-16014 changed the name of the Hive configuration setting that defines 
> the pool size. The change is not reflected in the unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16112) Sync TestHiveMetastoreChecker unit tests with the configuration change in HIVE-16014

2017-03-03 Thread Kiran Kumar Kolli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Kolli updated HIVE-16112:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Duplicate of HIVE-16090.

> Sync TestHiveMetastoreChecker unit tests with the configuration change in 
> HIVE-16014
> 
>
> Key: HIVE-16112
> URL: https://issues.apache.org/jira/browse/HIVE-16112
> Project: Hive
>  Issue Type: Bug
>Reporter: Kiran Kumar Kolli
>Assignee: Kiran Kumar Kolli
> Fix For: 2.2.0
>
> Attachments: HIVE-16112.01.patch
>
>
> HIVE-16014 changed the name of the Hive configuration setting that defines 
> the pool size. The change is not reflected in the unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16106) Upgrade to Datanucleus 4.2.12

2017-03-03 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-16106:

Affects Version/s: 2.1.1

> Upgrade to Datanucleus 4.2.12
> -
>
> Key: HIVE-16106
> URL: https://issues.apache.org/jira/browse/HIVE-16106
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-16106.1.patch
>
>
> As described in HIVE-14698, the datanucleus-rdbms package that we currently 
> have (4.1.7) has a bug which generates incorrect syntax for MS SQL Server. 
> The bug has been fixed in later releases. HIVE-14698 was a workaround for 
> Hive, but since DN has the fix in its 4.2.x line, we should pick it up from 
> there.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16106) Upgrade to Datanucleus 4.2.12

2017-03-03 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-16106:

Description: As described in HIVE-14698, the datanucleus-rdbms package that 
we currently have (4.1.7) has a bug which generates incorrect syntax for MS SQL 
Server. The bug has been fixed in later releases. HIVE-14698 was a workaround 
for Hive, but since DN has the fix in its 4.2.x line, we should pick it up from 
there  (was: As described in HIVE-14698, the datanucleus-rdbms package that we 
currently have (4.1.7) has a bug which generates incorrect syntax for MS SQL 
Server. The bug has been fixed in later releases. HIVE-14698 was a workaround 
for Hive, but since DN has the fix in its 4.1.x line, we should pick it up from 
there)

> Upgrade to Datanucleus 4.2.12
> -
>
> Key: HIVE-16106
> URL: https://issues.apache.org/jira/browse/HIVE-16106
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-16106.1.patch
>
>
> As described in HIVE-14698, the datanucleus-rdbms package that we currently 
> have (4.1.7) has a bug which generates incorrect syntax for MS SQL Server. 
> The bug has been fixed in later releases. HIVE-14698 was a workaround for 
> Hive, but since DN has the fix in its 4.2.x line, we should pick it up from 
> there.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16106) Upgrade to Datanucleus 4.2.12

2017-03-03 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895455#comment-15895455
 ] 

Vaibhav Gumashta commented on HIVE-16106:
-

[~sushanth] Can you please take a look? Thanks

> Upgrade to Datanucleus 4.2.12
> -
>
> Key: HIVE-16106
> URL: https://issues.apache.org/jira/browse/HIVE-16106
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-16106.1.patch
>
>
> As described in HIVE-14698, the datanucleus-rdbms package that we currently 
> have (4.1.7) has a bug which generates incorrect syntax for MS SQL Server. 
> The bug has been fixed in later releases. HIVE-14698 was a workaround for 
> Hive, but since DN has the fix in its 4.1.x line, we should pick it up from 
> there.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16092) Generate and use universal mmId instead of per db/table

2017-03-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895447#comment-15895447
 ] 

Sergey Shelukhin commented on HIVE-16092:
-

I mean, eventually we won't be able to get out of having the global txnid, 
right? So might as well do it now. Do you have some existing code in particular 
that would break when that is done, or some specific future scenario?

> Generate and use universal mmId instead of per db/table
> ---
>
> Key: HIVE-16092
> URL: https://issues.apache.org/jira/browse/HIVE-16092
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>
> To facilitate later replacement for it with txnId



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16109) TestDbTxnManager generates a huge hive.log

2017-03-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895444#comment-15895444
 ] 

Sergey Shelukhin commented on HIVE-16109:
-

+1 I'm pretty sure logging could be reduced too :)

> TestDbTxnManager generates a huge hive.log
> --
>
> Key: HIVE-16109
> URL: https://issues.apache.org/jira/browse/HIVE-16109
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HIVE-16109.01.patch
>
>
> Pre-commit jobs are currently failing due to running out of disk space. The 
> issue is caused by the huge hive.log produced when TestDbTxnManager fails or 
> times out; when that happens, Ptest persists the logs for debugging. Since 
> this test has been timing out frequently, these log files accumulate and 
> eventually the Ptest server runs out of disk space. Each run of 
> TestDbTxnManager generates ~30 GB of hive.log; I tried running it locally 
> and it quickly reached 7 GB before I had to cancel it.
> The issue seems to be coming from this code block in TxnHandler.java
> {noformat}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Locks to check(full): ");
>   for (LockInfo info : locks) {
>     LOG.debug("  " + info);
>   }
> }
> {noformat}
> We should either change it to trace, or change the log level for this test 
> to INFO, so that it generates smaller log files.
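The trace-level variant of the fix suggested above can be sketched as follows. This is an illustration only: `java.util.logging` stands in for Hive's actual logger, and the class and helper names are invented. Gating the per-lock dump on the trace (FINEST) level means a normal DEBUG run emits nothing per lock:

```java
import java.util.Arrays;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: the per-lock dump is gated on TRACE/FINEST instead of DEBUG.
// Returns the number of per-lock lines logged so the behavior is checkable.
public class LockLogSketch {
    private static final Logger LOG = Logger.getLogger(LockLogSketch.class.getName());

    static int checkLocks(List<String> locks) {
        int logged = 0;
        if (LOG.isLoggable(Level.FINEST)) {   // was: LOG.isDebugEnabled()
            LOG.finest("Locks to check(full): ");
            for (String info : locks) {
                LOG.finest("  " + info);
                logged++;
            }
        }
        return logged;
    }

    public static void main(String[] args) {
        // At the default INFO level, no per-lock lines are emitted.
        System.out.println(checkLocks(Arrays.asList("lock-1", "lock-2")));
    }
}
```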



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16106) Upgrade to Datanucleus 4.2.12

2017-03-03 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-16106:

Status: Patch Available  (was: Open)

> Upgrade to Datanucleus 4.2.12
> -
>
> Key: HIVE-16106
> URL: https://issues.apache.org/jira/browse/HIVE-16106
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-16106.1.patch
>
>
> As described in HIVE-14698, the datanucleus-rdbms package that we currently 
> have (4.1.7) has a bug which generates incorrect syntax for MS SQL Server. 
> The bug has been fixed in later releases. HIVE-14698 was a workaround for 
> Hive, but since DN has the fix in its 4.1.x line, we should pick it up from 
> there.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16106) Upgrade to Datanucleus 4.2.12

2017-03-03 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-16106:

Attachment: HIVE-16106.1.patch

> Upgrade to Datanucleus 4.2.12
> -
>
> Key: HIVE-16106
> URL: https://issues.apache.org/jira/browse/HIVE-16106
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-16106.1.patch
>
>
> As described in HIVE-14698, the datanucleus-rdbms package that we currently 
> have (4.1.7) has a bug which generates incorrect syntax for MS SQL Server. 
> The bug has been fixed in later releases. HIVE-14698 was a workaround for 
> Hive, but since DN has the fix in its 4.1.x line, we should pick it up from 
> there.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HIVE-16109) TestDbTxnManager generates a huge hive.log

2017-03-03 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895403#comment-15895403
 ] 

Vihang Karajgaonkar edited comment on HIVE-16109 at 3/4/17 2:23 AM:


Looked into this a little deeper and realized that HIVE-13335 changed 
{{TxnStore.TIMED_OUT_TXN_ABORT_BATCH_SIZE}} from 1000 to 5. The test 
{{TestDbTxnManager.testLockTimeout}} uses this value to create 50017 locks 
and then waits for some time so that they expire. This causes the test batch 
to time out: the batch timeout is currently 40 min, and the test takes much 
longer than that.


was (Author: vihangk1):
Looked into this a little deeper and realized that HIVE-13335 changed 
{{TxnStore.TIMED_OUT_TXN_ABORT_BATCH_SIZE}} from 1000 to 5. The test 
{{TestDbTxnManager.testLockTimeout}} uses this value to create 50017 locks and 
waits for 5 min so that they expire. This causes the test batch to time out, 
since the run takes a very long time: the batch timeout is currently 40 min, 
and the test takes far longer than that.

> TestDbTxnManager generates a huge hive.log
> --
>
> Key: HIVE-16109
> URL: https://issues.apache.org/jira/browse/HIVE-16109
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HIVE-16109.01.patch
>
>
> Pre-commit jobs are currently failing due to running out of disk space. The 
> issue is caused by the huge hive.log produced when the TestDbTxnManager test 
> fails or times out. When this test fails or times out, Ptest tries to persist 
> these logs for debugging. Since this test has been timing out frequently, a 
> lot of these log files accumulate and eventually the Ptest server runs out of 
> disk space. Each run of TestDbTxnManager generates ~30 GB of hive.log; I tried 
> to run it locally and it quickly reached 7 GB before I had to cancel it.
> The issue seems to be coming from this code block in TxnHandler.java
> {noformat}
> if(LOG.isDebugEnabled()) {
> LOG.debug("Locks to check(full): ");
> for(LockInfo info : locks) {
>   LOG.debug("  " + info);
> }
>   }
> {noformat}
> We should either change it to trace or change the log level of this test to 
> INFO so that it generates smaller log files.
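To make the proposed fix concrete, here is a minimal, self-contained sketch of the trace-guard idea using `java.util.logging` (the class and method names are illustrative, and Hive's actual TxnHandler uses its own logging setup, so this shows the pattern only, not Hive code):

```java
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LockDumpSketch {
    private static final Logger LOG = Logger.getLogger(LockDumpSketch.class.getName());

    // Emits one log line per lock only when the TRACE-like FINEST level is
    // enabled; at a DEBUG-like FINE level the whole per-lock loop is skipped,
    // so routine debug runs no longer produce one line per lock.
    static int dumpLocks(List<String> locks) {
        int emitted = 0;
        if (LOG.isLoggable(Level.FINEST)) {
            LOG.finest("Locks to check(full): ");
            for (String info : locks) {
                LOG.finest("  " + info);
                emitted++;
            }
        }
        return emitted;
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.FINE);   // DEBUG-like: per-lock dump skipped
        System.out.println(dumpLocks(List.of("l1", "l2", "l3"))); // prints 0
        LOG.setLevel(Level.FINEST); // TRACE-like: dump runs
        System.out.println(dumpLocks(List.of("l1", "l2", "l3"))); // prints 3
    }
}
```

The guard keeps the cost of a disabled level close to zero: with the dump behind FINEST, a run at FINE pays only the `isLoggable` check instead of formatting 50017 lock lines.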





[jira] [Updated] (HIVE-16106) Upgrade to Datanucleus 4.2.12

2017-03-03 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-16106:

Summary: Upgrade to Datanucleus 4.2.12  (was: Upgrade to Datanucleus 4.1.17)

> Upgrade to Datanucleus 4.2.12
> -
>
> Key: HIVE-16106
> URL: https://issues.apache.org/jira/browse/HIVE-16106
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>
> As described in HIVE-14698, the datanucleus-rdbms package that we currently 
> have (4.1.7) has a bug that generates incorrect syntax for MS SQL 
> Server. The bug has been fixed in later releases. HIVE-14698 was a workaround 
> in Hive, but since DN has the fix in its 4.1.x line, we should pick it up from 
> there.





[jira] [Commented] (HIVE-16112) Sync TestHiveMetastoreChecker unit tests with the configuration change in HIVE-16014

2017-03-03 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895436#comment-15895436
 ] 

Vihang Karajgaonkar commented on HIVE-16112:


This is already reported and being worked on in 
https://issues.apache.org/jira/browse/HIVE-16090

> Sync TestHiveMetastoreChecker unit tests with the configuration change in 
> HIVE-16014
> 
>
> Key: HIVE-16112
> URL: https://issues.apache.org/jira/browse/HIVE-16112
> Project: Hive
>  Issue Type: Bug
>Reporter: Kiran Kumar Kolli
>Assignee: Kiran Kumar Kolli
> Fix For: 2.2.0
>
> Attachments: HIVE-16112.01.patch
>
>
> HIVE-16014 changed the hive configuration setting name which defines the pool 
> size. The changes are not reflected in unit tests. 





[jira] [Comment Edited] (HIVE-16109) TestDbTxnManager generates a huge hive.log

2017-03-03 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895427#comment-15895427
 ] 

Vihang Karajgaonkar edited comment on HIVE-16109 at 3/4/17 2:23 AM:


Attaching the patch to fix the test. We don't really need to change the debug 
message in TxnHandler to log lockInfo to trace. Currently the log file 
generated is 80 MB and the runtime of the test is ~110 sec. Both the disk space 
and runtime are reasonable.


was (Author: vihangk1):
Attaching the patch to fix the test. We don't really need to change the debug 
message in TxnManager to log lockInfo to trace. Currently the log file 
generated is 80 MB and the runtime of the test is ~110 sec. Both the disk space 
and runtime are reasonable.

> TestDbTxnManager generates a huge hive.log
> --
>
> Key: HIVE-16109
> URL: https://issues.apache.org/jira/browse/HIVE-16109
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HIVE-16109.01.patch
>
>
> Pre-commit jobs are currently failing due to running out of disk space. The 
> issue is caused by the huge hive.log produced when the TestDbTxnManager test 
> fails or times out. When this test fails or times out, Ptest tries to persist 
> these logs for debugging. Since this test has been timing out frequently, a 
> lot of these log files accumulate and eventually the Ptest server runs out of 
> disk space. Each run of TestDbTxnManager generates ~30 GB of hive.log; I tried 
> to run it locally and it quickly reached 7 GB before I had to cancel it.
> The issue seems to be coming from this code block in TxnHandler.java
> {noformat}
> if(LOG.isDebugEnabled()) {
> LOG.debug("Locks to check(full): ");
> for(LockInfo info : locks) {
>   LOG.debug("  " + info);
> }
>   }
> {noformat}
> We should either change it to trace or change the log level of this test to 
> INFO so that it generates smaller log files.





[jira] [Comment Edited] (HIVE-16109) TestDbTxnManager generates a huge hive.log

2017-03-03 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895427#comment-15895427
 ] 

Vihang Karajgaonkar edited comment on HIVE-16109 at 3/4/17 2:24 AM:


Attaching the patch to fix the test. We don't really need to change the debug 
message in TxnHandler to log lockInfo to trace. With this patch the log file 
generated is 80 MB and the runtime of the test is ~110 sec. Both the disk space 
and runtime are reasonable.


was (Author: vihangk1):
Attaching the patch to fix the test. We don't really need to change the debug 
message in TxnHandler to log lockInfo to trace. Currently the log file 
generated is 80 MB and the runtime of the test is ~110 sec. Both the disk space 
and runtime are reasonable.

> TestDbTxnManager generates a huge hive.log
> --
>
> Key: HIVE-16109
> URL: https://issues.apache.org/jira/browse/HIVE-16109
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HIVE-16109.01.patch
>
>
> Pre-commit jobs are currently failing due to running out of disk space. The 
> issue is caused by the huge hive.log produced when the TestDbTxnManager test 
> fails or times out. When this test fails or times out, Ptest tries to persist 
> these logs for debugging. Since this test has been timing out frequently, a 
> lot of these log files accumulate and eventually the Ptest server runs out of 
> disk space. Each run of TestDbTxnManager generates ~30 GB of hive.log; I tried 
> to run it locally and it quickly reached 7 GB before I had to cancel it.
> The issue seems to be coming from this code block in TxnHandler.java
> {noformat}
> if(LOG.isDebugEnabled()) {
> LOG.debug("Locks to check(full): ");
> for(LockInfo info : locks) {
>   LOG.debug("  " + info);
> }
>   }
> {noformat}
> We should either change it to trace or change the log level of this test to 
> INFO so that it generates smaller log files.





[jira] [Commented] (HIVE-16109) TestDbTxnManager generates a huge hive.log

2017-03-03 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895433#comment-15895433
 ] 

Vihang Karajgaonkar commented on HIVE-16109:


Just to give one more data point: if we change the debug message to trace, 
hive.log shrinks from 80 MB to 4 MB. So it may be worth considering changing 
that too. [~ekoifman] Can you please review?

> TestDbTxnManager generates a huge hive.log
> --
>
> Key: HIVE-16109
> URL: https://issues.apache.org/jira/browse/HIVE-16109
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HIVE-16109.01.patch
>
>
> Pre-commit jobs are currently failing due to running out of disk space. The 
> issue is caused by the huge hive.log produced when the TestDbTxnManager test 
> fails or times out. When this test fails or times out, Ptest tries to persist 
> these logs for debugging. Since this test has been timing out frequently, a 
> lot of these log files accumulate and eventually the Ptest server runs out of 
> disk space. Each run of TestDbTxnManager generates ~30 GB of hive.log; I tried 
> to run it locally and it quickly reached 7 GB before I had to cancel it.
> The issue seems to be coming from this code block in TxnHandler.java
> {noformat}
> if(LOG.isDebugEnabled()) {
> LOG.debug("Locks to check(full): ");
> for(LockInfo info : locks) {
>   LOG.debug("  " + info);
> }
>   }
> {noformat}
> We should either change it to trace or change the log level of this test to 
> INFO so that it generates smaller log files.





[jira] [Updated] (HIVE-16109) TestDbTxnManager generates a huge hive.log

2017-03-03 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-16109:
---
Attachment: HIVE-16109.01.patch

Attaching the patch to fix the test. We don't really need to change the debug 
message in TxnManager to log lockInfo to trace. Currently the log file 
generated is 80 MB and the runtime of the test is ~110 sec. Both the disk space 
and runtime are reasonable.

> TestDbTxnManager generates a huge hive.log
> --
>
> Key: HIVE-16109
> URL: https://issues.apache.org/jira/browse/HIVE-16109
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HIVE-16109.01.patch
>
>
> Pre-commit jobs are currently failing due to running out of disk space. The 
> issue is caused by the huge hive.log produced when the TestDbTxnManager test 
> fails or times out. When this test fails or times out, Ptest tries to persist 
> these logs for debugging. Since this test has been timing out frequently, a 
> lot of these log files accumulate and eventually the Ptest server runs out of 
> disk space. Each run of TestDbTxnManager generates ~30 GB of hive.log; I tried 
> to run it locally and it quickly reached 7 GB before I had to cancel it.
> The issue seems to be coming from this code block in TxnHandler.java
> {noformat}
> if(LOG.isDebugEnabled()) {
> LOG.debug("Locks to check(full): ");
> for(LockInfo info : locks) {
>   LOG.debug("  " + info);
> }
>   }
> {noformat}
> We should either change it to trace or change the log level of this test to 
> INFO so that it generates smaller log files.





[jira] [Updated] (HIVE-16109) TestDbTxnManager generates a huge hive.log

2017-03-03 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-16109:
---
Status: Patch Available  (was: Open)

> TestDbTxnManager generates a huge hive.log
> --
>
> Key: HIVE-16109
> URL: https://issues.apache.org/jira/browse/HIVE-16109
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Attachments: HIVE-16109.01.patch
>
>
> Pre-commit jobs are currently failing due to running out of disk space. The 
> issue is caused by the huge hive.log produced when the TestDbTxnManager test 
> fails or times out. When this test fails or times out, Ptest tries to persist 
> these logs for debugging. Since this test has been timing out frequently, a 
> lot of these log files accumulate and eventually the Ptest server runs out of 
> disk space. Each run of TestDbTxnManager generates ~30 GB of hive.log; I tried 
> to run it locally and it quickly reached 7 GB before I had to cancel it.
> The issue seems to be coming from this code block in TxnHandler.java
> {noformat}
> if(LOG.isDebugEnabled()) {
> LOG.debug("Locks to check(full): ");
> for(LockInfo info : locks) {
>   LOG.debug("  " + info);
> }
>   }
> {noformat}
> We should either change it to trace or change the log level of this test to 
> INFO so that it generates smaller log files.





[jira] [Commented] (HIVE-16112) Sync TestHiveMetastoreChecker unit tests with the configuration change in HIVE-16014

2017-03-03 Thread Kiran Kumar Kolli (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895408#comment-15895408
 ] 

Kiran Kumar Kolli commented on HIVE-16112:
--

cc: [~ashitg], [~pattipaka]

> Sync TestHiveMetastoreChecker unit tests with the configuration change in 
> HIVE-16014
> 
>
> Key: HIVE-16112
> URL: https://issues.apache.org/jira/browse/HIVE-16112
> Project: Hive
>  Issue Type: Bug
>Reporter: Kiran Kumar Kolli
>Assignee: Kiran Kumar Kolli
> Fix For: 2.2.0
>
> Attachments: HIVE-16112.01.patch
>
>
> HIVE-16014 changed the hive configuration setting name which defines the pool 
> size. The changes are not reflected in unit tests. 





[jira] [Updated] (HIVE-16112) Sync TestHiveMetastoreChecker unit tests with the configuration change in HIVE-16014

2017-03-03 Thread Kiran Kumar Kolli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Kolli updated HIVE-16112:
-
Target Version/s: 2.2.0
  Status: Patch Available  (was: Open)

HiveMetastoreChecker tests are updated with the new configuration property 
METASTORE_FS_HANDLER_THREADS_COUNT.

> Sync TestHiveMetastoreChecker unit tests with the configuration change in 
> HIVE-16014
> 
>
> Key: HIVE-16112
> URL: https://issues.apache.org/jira/browse/HIVE-16112
> Project: Hive
>  Issue Type: Bug
>Reporter: Kiran Kumar Kolli
>Assignee: Kiran Kumar Kolli
> Fix For: 2.2.0
>
> Attachments: HIVE-16112.01.patch
>
>
> HIVE-16014 changed the hive configuration setting name which defines the pool 
> size. The changes are not reflected in unit tests. 





[jira] [Updated] (HIVE-16112) Sync TestHiveMetastoreChecker unit tests with the configuration change in HIVE-16014

2017-03-03 Thread Kiran Kumar Kolli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Kolli updated HIVE-16112:
-
Attachment: HIVE-16112.01.patch

> Sync TestHiveMetastoreChecker unit tests with the configuration change in 
> HIVE-16014
> 
>
> Key: HIVE-16112
> URL: https://issues.apache.org/jira/browse/HIVE-16112
> Project: Hive
>  Issue Type: Bug
>Reporter: Kiran Kumar Kolli
>Assignee: Kiran Kumar Kolli
> Fix For: 2.2.0
>
> Attachments: HIVE-16112.01.patch
>
>
> HIVE-16014 changed the hive configuration setting name which defines the pool 
> size. The changes are not reflected in unit tests. 





[jira] [Updated] (HIVE-16086) Fix HiveMetaStoreChecker.checkPartitionDirsSingleThreaded method

2017-03-03 Thread Kiran Kumar Kolli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Kolli updated HIVE-16086:
-
Attachment: (was: HIVE-16112.01.patch)

> Fix HiveMetaStoreChecker.checkPartitionDirsSingleThreaded method
> 
>
> Key: HIVE-16086
> URL: https://issues.apache.org/jira/browse/HIVE-16086
> Project: Hive
>  Issue Type: Bug
>Reporter: Kiran Kumar Kolli
>Assignee: Kiran Kumar Kolli
> Fix For: 2.2.0
>
> Attachments: HIVE-16086.01.patch
>
>
> The checkPartitionDirsSingleThreaded DFS implementation has a bug: it is 
> traversing the paths repeatedly.
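The JIRA does not say exactly what the bug is, but one common shape for a DFS that "traverses the paths repeatedly" is a walk with no visited-path tracking. Purely as an illustration of that failure class (this is generic code, not Hive's HiveMetaStoreChecker), here is a walk that tolerates duplicate pushes by recording what it has already processed:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class DirWalkSketch {
    // Iterative DFS over a directory tree (modeled as parent -> children).
    // The visited set guarantees each path is processed exactly once, even
    // if a buggy caller pushes the same directory onto the stack repeatedly.
    static int countVisits(Map<String, List<String>> tree, String root) {
        Deque<String> stack = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        stack.push(root);
        int visits = 0;
        while (!stack.isEmpty()) {
            String dir = stack.pop();
            if (!visited.add(dir)) {
                continue; // already walked this path; skip the repeat
            }
            visits++;
            for (String child : tree.getOrDefault(dir, List.of())) {
                stack.push(child);
            }
        }
        return visits;
    }
}
```

With the `visited` check, `countVisits` always equals the number of distinct reachable directories, regardless of how many times a path is re-enqueued.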





[jira] [Updated] (HIVE-16086) Fix HiveMetaStoreChecker.checkPartitionDirsSingleThreaded method

2017-03-03 Thread Kiran Kumar Kolli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Kolli updated HIVE-16086:
-
Attachment: HIVE-16112.01.patch

> Fix HiveMetaStoreChecker.checkPartitionDirsSingleThreaded method
> 
>
> Key: HIVE-16086
> URL: https://issues.apache.org/jira/browse/HIVE-16086
> Project: Hive
>  Issue Type: Bug
>Reporter: Kiran Kumar Kolli
>Assignee: Kiran Kumar Kolli
> Fix For: 2.2.0
>
> Attachments: HIVE-16086.01.patch
>
>
> The checkPartitionDirsSingleThreaded DFS implementation has a bug: it is 
> traversing the paths repeatedly.





[jira] [Commented] (HIVE-16109) TestDbTxnManager generates a huge hive.log

2017-03-03 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895403#comment-15895403
 ] 

Vihang Karajgaonkar commented on HIVE-16109:


Looked into this a little deeper and realized that HIVE-13335 changed 
{{TxnStore.TIMED_OUT_TXN_ABORT_BATCH_SIZE}} from 1000 to 5. The test 
{{TestDbTxnManager.testLockTimeout}} uses this value to create 50017 locks and 
waits for 5 min so that they expire. This causes the test batch to time out, 
since the run takes a very long time: the batch timeout is currently 40 min, 
and the test takes far longer than that.
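For a rough sense of the slowdown: if timed-out locks are aborted in batches, the number of abort round trips grows as ceil(locks / batchSize), so dropping the batch size from 1000 to 5 multiplies the rounds by roughly 200. A small back-of-the-envelope sketch (the class and method names here are hypothetical, not Hive code, and it assumes one round trip per batch):

```java
public class AbortBatchMath {
    // Number of batched abort round trips needed to clear `locks`
    // timed-out locks when each round aborts at most `batchSize` of them.
    static long abortRounds(long locks, long batchSize) {
        return (locks + batchSize - 1) / batchSize; // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(abortRounds(50017, 1000)); // 51 rounds at the old batch size
        System.out.println(abortRounds(50017, 5));    // 10004 rounds at the new one
    }
}
```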

> TestDbTxnManager generates a huge hive.log
> --
>
> Key: HIVE-16109
> URL: https://issues.apache.org/jira/browse/HIVE-16109
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>
> Pre-commit jobs are currently failing due to running out of disk space. The 
> issue is caused by the huge hive.log produced when the TestDbTxnManager test 
> fails or times out. When this test fails or times out, Ptest tries to persist 
> these logs for debugging. Since this test has been timing out frequently, a 
> lot of these log files accumulate and eventually the Ptest server runs out of 
> disk space. Each run of TestDbTxnManager generates ~30 GB of hive.log; I tried 
> to run it locally and it quickly reached 7 GB before I had to cancel it.
> The issue seems to be coming from this code block in TxnHandler.java
> {noformat}
> if(LOG.isDebugEnabled()) {
> LOG.debug("Locks to check(full): ");
> for(LockInfo info : locks) {
>   LOG.debug("  " + info);
> }
>   }
> {noformat}
> We should either change it to trace or change the log level of this test to 
> INFO so that it generates smaller log files.





[jira] [Assigned] (HIVE-16112) Sync TestHiveMetastoreChecker unit tests with the configuration change in HIVE-16014

2017-03-03 Thread Kiran Kumar Kolli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar Kolli reassigned HIVE-16112:



> Sync TestHiveMetastoreChecker unit tests with the configuration change in 
> HIVE-16014
> 
>
> Key: HIVE-16112
> URL: https://issues.apache.org/jira/browse/HIVE-16112
> Project: Hive
>  Issue Type: Bug
>Reporter: Kiran Kumar Kolli
>Assignee: Kiran Kumar Kolli
> Fix For: 2.2.0
>
>
> HIVE-16014 changed the hive configuration setting name which defines the pool 
> size. The changes are not reflected in unit tests. 





[jira] [Commented] (HIVE-16097) minor fixes to metrics and logs in LlapTaskScheduler

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895390#comment-15895390
 ] 

Hive QA commented on HIVE-16097:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855729/HIVE-16097.01.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3930/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3930/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3930/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/conf/Configuration.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/fs/Path.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/common/target/hive-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/conf/HiveConfUtil.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/StringUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/VersionInfo.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Iterable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/io/Writable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/String.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/aggregate/jetty-all-server/7.6.0.v20120127/jetty-all-server-7.6.0.v20120127.jar(org/eclipse/jetty/http/HttpStatus.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/HashMap.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/MediaType.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/Response.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar(org/codehaus/jackson/map/ObjectMapper.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Exception.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Throwable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Serializable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Enum.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Comparable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-server/1.14/jersey-server-1.14.jar(com/sun/jersey/api/core/PackagesResourceConfig.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-servlet/1.14/jersey-servlet-1.14.jar(com/sun/jersey/spi/container/servlet/ServletContainer.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/FileInputStream.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/StringUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/ArrayUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/common/target/hive-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/common/classification/InterfaceStability.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-hdfs/2.7.2/hadoop-hdfs-2.7.2.jar(org/apache/hadoop/hdfs/web/AuthFilter.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/Utils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/security/UserGroupInformation.class)]]
[loading 

[jira] [Commented] (HIVE-16094) queued containers may timeout if they don't get to run for a long time

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895384#comment-15895384
 ] 

Hive QA commented on HIVE-16094:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855786/HIVE-16094.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3929/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3929/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3929/

Messages:
{noformat}
 This message was trimmed, see log for full details 

[jira] [Commented] (HIVE-16065) Vectorization: Wrong Key/Value information used by Vectorizer

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895377#comment-15895377
 ] 

Hive QA commented on HIVE-16065:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855934/HIVE-16065.08.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3928/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3928/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3928/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-hdfs/2.7.2/hadoop-hdfs-2.7.2.jar(org/apache/hadoop/hdfs/web/AuthFilter.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/Utils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/security/UserGroupInformation.class)]]
[loading 

[jira] [Commented] (HIVE-16034) Hive/Druid integration: Fix type inference for Decimal DruidOutputFormat

2017-03-03 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895375#comment-15895375
 ] 

Jesus Camacho Rodriguez commented on HIVE-16034:


[~ashutoshc], you are right, I got confused with another issue. I had not pushed 
it because I was doing some testing in my environment. I will push it shortly.

> Hive/Druid integration: Fix type inference for Decimal DruidOutputFormat
> 
>
> Key: HIVE-16034
> URL: https://issues.apache.org/jira/browse/HIVE-16034
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-16034.01.patch, HIVE-16034.patch
>
>
> We are extracting the type name by String, which might cause issues, e.g., 
> for Decimal, where type includes precision and scale. Instead, we should 
> check the PrimitiveCategory enum.
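The description above proposes moving from string comparison of type names to checking the PrimitiveCategory enum. A minimal self-contained sketch (the enum and helpers below are illustrative stand-ins, not Hive's actual ObjectInspector API) shows why the string match breaks for Decimal: the full type name carries precision and scale, so an exact match against "decimal" misses "decimal(10,2)", while a category comparison holds.

```java
import java.util.Locale;

// Illustrative stand-in for Hive's PrimitiveObjectInspector.PrimitiveCategory enum.
enum PrimitiveCategory { INT, DOUBLE, DECIMAL, STRING }

public class TypeCheckSketch {
    // Fragile: parameterized types embed precision/scale in their type name,
    // so an exact string match against "decimal" misses "decimal(10,2)".
    static boolean isDecimalByName(String typeName) {
        return typeName.equals("decimal");
    }

    // Robust: classify the type into a category once, then compare categories.
    static PrimitiveCategory categoryOf(String typeName) {
        String base = typeName.toLowerCase(Locale.ROOT).replaceAll("\\(.*\\)$", "");
        switch (base) {
            case "int":     return PrimitiveCategory.INT;
            case "double":  return PrimitiveCategory.DOUBLE;
            case "decimal": return PrimitiveCategory.DECIMAL;
            default:        return PrimitiveCategory.STRING;
        }
    }

    public static void main(String[] args) {
        String t = "decimal(10,2)";
        System.out.println(isDecimalByName(t));                         // string match fails
        System.out.println(categoryOf(t) == PrimitiveCategory.DECIMAL); // category match holds
    }
}
```

In Hive itself the category would come from the type system (e.g. a PrimitiveTypeInfo) rather than from re-parsing the name, but the failure mode is the same either way.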



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16098) Describe table doesn't show stats for partitioned tables

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895374#comment-15895374
 ] 

Hive QA commented on HIVE-16098:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855988/HIVE-16098.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3927/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3927/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3927/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/conf/Configuration.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/fs/Path.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/common/target/hive-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/conf/HiveConfUtil.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/StringUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/VersionInfo.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Iterable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/io/Writable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/String.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/aggregate/jetty-all-server/7.6.0.v20120127/jetty-all-server-7.6.0.v20120127.jar(org/eclipse/jetty/http/HttpStatus.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/HashMap.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/MediaType.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/Response.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar(org/codehaus/jackson/map/ObjectMapper.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Exception.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Throwable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Serializable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Enum.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Comparable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-server/1.14/jersey-server-1.14.jar(com/sun/jersey/api/core/PackagesResourceConfig.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-servlet/1.14/jersey-servlet-1.14.jar(com/sun/jersey/spi/container/servlet/ServletContainer.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/FileInputStream.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/StringUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/ArrayUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/common/target/hive-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/common/classification/InterfaceStability.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-hdfs/2.7.2/hadoop-hdfs-2.7.2.jar(org/apache/hadoop/hdfs/web/AuthFilter.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/Utils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/security/UserGroupInformation.class)]]
[loading 

[jira] [Commented] (HIVE-16100) Dynamic Sorted Partition optimizer loses sibling operators

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895371#comment-15895371
 ] 

Hive QA commented on HIVE-16100:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855825/HIVE-16100.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3925/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3925/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3925/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/conf/Configuration.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/fs/Path.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/common/target/hive-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/conf/HiveConfUtil.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/StringUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/VersionInfo.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Iterable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/io/Writable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/String.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/aggregate/jetty-all-server/7.6.0.v20120127/jetty-all-server-7.6.0.v20120127.jar(org/eclipse/jetty/http/HttpStatus.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/HashMap.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/MediaType.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/Response.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar(org/codehaus/jackson/map/ObjectMapper.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Exception.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Throwable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Serializable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Enum.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Comparable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-server/1.14/jersey-server-1.14.jar(com/sun/jersey/api/core/PackagesResourceConfig.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-servlet/1.14/jersey-servlet-1.14.jar(com/sun/jersey/spi/container/servlet/ServletContainer.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/FileInputStream.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/StringUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/ArrayUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/common/target/hive-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/common/classification/InterfaceStability.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-hdfs/2.7.2/hadoop-hdfs-2.7.2.jar(org/apache/hadoop/hdfs/web/AuthFilter.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/Utils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/security/UserGroupInformation.class)]]
[loading 

[jira] [Commented] (HIVE-15708) Upgrade calcite version to 1.12

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895369#comment-15895369
 ] 

Hive QA commented on HIVE-15708:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855824/HIVE-15708.14.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3924/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3924/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3924/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-03-04 01:23:02.994
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-3924/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-03-04 01:23:02.996
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 66bdab8 HIVE-16082. Allow user to change number of listener 
thread in
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 66bdab8 HIVE-16082. Allow user to change number of listener 
thread in
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-03-04 01:23:03.926
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Going to apply patch with: patch -p1
patching file druid-handler/pom.xml
patching file 
druid-handler/src/java/org/apache/hadoop/hive/druid/io/DruidQueryBasedInputFormat.java
patching file pom.xml
patching file ql/pom.xml
patching file 
ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMaterializedViewsRegistry.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HivePlannerContext.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/reloperators/HiveExtractDate.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/HiveMaterializedViewFilterScanRule.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/stats/HiveRelMdPredicates.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTBuilder.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ExprNodeConverter.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/RexNodeConverter.java
patching file ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
patching file 
ql/src/test/org/apache/hadoop/hive/ql/optimizer/calcite/TestCBORuleFiredOnlyOnce.java
patching file ql/src/test/results/clientpositive/cbo_rp_auto_join1.q.out
patching file ql/src/test/results/clientpositive/cbo_rp_outer_join_ppr.q.out
patching file ql/src/test/results/clientpositive/constprog2.q.out
patching file ql/src/test/results/clientpositive/druid_basic2.q.out
patching file ql/src/test/results/clientpositive/druid_intervals.q.out
patching file ql/src/test/results/clientpositive/druid_timeseries.q.out
patching file ql/src/test/results/clientpositive/druid_topn.q.out
patching file ql/src/test/results/clientpositive/filter_cond_pushdown.q.out
patching file ql/src/test/results/clientpositive/fouter_join_ppr.q.out
patching file ql/src/test/results/clientpositive/index_auto_unused.q.out
patching file ql/src/test/results/clientpositive/join45.q.out
patching file ql/src/test/results/clientpositive/join_merging.q.out
patching file ql/src/test/results/clientpositive/llap/auto_smb_mapjoin_14.q.out
patching file 

[jira] [Commented] (HIVE-16101) QTest failure BeeLine escape_comments after HIVE-16045

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895368#comment-15895368
 ] 

Hive QA commented on HIVE-16101:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855820/HIVE-16101.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3923/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3923/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3923/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/VersionInfo.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Iterable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/io/Writable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/String.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/aggregate/jetty-all-server/7.6.0.v20120127/jetty-all-server-7.6.0.v20120127.jar(org/eclipse/jetty/http/HttpStatus.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/HashMap.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/MediaType.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/Response.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar(org/codehaus/jackson/map/ObjectMapper.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Exception.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Throwable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Serializable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Enum.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Comparable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-server/1.14/jersey-server-1.14.jar(com/sun/jersey/api/core/PackagesResourceConfig.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-servlet/1.14/jersey-servlet-1.14.jar(com/sun/jersey/spi/container/servlet/ServletContainer.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/FileInputStream.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/StringUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/ArrayUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/common/target/hive-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/common/classification/InterfaceStability.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-hdfs/2.7.2/hadoop-hdfs-2.7.2.jar(org/apache/hadoop/hdfs/web/AuthFilter.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/Utils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/security/UserGroupInformation.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-auth/2.7.2/hadoop-auth-2.7.2.jar(org/apache/hadoop/security/authentication/client/PseudoAuthenticator.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-auth/2.7.2/hadoop-auth-2.7.2.jar(org/apache/hadoop/security/authentication/server/PseudoAuthenticationHandler.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/GenericOptionsParser.class)]]
[loading 

[jira] [Commented] (HIVE-16102) Grouping sets do not conform to SQL standard

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895366#comment-15895366
 ] 

Hive QA commented on HIVE-16102:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855816/HIVE-16102.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3922/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3922/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3922/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/conf/Configuration.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/fs/Path.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/common/target/hive-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/conf/HiveConfUtil.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/StringUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/VersionInfo.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Iterable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/io/Writable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/String.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/aggregate/jetty-all-server/7.6.0.v20120127/jetty-all-server-7.6.0.v20120127.jar(org/eclipse/jetty/http/HttpStatus.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/HashMap.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/MediaType.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/Response.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar(org/codehaus/jackson/map/ObjectMapper.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Exception.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Throwable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Serializable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Enum.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Comparable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-server/1.14/jersey-server-1.14.jar(com/sun/jersey/api/core/PackagesResourceConfig.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-servlet/1.14/jersey-servlet-1.14.jar(com/sun/jersey/spi/container/servlet/ServletContainer.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/FileInputStream.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/StringUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/ArrayUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/common/target/hive-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/common/classification/InterfaceStability.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-hdfs/2.7.2/hadoop-hdfs-2.7.2.jar(org/apache/hadoop/hdfs/web/AuthFilter.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/Utils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/security/UserGroupInformation.class)]]
[loading 

[jira] [Commented] (HIVE-16098) Describe table doesn't show stats for partitioned tables

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895362#comment-15895362
 ] 

Hive QA commented on HIVE-16098:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855988/HIVE-16098.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3921/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3921/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3921/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Tests exited with: ExecutionException: java.util.concurrent.ExecutionException: 
org.apache.hive.ptest.execution.ssh.SSHExecutionException: RSyncResult 
[localFile=/data/hiveptest/logs/PreCommit-HIVE-Build-3921/failed/272_UTBatch_ql_10_tests,
 remoteFile=/home/hiveptest/130.211.226.151-hiveptest-0/logs/, 
getExitCode()=11, getException()=null, getUser()=hiveptest, 
getHost()=130.211.226.151, getInstance()=0]: 'Warning: Permanently added 
'130.211.226.151' (ECDSA) to the list of known hosts.
receiving incremental file list
./
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.TestTxnCommands.xml

  0   0%0.00kB/s0:00:00  
  8,244 100%7.86MB/s0:00:00 (xfr#1, to-chk=9/11)
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.lockmgr.TestEmbeddedLockManager.xml

  0   0%0.00kB/s0:00:00  
  5,269 100%5.02MB/s0:00:00 (xfr#2, to-chk=8/11)
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.lockmgr.TestHiveLockObject.xml

  0   0%0.00kB/s0:00:00  
  5,269 100%5.02MB/s0:00:00 (xfr#3, to-chk=7/11)
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.lockmgr.zookeeper.TestZookeeperLockManager.xml

  0   0%0.00kB/s0:00:00  
  5,563 100%5.31MB/s0:00:00 (xfr#4, to-chk=6/11)
maven-test.txt

  0   0%0.00kB/s0:00:00  
  6,777 100%6.46MB/s0:00:00 (xfr#5, to-chk=5/11)
logs/
logs/derby.log

  0   0%0.00kB/s0:00:00  
978 100%  955.08kB/s0:00:00 (xfr#6, to-chk=2/11)
logs/hive.log

  0   0%0.00kB/s0:00:00  
 38,338,560   0%   36.56MB/s0:13:50  
 86,802,432   0%   41.37MB/s0:12:12  
135,233,536   0%   42.99MB/s0:11:43  
183,795,712   0%   43.82MB/s0:11:29  
232,947,712   0%   46.40MB/s0:10:50  
282,951,680   0%   46.79MB/s0:10:43  
334,462,976   1%   47.52MB/s0:10:32  
388,464,640   1%   48.80MB/s0:10:15  
435,126,272   1%   48.20MB/s0:10:21  
474,808,320   1%   45.74MB/s0:10:54  
513,048,576   1%   42.58MB/s0:11:42  
521,338,880   1%   30.94MB/s0:16:05  
559,611,904   1%   29.00MB/s0:17:09  
599,293,952   1%   28.99MB/s0:17:08  
639,369,216   2%   29.42MB/s0:16:51  
680,001,536   2%   37.86MB/s0:13:05  
721,846,272   2%   38.70MB/s0:12:47  
765,263,872   2%   39.57MB/s0:12:29  
810,909,696   2%   40.91MB/s0:12:03  
rsync: write failed on 
"/data/hiveptest/logs/PreCommit-HIVE-Build-3921/failed/272_UTBatch_ql_10_tests/logs/hive.log":
 No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1]
Warning: Permanently added '130.211.226.151' (ECDSA) to the list of known hosts.
receiving incremental file list
logs/
logs/hive.log

  0   0%0.00kB/s0:00:00  
rsync: write failed on 
"/data/hiveptest/logs/PreCommit-HIVE-Build-3921/failed/272_UTBatch_ql_10_tests/logs/hive.log":
 No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1]

[jira] [Commented] (HIVE-16064) Allow ALL set quantifier with aggregate functions

2017-03-03 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895332#comment-15895332
 ] 

Ashutosh Chauhan commented on HIVE-16064:
-

KW_DISTINCT is not optional anymore. Intended?

> Allow ALL set quantifier with aggregate functions
> -
>
> Key: HIVE-16064
> URL: https://issues.apache.org/jira/browse/HIVE-16064
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-16064.1.patch
>
>
> SQL:2011 allows ALL with aggregate functions, which is equivalent to the 
> aggregate function without ALL.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-16110) Vectorization: Support CASE WHEN instead of fall back to VectorUDFAdaptor

2017-03-03 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline reassigned HIVE-16110:
---


> Vectorization: Support CASE WHEN instead of fall back to VectorUDFAdaptor
> -
>
> Key: HIVE-16110
> URL: https://issues.apache.org/jira/browse/HIVE-16110
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16109) TestDbTxnManager generates a huge hive.log

2017-03-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895278#comment-15895278
 ] 

Sergey Shelukhin commented on HIVE-16109:
-

cc [~ekoifman] 

> TestDbTxnManager generates a huge hive.log
> --
>
> Key: HIVE-16109
> URL: https://issues.apache.org/jira/browse/HIVE-16109
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>
> Pre-commit jobs are currently failing due to running out of disk space. The 
> issue is caused by the huge size of hive.log when the TestDbTxnManager test 
> fails or times out. When this test fails or times out, Ptest tries to persist 
> these logs for debugging. Since this test has been timing out frequently, this 
> accumulates a lot of these log files and eventually the Ptest server runs out 
> of disk space. Each run of TestDbTxnManager generates ~30 GB of hive.log. I 
> tried to run it locally and it quickly reached 7 GB before I had to cancel it.
> The issue seems to be coming from this code block in TxnHandler.java
> {noformat}
> if(LOG.isDebugEnabled()) {
> LOG.debug("Locks to check(full): ");
> for(LockInfo info : locks) {
>   LOG.debug("  " + info);
> }
>   }
> {noformat}
> We should either change it to trace or change the log level of this test to 
> INFO so that it generates smaller log files.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
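A minimal sketch of the suggested fix, using java.util.logging's FINEST level as a stand-in for TRACE (Hive itself uses slf4j/log4j, and the LockInfo type from the quoted snippet is replaced here by String to keep the example self-contained):

```java
import java.util.Arrays;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class TraceLockLogging {
    private static final Logger LOG = Logger.getLogger(TraceLockLogging.class.getName());

    static void logLocks(List<String> locks) {
        // Guard at the lowest level (FINEST ~ TRACE): the per-lock strings are
        // not built at all unless that level is explicitly enabled, so a test
        // run with debug-level logging no longer floods hive.log.
        if (LOG.isLoggable(Level.FINEST)) {
            LOG.finest("Locks to check(full): ");
            for (String info : locks) {
                LOG.finest("  " + info);
            }
        }
    }

    public static void main(String[] args) {
        // Silent at the default (INFO) level.
        logLocks(Arrays.asList("lock1", "lock2"));
    }
}
```

With the level check moved from DEBUG to the trace tier, only runs that deliberately enable the finest level pay the cost of formatting every lock.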


[jira] [Commented] (HIVE-16094) queued containers may timeout if they don't get to run for a long time

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895271#comment-15895271
 ] 

Hive QA commented on HIVE-16094:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855786/HIVE-16094.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 28 failed/errored test(s), 10285 tests 
executed
*Failed tests:*
{noformat}
TestCommandProcessorFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=272)
TestDbTxnManager - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
TestDummyTxnManager - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
TestHiveInputSplitComparator - did not produce a TEST-*.xml file (likely timed 
out) (batchId=272)
TestIndexType - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
TestSplitFilter - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=229)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_table]
 (batchId=147)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query14] 
(batchId=224)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query23] 
(batchId=224)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vector_between_in] 
(batchId=119)
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser 
(batchId=221)
org.apache.hive.beeline.TestSchemaTool.testNestedScriptsForDerby (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testNestedScriptsForMySQL (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testNestedScriptsForOracle (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testPostgresFilter (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testSchemaInit (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testSchemaInitDryRun (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgrade (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testSchemaUpgradeDryRun (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testScriptMultiRowComment (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testScriptWithDelimiter (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testScripts (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateLocations (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateNullValues (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateSchemaTables (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateSchemaVersions (batchId=212)
org.apache.hive.beeline.TestSchemaTool.testValidateSequences (batchId=212)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3920/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3920/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3920/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 28 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12855786 - PreCommit-HIVE-Build

> queued containers may timeout if they don't get to run for a long time
> --
>
> Key: HIVE-16094
> URL: https://issues.apache.org/jira/browse/HIVE-16094
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Critical
> Attachments: HIVE-16094.01.patch, HIVE-16094.02.patch, 
> HIVE-16094.03.patch
>
>
> I believe this happened after HIVE-15958 - since we end up keeping amNodeInfo 
> in knownAppMaters, and that can result in the callable not being scheduled on 
> new task registration.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-16109) TestDbTxnManager generates a huge hive.log

2017-03-03 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar reassigned HIVE-16109:
--


> TestDbTxnManager generates a huge hive.log
> --
>
> Key: HIVE-16109
> URL: https://issues.apache.org/jira/browse/HIVE-16109
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>
> Pre-commit jobs are currently failing due to running out of disk space. The 
> issue is caused by the huge size of hive.log when the TestDbTxnManager test 
> fails or times out. When this test fails or times out, Ptest tries to persist 
> these logs for debugging. Since this test has been timing out frequently, this 
> accumulates a lot of these log files and eventually the Ptest server runs out 
> of disk space. Each run of TestDbTxnManager generates ~30 GB of hive.log. I 
> tried to run it locally and it quickly reached 7 GB before I had to cancel it.
> The issue seems to be coming from this code block in TxnHandler.java
> {noformat}
> if(LOG.isDebugEnabled()) {
> LOG.debug("Locks to check(full): ");
> for(LockInfo info : locks) {
>   LOG.debug("  " + info);
> }
>   }
> {noformat}
> We should either change it to trace or change the log level of this test to 
> INFO so that it generates smaller log files.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16102) Grouping sets do not conform to SQL standard

2017-03-03 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895226#comment-15895226
 ] 

Ashutosh Chauhan commented on HIVE-16102:
-

checkForNoAggr() explicitly checks for an empty grouping set. Not sure if there 
was a good reason for it when it was added; can't think of one at the moment. 
But before removing it, let's add a test case. Also, there is 
HIVE_GROUPING_SETS_EXPR_NOT_IN_GROUPBY (10213). Can that be removed too?

> Grouping sets do not conform to SQL standard
> 
>
> Key: HIVE-16102
> URL: https://issues.apache.org/jira/browse/HIVE-16102
> Project: Hive
>  Issue Type: Bug
>  Components: Operators, Parser
>Affects Versions: 1.3.0, 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: HIVE-16102.patch
>
>
> [~ashutoshc] realized that the implementation of GROUPING__ID in Hive was not 
> returning values as specified by the SQL standard and other execution engines.
> After digging into this, I found out that the implementation was bogus, as it 
> was internally switching between big-endian and little-endian representations 
> of GROUPING__ID indiscriminately, and in some cases conversions in both 
> directions were cancelling each other out.
> In the documentation in 
> https://cwiki.apache.org/confluence/display/Hive/Enhanced+Aggregation,+Cube,+Grouping+and+Rollup
>  we can already find the problem, even if we did not spot it at first.
> {quote}
> The following query: SELECT key, value, GROUPING__ID, count(\*) from T1 GROUP 
> BY key, value WITH ROLLUP
> will have the following results.
> | NULL | NULL | 0 | 6 |
> | 1 | NULL | 1 | 2 |
> | 1 | NULL | 3 | 1 |
> | 1 | 1 | 3 | 1 |
> ...
> {quote}
> Observe that value for GROUPING__ID in first row should be `3`, while for 
> third and fourth rows, it should be `0`.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
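As background for the standard-conformant values discussed above, the SQL-standard GROUPING__ID can be read as a bit vector: the leftmost GROUP BY column maps to the most significant bit, and a bit is 1 when that column is aggregated away in the grouping set. The sketch below is illustrative only, not Hive's actual implementation:

```java
public class GroupingIdSketch {
    // present[i] is true when GROUP BY column i participates in the grouping
    // set; the leftmost column ends up in the most significant bit.
    static long groupingId(boolean[] present) {
        long id = 0;
        for (boolean p : present) {
            id = (id << 1) | (p ? 0L : 1L); // 1 = column aggregated away
        }
        return id;
    }

    public static void main(String[] args) {
        // ROLLUP(key, value) produces grouping sets (key, value), (key), ():
        System.out.println(groupingId(new boolean[]{true, true}));   // 0
        System.out.println(groupingId(new boolean[]{true, false}));  // 1
        System.out.println(groupingId(new boolean[]{false, false})); // 3
    }
}
```

Under this encoding, the grand-total row of the ROLLUP query in the description gets GROUPING__ID 3 and the fully-grouped rows get 0, matching the corrected values noted above.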


[jira] [Updated] (HIVE-16103) LLAP: Scheduler timeout monitor never stops with slot nodes

2017-03-03 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-16103:
-
Attachment: HIVE-16103.2.patch

Done. 

> LLAP: Scheduler timeout monitor never stops with slot nodes
> ---
>
> Key: HIVE-16103
> URL: https://issues.apache.org/jira/browse/HIVE-16103
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-16103.1.patch, HIVE-16103.2.patch
>
>
> The scheduler timeout monitor is started when the node count becomes 0 and 
> stopped when the node count becomes 1. For the node count, we were relying on 
> the paths under the llap namespace. With the addition of slot znodes, every 
> node creates 2 paths (worker and slot). As a result, the size of the instances 
> cache will never be 1 (always a multiple of 2), which leads to a condition 
> where the timeout monitor is never stopped.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
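The counting problem described above can be illustrated with a hypothetical filter that derives the node count from worker znodes only; the znode names follow the worker-NNN/slot pattern mentioned in this thread, and this is not the actual LlapZookeeperRegistryImpl code:

```java
import java.util.Arrays;
import java.util.List;

public class NodeCountSketch {
    // Count only worker-* children: each live node also registers a slot znode,
    // so counting all children always yields a multiple of 2 and the
    // "node count == 1" condition that stops the timeout monitor never holds.
    static long workerCount(List<String> children) {
        return children.stream()
                .filter(name -> name.startsWith("worker-"))
                .count();
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList("worker-001068", "slot-001068");
        System.out.println(workerCount(children)); // 1 -> monitor can be stopped
    }
}
```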


[jira] [Updated] (HIVE-16098) Describe table doesn't show stats for partitioned tables

2017-03-03 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-16098:

Status: Open  (was: Patch Available)

> Describe table doesn't show stats for partitioned tables
> 
>
> Key: HIVE-16098
> URL: https://issues.apache.org/jira/browse/HIVE-16098
> Project: Hive
>  Issue Type: Improvement
>  Components: Diagnosability
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-16098.1.patch, HIVE-16098.2.patch, 
> HIVE-16098.3.patch, HIVE-16098.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16098) Describe table doesn't show stats for partitioned tables

2017-03-03 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-16098:

Attachment: HIVE-16098.3.patch

Incorporated [~vihangk1]'s suggestion.

> Describe table doesn't show stats for partitioned tables
> 
>
> Key: HIVE-16098
> URL: https://issues.apache.org/jira/browse/HIVE-16098
> Project: Hive
>  Issue Type: Improvement
>  Components: Diagnosability
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-16098.1.patch, HIVE-16098.2.patch, 
> HIVE-16098.3.patch, HIVE-16098.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16098) Describe table doesn't show stats for partitioned tables

2017-03-03 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-16098:

Status: Patch Available  (was: Open)

> Describe table doesn't show stats for partitioned tables
> 
>
> Key: HIVE-16098
> URL: https://issues.apache.org/jira/browse/HIVE-16098
> Project: Hive
>  Issue Type: Improvement
>  Components: Diagnosability
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-16098.1.patch, HIVE-16098.2.patch, 
> HIVE-16098.3.patch, HIVE-16098.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-5725) Separate out ql code from exec jar

2017-03-03 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895180#comment-15895180
 ] 

Zhijie Shen commented on HIVE-5725:
---

Is there any update on this jira? Any chance to get it fixed?

> Separate out ql code from exec jar
> --
>
> Key: HIVE-5725
> URL: https://issues.apache.org/jira/browse/HIVE-5725
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>
> We should publish our code independently from our dependencies. Since the 
> exec jar has to include the runtime dependencies, I'd propose that we make 
> two jars: a ql jar and an exec jar.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16092) Generate and use universal mmId instead of per db/table

2017-03-03 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895172#comment-15895172
 ] 

Wei Zheng commented on HIVE-16092:
--

Sorry, data replication.

> Generate and use universal mmId instead of per db/table
> ---
>
> Key: HIVE-16092
> URL: https://issues.apache.org/jira/browse/HIVE-16092
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>
> To facilitate its later replacement with txnId



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16103) LLAP: Scheduler timeout monitor never stops with slot nodes

2017-03-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895169#comment-15895169
 ] 

Sergey Shelukhin commented on HIVE-16103:
-

Nit: the ref could be final and created in init. Other than that +1

> LLAP: Scheduler timeout monitor never stops with slot nodes
> ---
>
> Key: HIVE-16103
> URL: https://issues.apache.org/jira/browse/HIVE-16103
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-16103.1.patch
>
>
> The scheduler timeout monitor is started when the node count becomes 0 and 
> stopped when the node count becomes 1. For the node count, we were relying on 
> the paths under the llap namespace. With the addition of slot znodes, every 
> node creates 2 paths (worker and slot). As a result, the size of the instances 
> cache will never be 1 (always a multiple of 2), which leads to a condition 
> where the timeout monitor is never stopped.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16092) Generate and use universal mmId instead of per db/table

2017-03-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895167#comment-15895167
 ] 

Sergey Shelukhin commented on HIVE-16092:
-

What is DR?

> Generate and use universal mmId instead of per db/table
> ---
>
> Key: HIVE-16092
> URL: https://issues.apache.org/jira/browse/HIVE-16092
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>
> To facilitate its later replacement with txnId



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16103) LLAP: Scheduler timeout monitor never stops with slot nodes

2017-03-03 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895161#comment-15895161
 ] 

Prasanth Jayachandran commented on HIVE-16103:
--

Tested this patch on a cluster, and the timeout monitor is being stopped correctly.

> LLAP: Scheduler timeout monitor never stops with slot nodes
> ---
>
> Key: HIVE-16103
> URL: https://issues.apache.org/jira/browse/HIVE-16103
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-16103.1.patch
>
>
> The scheduler timeout monitor is started when the node count becomes 0 and 
> stopped when the node count becomes 1. For the node count, we were relying on 
> the paths under the llap namespace. With the addition of slot znodes, every 
> node creates 2 paths (worker and slot). As a result, the size of the instances 
> cache will never be 1 (always a multiple of 2), which leads to a condition 
> where the timeout monitor is never stopped.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16103) LLAP: Scheduler timeout monitor never stops with slot nodes

2017-03-03 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-16103:
-
Status: Patch Available  (was: Open)

> LLAP: Scheduler timeout monitor never stops with slot nodes
> ---
>
> Key: HIVE-16103
> URL: https://issues.apache.org/jira/browse/HIVE-16103
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-16103.1.patch
>
>
> The scheduler timeout monitor is started when the node count becomes 0 and 
> stopped when the node count becomes 1. For the node count, we were relying on 
> the paths under the llap namespace. With the addition of slot znodes, every 
> node creates 2 paths (worker and slot). As a result, the size of the instances 
> cache will never be 1 (always a multiple of 2), which leads to a condition 
> where the timeout monitor is never stopped.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16103) LLAP: Scheduler timeout monitor never stops with slot nodes

2017-03-03 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-16103:
-
Attachment: HIVE-16103.1.patch

> LLAP: Scheduler timeout monitor never stops with slot nodes
> ---
>
> Key: HIVE-16103
> URL: https://issues.apache.org/jira/browse/HIVE-16103
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-16103.1.patch
>
>
> The scheduler timeout monitor is started when the node count becomes 0 and 
> stopped when the node count becomes 1. For the node count, we were relying on 
> the paths under the llap namespace. With the addition of slot znodes, every 
> node creates 2 paths (worker and slot). As a result, the size of the instances 
> cache will never be 1 (always a multiple of 2), which leads to a condition 
> where the timeout monitor is never stopped.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16103) LLAP: Scheduler timeout monitor never stops with slot nodes

2017-03-03 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895157#comment-15895157
 ] 

Prasanth Jayachandran commented on HIVE-16103:
--

[~sseth] could you please take a look?

> LLAP: Scheduler timeout monitor never stops with slot nodes
> ---
>
> Key: HIVE-16103
> URL: https://issues.apache.org/jira/browse/HIVE-16103
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-16103.1.patch
>
>
> The scheduler timeout monitor is started when the node count becomes 0 and 
> stopped when the node count becomes 1. For the node count, we were relying on 
> the paths under the llap namespace. With the addition of slot znodes, every 
> node creates 2 paths (worker and slot). As a result, the size of the instances 
> cache will never be 1 (always a multiple of 2), which leads to a condition 
> where the timeout monitor is never stopped.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16092) Generate and use universal mmId instead of per db/table

2017-03-03 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895159#comment-15895159
 ] 

Wei Zheng commented on HIVE-16092:
--

On second thought, this may not be a good idea. Keeping the mmId the way it 
is (per table) is actually beneficial for things like DR. Will need to seek 
alternatives for integrating the ACID txn. [~ekoifman]

> Generate and use universal mmId instead of per db/table
> ---
>
> Key: HIVE-16092
> URL: https://issues.apache.org/jira/browse/HIVE-16092
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>
> To facilitate its later replacement with txnId



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-15920) Implement a blocking version of a command to compact

2017-03-03 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895152#comment-15895152
 ] 

Wei Zheng commented on HIVE-15920:
--

Code change looks good. The test looks a little unclear:
{code}
//w/o AND WAIT the above alter table returns almost immediately
Assert.assertTrue(System.currentTimeMillis() > start + 2);
{code}
Do we expect a specific interval of delay here? Anyway, this is hard to test in 
a UT.

> Implement a blocking version of a command to compact
> 
>
> Key: HIVE-15920
> URL: https://issues.apache.org/jira/browse/HIVE-15920
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-15920.01.patch
>
>
> currently 
> {noformat}
> alter table AcidTable compact 'major'
> {noformat} 
> is supported, which enqueues a message to compact.
> Would be nice for testing and script building to support 
> {noformat} 
> alter table AcidTable compact 'major' blocking
> {noformat} 
> Perhaps another variation is to block until either compaction is done or 
> until cleaning is finished.
> DDLTask.compact() gets a request id back, so it can then just block and wait 
> for it using some new API.
> It may also be useful to let users compact all partitions, but only if a 
> separate queue has been set up for compaction jobs.
> The latter is because with a 1M-partition table, this may create very many 
> jobs and saturate the cluster.
> This probably requires HIVE-12376 to make sure the compaction queue does the 
> throttling, not the number of worker threads.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-16108) LLAP: Instance state change listener is notified twice

2017-03-03 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-16108:


Assignee: Prasanth Jayachandran

> LLAP: Instance state change listener is notified twice
> --
>
> Key: HIVE-16108
> URL: https://issues.apache.org/jira/browse/HIVE-16108
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>
> Need to see why we are being notified twice for the same path and same event 
> type. Relevant log lines:
> {code}
> 2017-03-03 17:09:52,464 [INFO] [StateChangeNotificationHandler] 
> |impl.LlapZookeeperRegistryImpl$InstanceStateChangeListener|: CHILD_REMOVED 
> for zknode /user-prasanth/llap0/workers/worker-001068 in llap namespace
> 2017-03-03 17:09:52,465 [INFO] [StateChangeNotificationHandler] 
> |impl.LlapZookeeperRegistryImpl$InstanceStateChangeListener|: CHILD_REMOVED 
> for zknode /user-prasanth/llap0/workers/worker-001068 in llap namespace
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16094) queued containers may timeout if they don't get to run for a long time

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895139#comment-15895139
 ] 

Hive QA commented on HIVE-16094:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855786/HIVE-16094.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3919/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3919/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3919/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Tests exited with: ExecutionException: java.util.concurrent.ExecutionException: 
org.apache.hive.ptest.execution.ssh.SSHExecutionException: RSyncResult 
[localFile=/data/hiveptest/logs/PreCommit-HIVE-Build-3919/failed/272_UTBatch_ql_10_tests,
 remoteFile=/home/hiveptest/104.197.40.239-hiveptest-0/logs/, getExitCode()=11, 
getException()=null, getUser()=hiveptest, getHost()=104.197.40.239, 
getInstance()=0]: 'Warning: Permanently added '104.197.40.239' (ECDSA) to the 
list of known hosts.
receiving incremental file list
./
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.TestTxnCommands.xml

  0   0%0.00kB/s0:00:00  
  8,231 100%7.85MB/s0:00:00 (xfr#1, to-chk=9/11)
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.lockmgr.TestEmbeddedLockManager.xml

  0   0%0.00kB/s0:00:00  
  5,265 100%5.02MB/s0:00:00 (xfr#2, to-chk=8/11)
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.lockmgr.TestHiveLockObject.xml

  0   0%0.00kB/s0:00:00  
  5,263 100%5.02MB/s0:00:00 (xfr#3, to-chk=7/11)
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.lockmgr.zookeeper.TestZookeeperLockManager.xml

  0   0%0.00kB/s0:00:00  
  5,559 100%5.30MB/s0:00:00 (xfr#4, to-chk=6/11)
maven-test.txt

  0   0%0.00kB/s0:00:00  
  6,509 100%6.21MB/s0:00:00 (xfr#5, to-chk=5/11)
logs/
logs/derby.log

  0   0%0.00kB/s0:00:00  
974 100%  951.17kB/s0:00:00 (xfr#6, to-chk=2/11)
logs/hive.log

  0   0%0.00kB/s0:00:00  
 36,044,800   0%   34.27MB/s0:14:44  
 80,216,064   0%   38.15MB/s0:13:13  
124,223,488   0%   39.40MB/s0:12:47  
  

[jira] [Updated] (HIVE-16082) Allow user to change number of listener thread in LlapTaskCommunicator

2017-03-03 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-16082:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

> Allow user to change number of listener thread in LlapTaskCommunicator
> --
>
> Key: HIVE-16082
> URL: https://issues.apache.org/jira/browse/HIVE-16082
> Project: Hive
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
> Fix For: 2.2.0
>
> Attachments: HIVE-16082.1.patch, HIVE-16082.2.patch
>
>
> Currently LlapTaskCommunicator always has the same number of RPC listener 
> threads as TezTaskCommunicatorImpl. There are scenarios where we want them to 
> differ: for example, in LLAP-only mode, we want fewer TezTaskCommunicatorImpl 
> listener threads to reduce off-heap memory usage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16082) Allow user to change number of listener thread in LlapTaskCommunicator

2017-03-03 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895094#comment-15895094
 ] 

Siddharth Seth commented on HIVE-16082:
---

Test failures are unrelated. Committing.

> Allow user to change number of listener thread in LlapTaskCommunicator
> --
>
> Key: HIVE-16082
> URL: https://issues.apache.org/jira/browse/HIVE-16082
> Project: Hive
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
> Attachments: HIVE-16082.1.patch, HIVE-16082.2.patch
>
>
> Currently LlapTaskCommunicator always has the same number of RPC listener 
> threads as TezTaskCommunicatorImpl. There are scenarios where we want them to 
> differ: for example, in LLAP-only mode, we want fewer TezTaskCommunicatorImpl 
> listener threads to reduce off-heap memory usage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-15983) Support the named columns join

2017-03-03 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong reassigned HIVE-15983:
--

Assignee: Pengcheng Xiong

> Support the named columns join
> --
>
> Key: HIVE-15983
> URL: https://issues.apache.org/jira/browse/HIVE-15983
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Carter Shanklin
>Assignee: Pengcheng Xiong
>
> The named columns join is a common shortcut allowing joins on identically 
> named keys. Example: select * from t1 join t2 using c1 is equivalent to 
> select * from t1 join t2 on t1.c1 = t2.c1. SQL standard reference: Section 7.7



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-14735) Build Infra: Spark artifacts download takes a long time

2017-03-03 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895056#comment-15895056
 ] 

Ashutosh Chauhan commented on HIVE-14735:
-

I am not sure whether publishing an artifact of another project is a good idea. 
Ideally, the Spark project itself should publish these artifacts. At the very 
least, we should ask on the Spark list about our intention here and see what 
feedback we get.

> Build Infra: Spark artifacts download takes a long time
> ---
>
> Key: HIVE-14735
> URL: https://issues.apache.org/jira/browse/HIVE-14735
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: Vaibhav Gumashta
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14735.1.patch, HIVE-14735.1.patch, 
> HIVE-14735.1.patch, HIVE-14735.1.patch, HIVE-14735.2.patch, 
> HIVE-14735.3.patch, HIVE-14735.4.patch, HIVE-14735.4.patch, HIVE-14735.5.patch
>
>
> In particular this command:
> {{curl -Sso ./../thirdparty/spark-1.6.0-bin-hadoop2-without-hive.tgz 
> http://d3jw87u4immizc.cloudfront.net/spark-tarball/spark-1.6.0-bin-hadoop2-without-hive.tgz}}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16061) Some of console output is not printed to the beeline console

2017-03-03 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895041#comment-15895041
 ] 

Aihua Xu commented on HIVE-16061:
-

Verified with beeline. Here is the output in beeline now.

{noformat}
2017-03-03T16:15:16,081 INFO [main] org.apache.hadoop.hive.conf.HiveConf - 
Found configuration file null
2017-03-03T16:15:17,432 WARN [main] org.apache.hadoop.hive.common.LogUtils - 
hive-site.xml not found on CLASSPATH
2017-03-03 16:15:17 Starting to launch local task to process map join;  
maximum memory = 932184064
2017-03-03 16:15:18 Dump the side-table for tag: 0 with group count: 309 
into file: 
file:/tmp/hive/anonymous/19c9aac1-1992-4061-b6ba-6eb4b9605656/hive_2017-03-03_16-15-12_037_6838673076291140467-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile00--.hashtable
2017-03-03 16:15:18 Uploaded 1 File to: 
file:/tmp/hive/anonymous/19c9aac1-1992-4061-b6ba-6eb4b9605656/hive_2017-03-03_16-15-12_037_6838673076291140467-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile00--.hashtable
 (7485 bytes)
2017-03-03 16:15:18 End of local task; Time Taken: 0.555 sec.
Execution completed successfully
MapredLocal task succeeded
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_local1477220543_0001, Tracking URL = http://localhost:8080/
Kill Command = 
/Users/axu/Documents/workspaces/tools/hadoop/hadoop-2.6.0/bin/hadoop job  -kill 
job_local1477220543_0001
Hadoop job information for Stage-3: number of mappers: 0; number of reducers: 0
2017-03-03 16:15:19,904 Stage-3 map = 0%,  reduce = 0%
Ended Job = job_local1477220543_0001 with errors
Error during job, obtaining debugging information...
{noformat}

> Some of console output is not printed to the beeline console
> 
>
> Key: HIVE-16061
> URL: https://issues.apache.org/jira/browse/HIVE-16061
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 2.1.1
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-16061.1.patch
>
>
> Run a hiveserver2 instance "hive --service hiveserver2".
> Then from another console, connect to hiveserver2 "beeline -u 
> "jdbc:hive2://localhost:1"
> When you run a MR job like "select t1.key from src t1 join src t2 on 
> t1.key=t2.key", some of the console logs like MR job info are not printed to 
> the console while it just print to the hiveserver2 console.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16061) Some of console output is not printed to the beeline console

2017-03-03 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-16061:

Status: Patch Available  (was: Open)

> Some of console output is not printed to the beeline console
> 
>
> Key: HIVE-16061
> URL: https://issues.apache.org/jira/browse/HIVE-16061
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 2.1.1
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-16061.1.patch
>
>
> Run a hiveserver2 instance "hive --service hiveserver2".
> Then from another console, connect to hiveserver2 "beeline -u 
> "jdbc:hive2://localhost:1"
> When you run a MR job like "select t1.key from src t1 join src t2 on 
> t1.key=t2.key", some of the console logs like MR job info are not printed to 
> the console while it just print to the hiveserver2 console.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16061) Some of console output is not printed to the beeline console

2017-03-03 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895036#comment-15895036
 ] 

Aihua Xu commented on HIVE-16061:
-

The issue is: we are printing the log to the server console, while we need to 
print it to the OperationLog so it is passed to the client.

Patch-1: creates a ClientConsole class and uses it when we need to pass the 
info to the client.

> Some of console output is not printed to the beeline console
> 
>
> Key: HIVE-16061
> URL: https://issues.apache.org/jira/browse/HIVE-16061
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 2.1.1
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-16061.1.patch
>
>
> Run a hiveserver2 instance "hive --service hiveserver2".
> Then from another console, connect to hiveserver2 "beeline -u 
> "jdbc:hive2://localhost:1"
> When you run a MR job like "select t1.key from src t1 join src t2 on 
> t1.key=t2.key", some of the console logs like MR job info are not printed to 
> the console while it just print to the hiveserver2 console.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16076) LLAP packaging - include aux libs

2017-03-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-16076:

Status: Patch Available  (was: Open)

> LLAP packaging - include aux libs 
> --
>
> Key: HIVE-16076
> URL: https://issues.apache.org/jira/browse/HIVE-16076
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16076.patch
>
>
> The old auxlibs (or whatever) should be packaged by default, if present.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16076) LLAP packaging - include aux libs

2017-03-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-16076:

Attachment: HIVE-16076.patch

The patch. cc [~sseth] [~gopalv]

> LLAP packaging - include aux libs 
> --
>
> Key: HIVE-16076
> URL: https://issues.apache.org/jira/browse/HIVE-16076
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16076.patch
>
>
> The old auxlibs (or whatever) should be packaged by default, if present.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-16076) LLAP packaging - include aux libs

2017-03-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-16076:
---

Assignee: Sergey Shelukhin

> LLAP packaging - include aux libs 
> --
>
> Key: HIVE-16076
> URL: https://issues.apache.org/jira/browse/HIVE-16076
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16076.patch
>
>
> The old auxlibs (or whatever) should be packaged by default, if present.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16061) Some of console output is not printed to the beeline console

2017-03-03 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-16061:

Attachment: HIVE-16061.1.patch

> Some of console output is not printed to the beeline console
> 
>
> Key: HIVE-16061
> URL: https://issues.apache.org/jira/browse/HIVE-16061
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 2.1.1
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-16061.1.patch
>
>
> Run a hiveserver2 instance "hive --service hiveserver2".
> Then from another console, connect to hiveserver2 "beeline -u 
> "jdbc:hive2://localhost:1"
> When you run a MR job like "select t1.key from src t1 join src t2 on 
> t1.key=t2.key", some of the console logs like MR job info are not printed to 
> the console while it just print to the hiveserver2 console.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16061) Some of console output is not printed to the beeline console

2017-03-03 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-16061:

Attachment: (was: HIVE-16061.1.patch)

> Some of console output is not printed to the beeline console
> 
>
> Key: HIVE-16061
> URL: https://issues.apache.org/jira/browse/HIVE-16061
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 2.1.1
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-16061.1.patch
>
>
> Run a hiveserver2 instance "hive --service hiveserver2".
> Then from another console, connect to hiveserver2 "beeline -u 
> "jdbc:hive2://localhost:1"
> When you run a MR job like "select t1.key from src t1 join src t2 on 
> t1.key=t2.key", some of the console logs like MR job info are not printed to 
> the console while it just print to the hiveserver2 console.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16061) Some of console output is not printed to the beeline console

2017-03-03 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-16061:

Attachment: HIVE-16061.1.patch

> Some of console output is not printed to the beeline console
> 
>
> Key: HIVE-16061
> URL: https://issues.apache.org/jira/browse/HIVE-16061
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 2.1.1
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-16061.1.patch
>
>
> Run a hiveserver2 instance "hive --service hiveserver2".
> Then from another console, connect to hiveserver2 "beeline -u 
> "jdbc:hive2://localhost:1"
> When you run a MR job like "select t1.key from src t1 join src t2 on 
> t1.key=t2.key", some of the console logs like MR job info are not printed to 
> the console while it just print to the hiveserver2 console.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-8871) Hive Hbase Integration : Support for NULL value columns

2017-03-03 Thread Christopher (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15895005#comment-15895005
 ] 

Christopher commented on HIVE-8871:
---

A concern that I have is that currently, in order to perform an update in a 
Hive table, one uses NULLs as placeholders. Those NULLs are ignored if it is a 
preexisting record.

CREATE TABLE hbase_table_emp(id int, name string, role string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:name,cf1:role")
TBLPROPERTIES ("hbase.table.name" = "emp");
insert into hbase_table_emp select 1,'bob', 'admin';
insert into hbase_table_emp select 2,'bobby', 'user';
select * from  hbase_table_emp;
insert into hbase_table_emp select 1,null, 'superadmin';

See "bob" remains as "bob"...


> Hive Hbase Integration : Support for NULL value  columns
> 
>
> Key: HIVE-8871
> URL: https://issues.apache.org/jira/browse/HIVE-8871
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.10.0
>Reporter: Jasper Knulst
>  Labels: features
>
> If you map a Hive column to a Hbase CF where the CF only has qualifiers but 
> no values, Hive always outputs ' {} ' for that key. This hides the fact that 
> qualifiers do exist within the CF. As soon as you put a single byte (like a 
> space) as value you'll get a return like this ' {"20140911"," "} in Hive.
> Since it is a common data modelling technique in Hbase to not use the value 
> (and essentially use the qualifier in a CF as value holder) I think it would 
> be worthwhile to have some support for this in the Hbase handler. 
> A solution could be to show a data structure like  CF:qualifier: like 
> this: {"20140911",""}
> , where '20140911' is the qualifier and NULL value in Hbase are shown as 
> empty json strings.
> CREATE EXTERNAL TABLE hb_test (
>   userhash string,
>   count bigint,
>   dates map)
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
>   "hbase.columns.mapping" =
>   ":key,SUM:COUNT,DATES:",
> "hbase.table.default.storage.type" = "binary"
> )
> TBLPROPERTIES("hbase.table.name" = "test");



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16098) Describe table doesn't show stats for partitioned tables

2017-03-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894983#comment-15894983
 ] 

Hive QA commented on HIVE-16098:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12855931/HIVE-16098.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/3918/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/3918/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-3918/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Tests exited with: ExecutionException: java.util.concurrent.ExecutionException: 
org.apache.hive.ptest.execution.ssh.SSHExecutionException: RSyncResult 
[localFile=/data/hiveptest/logs/PreCommit-HIVE-Build-3918/failed/272_UTBatch_ql_10_tests,
 remoteFile=/home/hiveptest/104.198.24.219-hiveptest-0/logs/, getExitCode()=11, 
getException()=null, getUser()=hiveptest, getHost()=104.198.24.219, 
getInstance()=0]: 'Warning: Permanently added '104.198.24.219' (ECDSA) to the 
list of known hosts.
receiving incremental file list
./
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.TestTxnCommands.xml

  0   0%0.00kB/s0:00:00  
  8,242 100%7.86MB/s0:00:00 (xfr#1, to-chk=9/11)
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.lockmgr.TestEmbeddedLockManager.xml

  0   0%0.00kB/s0:00:00  
  5,265 100%5.02MB/s0:00:00 (xfr#2, to-chk=8/11)
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.lockmgr.TestHiveLockObject.xml

  0   0%0.00kB/s0:00:00  
  5,265 100%5.02MB/s0:00:00 (xfr#3, to-chk=7/11)
TEST-272_UTBatch_ql_10_tests-TEST-org.apache.hadoop.hive.ql.lockmgr.zookeeper.TestZookeeperLockManager.xml

  0   0%0.00kB/s0:00:00  
  5,559 100%5.30MB/s0:00:00 (xfr#4, to-chk=6/11)
maven-test.txt

  0   0%0.00kB/s0:00:00  
  6,509 100%6.21MB/s0:00:00 (xfr#5, to-chk=5/11)
logs/
logs/derby.log

  0   0%0.00kB/s0:00:00  
974 100%  951.17kB/s0:00:00 (xfr#6, to-chk=2/11)
logs/hive.log

  0   0%0.00kB/s0:00:00  
rsync: write failed on 
"/data/hiveptest/logs/PreCommit-HIVE-Build-3918/failed/272_UTBatch_ql_10_tests/logs/hive.log":
 No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1]
Warning: Permanently added '104.198.24.219' (ECDSA) to the list of known hosts.
receiving incremental file list
logs/
logs/hive.log

  0   0%0.00kB/s0:00:00  
rsync: write failed on 
"/data/hiveptest/logs/PreCommit-HIVE-Build-3918/failed/272_UTBatch_ql_10_tests/logs/hive.log":
 No space left on device (28)
rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1]
Warning: Permanently added '104.198.24.219' (ECDSA) to the list of known hosts.
receiving incremental file list
logs/
logs/hive.log

  0   0%0.00kB/s0:00:00  

[jira] [Updated] (HIVE-16104) LLAP: preemption may be too aggressive if the pre-empted task doesn't die immediately

2017-03-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-16104:

Status: Patch Available  (was: Open)

Forgot to submit

> LLAP: preemption may be too aggressive if the pre-empted task doesn't die 
> immediately
> -
>
> Key: HIVE-16104
> URL: https://issues.apache.org/jira/browse/HIVE-16104
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16104.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-16107) JDBC: HttpClient should retry one more time on NoHttpResponseException

2017-03-03 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta reassigned HIVE-16107:
---


> JDBC: HttpClient should retry one more time on NoHttpResponseException
> --
>
> Key: HIVE-16107
> URL: https://issues.apache.org/jira/browse/HIVE-16107
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 2.1.1, 2.0.1
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>
> Hive's JDBC client in HTTP transport mode doesn't retry on 
> NoHttpResponseException. We've seen the exception being thrown to the JDBC 
> end user when used with Knox as the proxy, when Knox upgraded its jetty 
> version, which has a smaller value for jetty connector idletimeout, and as a 
> result closes the HTTP connection on server side. The next jdbc query on the 
> client, throws a NoHttpResponseException. However, subsequent queries 
> reconnect, but the JDBC driver should ideally handle this by retrying.
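The retry the description asks for maps naturally onto Apache HttpClient 4.x's HttpRequestRetryHandler callback, which receives the IOException and the execution count. The sketch below is only an illustration of that decision logic, not the actual HIVE-16107 patch; a local stand-in exception class is used so the sketch compiles without the httpclient dependency (the real code would test against org.apache.http.NoHttpResponseException).

```java
import java.io.IOException;

/**
 * Sketch of the retry decision proposed above: retry exactly once when the
 * server silently dropped an idle connection (as seen behind a Knox proxy
 * whose jetty idle timeout closes the HTTP connection server-side).
 */
public class JdbcHttpRetryPolicy {
    /** Local stand-in for org.apache.http.NoHttpResponseException. */
    public static class NoHttpResponseException extends IOException {}

    /**
     * In the real driver this logic would live in an
     * org.apache.http.client.HttpRequestRetryHandler#retryRequest override.
     * Returns true only for the first failure, and only for the
     * dropped-connection case.
     */
    public static boolean shouldRetry(IOException exception, int executionCount) {
        return executionCount <= 1 && exception instanceof NoHttpResponseException;
    }
}
```

A handler built this way would be installed on the driver's HttpClient builder, so one transparent resend absorbs the stale-connection failure while genuine repeated failures still surface to the caller.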



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-14698) Metastore: Datanucleus MSSQLServerAdapter generates incorrect syntax for OFFSET-FETCH clause

2017-03-03 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-14698:

Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> Metastore: Datanucleus MSSQLServerAdapter generates incorrect syntax for 
> OFFSET-FETCH clause
> 
>
> Key: HIVE-14698
> URL: https://issues.apache.org/jira/browse/HIVE-14698
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-14698.1.patch, HIVE-14698.2.patch
>
>
> See the bug description here: 
> https://github.com/datanucleus/datanucleus-rdbms/issues/110. 
> In ObjectStore#listStorageDescriptorsWithCD, we set a range on the query. For 
> MSSQLServer version >= 12, this results in an OFFSET-FETCH clause in the 
> MSSQLServerAdapter (provided by datanucleus).
> I'll attach a short term workaround for Hive here and once DN has the fix, we 
> can upgrade and remove the short term fix from Hive. 
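For background on the clause the adapter must emit when a range is set on the query: on SQL Server 2012+ the OFFSET-FETCH syntax is only legal as part of an ORDER BY clause. The helper below is purely illustrative (its name and shape are not DataNucleus API) and shows the well-formed query shape.

```java
/**
 * Illustrative sketch (not DataNucleus code): the SQL Server 2012+
 * pagination form that a range on a JDO query translates into.
 */
public class OffsetFetchSql {
    /**
     * OFFSET ... FETCH is attached to an ORDER BY clause; emitting it
     * without one is a syntax error on SQL Server.
     */
    public static String paginate(String baseQuery, String orderByCol,
                                  long skip, long count) {
        return baseQuery + " ORDER BY " + orderByCol
            + " OFFSET " + skip + " ROWS FETCH NEXT " + count + " ROWS ONLY";
    }
}
```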



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-15857) Vectorization: Add string conversion case for UDFToInteger, etc

2017-03-03 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-15857:

Status: Patch Available  (was: In Progress)

> Vectorization: Add string conversion case for UDFToInteger, etc
> ---
>
> Key: HIVE-15857
> URL: https://issues.apache.org/jira/browse/HIVE-15857
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-15857.01.patch, HIVE-15857.02.patch, 
> HIVE-15857.03.patch
>
>
> Otherwise, VectorUDFAdaptor is used to convert a column from String to Int, 
> etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-15857) Vectorization: Add string conversion case for UDFToInteger, etc

2017-03-03 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-15857:

Attachment: HIVE-15857.03.patch

> Vectorization: Add string conversion case for UDFToInteger, etc
> ---
>
> Key: HIVE-15857
> URL: https://issues.apache.org/jira/browse/HIVE-15857
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-15857.01.patch, HIVE-15857.02.patch, 
> HIVE-15857.03.patch
>
>
> Otherwise, VectorUDFAdaptor is used to convert a column from String to Int, 
> etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-15857) Vectorization: Add string conversion case for UDFToInteger, etc

2017-03-03 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-15857:

Status: In Progress  (was: Patch Available)

> Vectorization: Add string conversion case for UDFToInteger, etc
> ---
>
> Key: HIVE-15857
> URL: https://issues.apache.org/jira/browse/HIVE-15857
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-15857.01.patch, HIVE-15857.02.patch
>
>
> Otherwise, VectorUDFAdaptor is used to convert a column from String to Int, 
> etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-14698) Metastore: Datanucleus MSSQLServerAdapter generates incorrect syntax for OFFSET-FETCH clause

2017-03-03 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894970#comment-15894970
 ] 

Vaibhav Gumashta commented on HIVE-14698:
-

[~sershe] Looks like DN has fixed the bug in their 4.1.x release line. I've 
created HIVE-16106 for upgrading the version in Hive. Resolving this. 

> Metastore: Datanucleus MSSQLServerAdapter generates incorrect syntax for 
> OFFSET-FETCH clause
> 
>
> Key: HIVE-14698
> URL: https://issues.apache.org/jira/browse/HIVE-14698
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-14698.1.patch, HIVE-14698.2.patch
>
>
> See the bug description here: 
> https://github.com/datanucleus/datanucleus-rdbms/issues/110. 
> In ObjectStore#listStorageDescriptorsWithCD, we set a range on the query. For 
> MSSQLServer version >= 12, this results in an OFFSET-FETCH clause in the 
> MSSQLServerAdapter (provided by datanucleus).
> I'll attach a short term workaround for Hive here and once DN has the fix, we 
> can upgrade and remove the short term fix from Hive. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-16106) Upgrade to Datanucleus 4.1.17

2017-03-03 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta reassigned HIVE-16106:
---


> Upgrade to Datanucleus 4.1.17
> -
>
> Key: HIVE-16106
> URL: https://issues.apache.org/jira/browse/HIVE-16106
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>
> As described in HIVE-14698, the datanucleus-rdbms package that we currently 
> have (4.1.7) has a bug which generates incorrect syntax for MS SQL 
> Server. The bug has been fixed in later releases. HIVE-14698 was a workaround 
> for Hive, but since DN has the fix in its 4.1.x line, we should pick it up 
> from there.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-15947) Enhance Templeton service job operations reliability

2017-03-03 Thread Subramanyam Pattipaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subramanyam Pattipaka updated HIVE-15947:
-
Attachment: HIVE-15947.4.patch

Latest patch with failure scenarios handled gracefully.

> Enhance Templeton service job operations reliability
> 
>
> Key: HIVE-15947
> URL: https://issues.apache.org/jira/browse/HIVE-15947
> Project: Hive
>  Issue Type: Bug
>Reporter: Subramanyam Pattipaka
>Assignee: Subramanyam Pattipaka
> Attachments: HIVE-15947.2.patch, HIVE-15947.3.patch, 
> HIVE-15947.4.patch, HIVE-15947.patch
>
>
> Currently Templeton service doesn't restrict number of job operation 
> requests. It simply accepts and tries to run all operations. If more number 
> of concurrent job submit requests comes then the time to submit job 
> operations can increase significantly. Templetonused hdfs to store staging 
> file for job. If HDFS storage can't respond to large number of requests and 
> throttles then the job submission can take very large times in order of 
> minutes.
> This behavior may not be suitable for all applications and client 
> applications  may be looking for predictable and low response for successful 
> request or send throttle response to client to wait for some time before 
> re-requesting job operation.
> In this JIRA, I am trying to address following job operations 
> 1) Submit new Job
> 2) Get Job Status
> 3) List jobs
> These three operations has different complexity due to variance in use of 
> cluster resources like YARN/HDFS.
> The idea is to introduce a new config templeton.job.submit.exec.max-procs 
> which controls maximum number of concurrent active job submissions within 
> Templeton and use this config to control better response times. If a new job 
> submission request sees that there are already 
> templeton.job.submit.exec.max-procs jobs getting submitted concurrently then 
> the request will fail with Http error 503 with reason 
>β€œToo many concurrent job submission requests received. Please wait for 
> some time before retrying.”
>  
> The client is expected to catch this response and retry after waiting for 
> some time. The default value for the config 
> templeton.job.submit.exec.max-procs is set to β€˜0’. This means by default job 
> submission requests are always accepted. The behavior needs to be enabled 
> based on requirements.
> We can have similar behavior for the status and list operations with the 
> configs templeton.job.status.exec.max-procs and 
> templeton.list.job.exec.max-procs respectively.
> Once a job operation is started, it can take a long time. The client which 
> requested the job operation may not be willing to wait indefinitely. This 
> work introduces the configurations
> templeton.exec.job.submit.timeout
> templeton.exec.job.status.timeout
> templeton.exec.job.list.timeout
> to specify the maximum amount of time a job operation can execute. If a 
> timeout happens, list and status job requests return to the client with the 
> message
> "List job request got timed out. Please retry the operation after waiting for 
> some time."
> If a submit job request gets timed out, then
>   i) the job submit request thread which receives the timeout checks whether 
> a valid job id was generated for the request;
>   ii) if it was generated, it issues a kill job request on the cancel thread 
> pool, does not wait for the operation to complete, and returns to the client 
> with the timeout message.
> Side effects of enabling timeouts for submit operations:
> 1) The job may remain active for some time after the client receives the 
> timeout response, so a list operation from a client could potentially show 
> the newly created job before it gets killed.
> 2) Killing the job is best effort, with no guarantees, so there is a 
> possibility of a duplicate job being created. One possible cause is a case 
> where the job is created, the operation times out, but the kill request 
> fails due to resource manager unavailability. When the resource manager 
> restarts, it will restart the job that was created.
> Fixing this scenario is not in the scope of this JIRA. The timeout 
> functionality should be enabled only if the above side effects are acceptable.
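
A minimal sketch of the admission gate described above, assuming a semaphore-based cap; the class and method names here are illustrative, not Templeton's actual code:

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch of a templeton.job.submit.exec.max-procs style gate.
public class JobSubmitGate {
    private final boolean unlimited;
    private final Semaphore slots;

    public JobSubmitGate(int maxProcs) {
        // 0 means "no limit", matching the documented default behavior.
        this.unlimited = maxProcs <= 0;
        this.slots = unlimited ? null : new Semaphore(maxProcs);
    }

    /** Returns 200 if the request may proceed, 503 if it should be throttled. */
    public int tryAdmit() {
        if (unlimited || slots.tryAcquire()) {
            return 200;
        }
        // Caller would send: "Too many concurrent job submission requests
        // received. Please wait for some time before retrying."
        return 503;
    }

    /** Called when the admitted job operation finishes. */
    public void release() {
        if (!unlimited) {
            slots.release();
        }
    }
}
```

With a cap of 1, a second concurrent request gets 503 until the first releases its slot; with a cap of 0, every request is admitted.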



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16103) LLAP: Scheduler timeout monitor never stops with slot nodes

2017-03-03 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-16103:
-
Summary: LLAP: Scheduler timeout monitor never stops with slot nodes  (was: 
LLAP: Scheduler timeout monitor never stops)

> LLAP: Scheduler timeout monitor never stops with slot nodes
> ---
>
> Key: HIVE-16103
> URL: https://issues.apache.org/jira/browse/HIVE-16103
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>
> The scheduler timeout monitor is started when the node count becomes 0 and 
> stopped when the node count becomes 1. For the node count, we were relying on 
> the paths under the llap namespace. With the addition of slot znodes, every 
> node creates 2 paths (worker and slot). As a result, the size of the 
> instances cache will never be 1 (it is always a multiple of 2), which leads 
> to a condition where the timeout monitor is never stopped.
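
A sketch of the kind of fix this implies: derive the node count from worker znodes only, so slot znodes don't inflate it. The path naming here is illustrative, not LLAP's actual registry layout:

```java
import java.util.List;

// Hypothetical sketch: count LLAP nodes by filtering worker znodes only,
// so the paired slot znodes don't double the count.
public class NodeCount {
    public static long countWorkers(List<String> znodeNames) {
        return znodeNames.stream()
                .filter(n -> n.startsWith("worker-"))  // ignore "slot-" entries
                .count();
    }
}
```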





[jira] [Updated] (HIVE-16104) LLAP: preemption may be too aggressive if the pre-empted task doesn't die immediately

2017-03-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-16104:

Attachment: HIVE-16104.patch

The patch. cc [~sseth]

> LLAP: preemption may be too aggressive if the pre-empted task doesn't die 
> immediately
> -
>
> Key: HIVE-16104
> URL: https://issues.apache.org/jira/browse/HIVE-16104
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16104.patch
>
>






[jira] [Updated] (HIVE-16105) LLAP: refactor executor pool to not depend on RejectedExecutionEx for preemption

2017-03-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-16105:

Description: There's a queue inside the threadpool consisting of one item 
(that's how we set it up), which means that we can submit N+1 tasks and not get 
rejected, with one task still not running and no preemption happening (note 
that SyncQueue we pass in does not in fact block in TP, because TP calls offer 
not put; and if it did, preemption would never trigger at all because the only 
thread adding stuff to the TP would be blocked until the item was gone from the 
queue, meaning that there'd never be a rejection). Having a threadpool like 
this also limits our options to e.g. move the task that is being killed out 
immediately to start another one (that itself is out of the scope of this jira) 
 (was: There's a queue inside the threadpool consisting on one item (that's how 
we set it up), which means that we can submit N+1 tasks and not get rejected, 
with one task still not running and no preemption happening. Having a 
threadpool like this also limits our options to e.g. move the task that is 
being killed out immediately to start another one (that itself is out of the 
scope of this jira))

> LLAP: refactor executor pool to not depend on RejectedExecutionEx for 
> preemption
> 
>
> Key: HIVE-16105
> URL: https://issues.apache.org/jira/browse/HIVE-16105
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> There's a queue inside the threadpool consisting of one item (that's how we 
> set it up), which means that we can submit N+1 tasks and not get rejected, 
> with one task still not running and no preemption happening (note that 
> SyncQueue we pass in does not in fact block in TP, because TP calls offer not 
> put; and if it did, preemption would never trigger at all because the only 
> thread adding stuff to the TP would be blocked until the item was gone from 
> the queue, meaning that there'd never be a rejection). Having a threadpool 
> like this also limits our options to e.g. move the task that is being killed 
> out immediately to start another one (that itself is out of the scope of this 
> jira)
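
The N+1 behavior described above can be reproduced with a plain ThreadPoolExecutor; this is a standalone demonstration of the JDK semantics, not LLAP's actual executor code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// With a 1-slot queue, a pool of N threads accepts N+1 tasks before
// RejectedExecutionException fires: execute() calls queue.offer(), never put().
public class QueueDemo {
    public static boolean[] probe() {
        ThreadPoolExecutor tp = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1));  // the one-item queue
        Runnable sleeper = () -> {
            try { Thread.sleep(5000); } catch (InterruptedException ignored) { }
        };
        boolean[] rejected = new boolean[3];
        for (int i = 0; i < 3; i++) {
            try {
                tp.execute(sleeper);  // task 0 runs, task 1 sits in the queue
            } catch (RejectedExecutionException e) {
                rejected[i] = true;   // only task 2 is rejected
            }
        }
        tp.shutdownNow();
        return rejected;
    }
}
```

With one worker thread, the second task is accepted but never preempts anything, which is exactly the gap the issue describes.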





[jira] [Assigned] (HIVE-16105) LLAP: refactor executor pool to not depend on RejectedExecutionEx for preemption

2017-03-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-16105:
---


> LLAP: refactor executor pool to not depend on RejectedExecutionEx for 
> preemption
> 
>
> Key: HIVE-16105
> URL: https://issues.apache.org/jira/browse/HIVE-16105
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> There's a queue inside the threadpool consisting of one item (that's how we 
> set it up), which means that we can submit N+1 tasks and not get rejected, 
> with one task still not running and no preemption happening. Having a 
> threadpool like this also limits our options to e.g. move the task that is 
> being killed out immediately to start another one (that itself is out of the 
> scope of this jira)





[jira] [Assigned] (HIVE-16104) LLAP: preemption may be too aggressive if the pre-empted task doesn't die immediately

2017-03-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-16104:
---


> LLAP: preemption may be too aggressive if the pre-empted task doesn't die 
> immediately
> -
>
> Key: HIVE-16104
> URL: https://issues.apache.org/jira/browse/HIVE-16104
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>






[jira] [Updated] (HIVE-16078) improve abort checking in Tez/LLAP

2017-03-03 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-16078:

Attachment: HIVE-16078.02.patch

Updated

> improve abort checking in Tez/LLAP
> --
>
> Key: HIVE-16078
> URL: https://issues.apache.org/jira/browse/HIVE-16078
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16078.01.patch, HIVE-16078.02.patch, 
> HIVE-16078.patch
>
>
> Sometimes, a fragment can run for a long time after a query fails. It looks 
> from logs like the abort/interrupt were called correctly on the thread, yet 
> the thread hangs around minutes after, doing the below. Other tasks for the 
> same job appear to have exited correctly, after the same abort logic (at 
> least, the same log lines, fwiw)
> {noformat}
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorCopyRow.copyByValue(VectorCopyRow.java:317)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:263)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:389)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardOverflow(VectorMapJoinGenerateResultOperator.java:628)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:277)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:389)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardOverflow(VectorMapJoinGenerateResultOperator.java:628)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:277)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:389)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardOverflow(VectorMapJoinGenerateResultOperator.java:628)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:277)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:389)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardOverflow(VectorMapJoinGenerateResultOperator.java:628)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:277)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:389)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardOverflow(VectorMapJoinGenerateResultOperator.java:628)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:277)
>   at 
> 

[jira] [Assigned] (HIVE-16103) LLAP: Scheduler timeout monitor never stops

2017-03-03 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-16103:



> LLAP: Scheduler timeout monitor never stops
> ---
>
> Key: HIVE-16103
> URL: https://issues.apache.org/jira/browse/HIVE-16103
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>
> The scheduler timeout monitor is started when the node count becomes 0 and 
> stopped when the node count becomes 1. For the node count, we were relying on 
> the paths under the llap namespace. With the addition of slot znodes, every 
> node creates 2 paths (worker and slot). As a result, the size of the 
> instances cache will never be 1 (it is always a multiple of 2), which leads 
> to a condition where the timeout monitor is never stopped.





[jira] [Commented] (HIVE-15708) Upgrade calcite version to 1.12

2017-03-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894865#comment-15894865
 ] 

Sergey Shelukhin commented on HIVE-15708:
-

Thanks for the update!

> Upgrade calcite version to 1.12
> ---
>
> Key: HIVE-15708
> URL: https://issues.apache.org/jira/browse/HIVE-15708
> Project: Hive
>  Issue Type: Task
>  Components: CBO, Logical Optimizer
>Affects Versions: 2.2.0
>Reporter: Ashutosh Chauhan
>Assignee: Remus Rusanu
> Attachments: HIVE-15708.01.patch, HIVE-15708.02.patch, 
> HIVE-15708.03.patch, HIVE-15708.04.patch, HIVE-15708.05.patch, 
> HIVE-15708.06.patch, HIVE-15708.07.patch, HIVE-15708.08.patch, 
> HIVE-15708.09.patch, HIVE-15708.10.patch, HIVE-15708.11.patch, 
> HIVE-15708.12.patch, HIVE-15708.13.patch, HIVE-15708.14.patch
>
>
> Currently we are on 1.10 Need to upgrade calcite version to 1.11





[jira] [Commented] (HIVE-16078) improve abort checking in Tez/LLAP

2017-03-03 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15894859#comment-15894859
 ] 

Sergey Shelukhin commented on HIVE-16078:
-

Failures unrelated. startAbortChecks should also reset nRows with new mergejoin 
usage.

> improve abort checking in Tez/LLAP
> --
>
> Key: HIVE-16078
> URL: https://issues.apache.org/jira/browse/HIVE-16078
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16078.01.patch, HIVE-16078.patch
>
>
> Sometimes, a fragment can run for a long time after a query fails. It looks 
> from logs like the abort/interrupt were called correctly on the thread, yet 
> the thread hangs around minutes after, doing the below. Other tasks for the 
> same job appear to have exited correctly, after the same abort logic (at 
> least, the same log lines, fwiw)
> {noformat}
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorCopyRow.copyByValue(VectorCopyRow.java:317)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:263)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:389)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardOverflow(VectorMapJoinGenerateResultOperator.java:628)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:277)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:389)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardOverflow(VectorMapJoinGenerateResultOperator.java:628)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:277)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:389)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardOverflow(VectorMapJoinGenerateResultOperator.java:628)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:277)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:389)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardOverflow(VectorMapJoinGenerateResultOperator.java:628)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:277)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerGenerateResultOperator.finishInner(VectorMapJoinInnerGenerateResultOperator.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinInnerLongOperator.process(VectorMapJoinInnerLongOperator.java:389)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.forwardOverflow(VectorMapJoinGenerateResultOperator.java:628)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.mapjoin.VectorMapJoinGenerateResultOperator.generateHashMapResultMultiValue(VectorMapJoinGenerateResultOperator.java:277)
>   at 
> 
