[jira] [Commented] (HIVE-10291) Hive on Spark job configuration needs to be logged [Spark Branch]

2015-04-10 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490815#comment-14490815
 ] 

Lefty Leverenz commented on HIVE-10291:
---

Thanks Szehon.  (This is another missing commit email jira.  Sigh.  I'm 
keeping a list for INFRA-9221.)

 Hive on Spark job configuration needs to be logged [Spark Branch]
 -

 Key: HIVE-10291
 URL: https://issues.apache.org/jira/browse/HIVE-10291
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: 1.1.0
Reporter: Szehon Ho
Assignee: Szehon Ho
 Fix For: spark-branch

 Attachments: HIVE-10291-spark.patch, HIVE-10291.2-spark.patch, 
 HIVE-10291.3-spark.patch


 In a Hive on MR job, all the job properties are put into the JobConf, which 
 can then be viewed via the MR2 HistoryServer's Job UI.
 However, in Hive on Spark we are submitting an application that is 
 long-lived.  Hence, we only put properties into the SparkConf relevant to 
 application submission (spark and yarn properties).  Only these are viewable 
 through the Spark HistoryServer Application UI.
 It is the Hive application code (RemoteDriver, aka RemoteSparkContext) that 
 is responsible for serializing and deserializing the job.xml per job (i.e., 
 query) within the application.  Thus, for supportability we also need to 
 provide an equivalent mechanism to print the job.xml per job.
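 For reference, a minimal sketch of one way to dump a per-job configuration 
 (using Hadoop's Configuration.writeXml; an illustration only, not necessarily 
 what the attached patch does):
 {code}
 // Hypothetical: serialize the effective per-job configuration as
 // job.xml-style XML so it shows up in the application logs.
 // Error handling omitted; jobConf stands in for the per-query conf.
 Configuration jobConf = new Configuration();
 ByteArrayOutputStream out = new ByteArrayOutputStream();
 jobConf.writeXml(out);
 System.out.println("Job configuration:\n" + out.toString("UTF-8"));
 {code}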



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10291) Hive on Spark job configuration needs to be logged [Spark Branch]

2015-04-10 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490801#comment-14490801
 ] 

Lefty Leverenz commented on HIVE-10291:
---

Any documentation needed?

 Hive on Spark job configuration needs to be logged [Spark Branch]
 -

 Key: HIVE-10291
 URL: https://issues.apache.org/jira/browse/HIVE-10291
 Project: Hive
  Issue Type: Sub-task
  Components: Spark
Affects Versions: 1.1.0
Reporter: Szehon Ho
Assignee: Szehon Ho
 Fix For: spark-branch

 Attachments: HIVE-10291-spark.patch, HIVE-10291.2-spark.patch, 
 HIVE-10291.3-spark.patch


 In a Hive on MR job, all the job properties are put into the JobConf, which 
 can then be viewed via the MR2 HistoryServer's Job UI.
 However, in Hive on Spark we are submitting an application that is 
 long-lived.  Hence, we only put properties into the SparkConf relevant to 
 application submission (spark and yarn properties).  Only these are viewable 
 through the Spark HistoryServer Application UI.
 It is the Hive application code (RemoteDriver, aka RemoteSparkContext) that 
 is responsible for serializing and deserializing the job.xml per job (i.e., 
 query) within the application.  Thus, for supportability we also need to 
 provide an equivalent mechanism to print the job.xml per job.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10285) Incorrect endFunction call in HiveMetaStore

2015-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490779#comment-14490779
 ] 

Hive QA commented on HIVE-10285:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12724346/HIVE-10285.patch

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 8670 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testNewConnectionConfiguration
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3370/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3370/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3370/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12724346 - PreCommit-HIVE-TRUNK-Build

 Incorrect endFunction call in HiveMetaStore
 ---

 Key: HIVE-10285
 URL: https://issues.apache.org/jira/browse/HIVE-10285
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.14.0
Reporter: Nezih Yigitbasi
Priority: Minor
 Attachments: HIVE-10285.patch


 The HiveMetaStore.get_function() method ends with an incorrect call to the 
 endFunction() method. Instead of:
 {code}
 endFunction("get_database", func != null, ex);
 {code}
 It should call:
 {code}
 endFunction("get_function", func != null, ex);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10220) Disable all non-concurrent access to BytesBytesHashMap

2015-04-10 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-10220:
---
Summary: Disable all non-concurrent access to BytesBytesHashMap  (was: 
Concurrent read access within HybridHashTableContainer )

 Disable all non-concurrent access to BytesBytesHashMap
 --

 Key: HIVE-10220
 URL: https://issues.apache.org/jira/browse/HIVE-10220
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Gopal V
 Attachments: HIVE-10220.1.patch


 HybridHashTableContainer can end up being cached if it does not spill - that 
 needs to follow HIVE-10128 thread safety patterns for the partitioned hash 
 maps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-3299) Create UDF DAYNAME(date)

2015-04-10 Thread Alexander Pivovarov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490806#comment-14490806
 ] 

Alexander Pivovarov commented on HIVE-3299:
---

Test results look good. The failed tests have no relation to patch #5.

 Create UDF  DAYNAME(date)
 -

 Key: HIVE-3299
 URL: https://issues.apache.org/jira/browse/HIVE-3299
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.9.0
Reporter: Namitha Babychan
Assignee: Alexander Pivovarov
  Labels: patch
 Attachments: HIVE-3299.1.patch.txt, HIVE-3299.2.patch, 
 HIVE-3299.3.patch, HIVE-3299.4.patch, HIVE-3299.5.patch, HIVE-3299.patch.txt, 
 Hive-3299_Testcase.doc, udf_dayname.q, udf_dayname.q.out


 dayname(date/timestamp/string)
 Returns the name of the weekday for date. The language used for the name is 
 English.
 select dayname('2015-04-08');
 Wednesday
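 For reference, a minimal sketch of the core logic in plain Java (an 
 illustration, not the attached patch):
 {code}
 // Parse the date, then format it as the English weekday name.
 SimpleDateFormat parser = new SimpleDateFormat("yyyy-MM-dd");
 SimpleDateFormat dayName = new SimpleDateFormat("EEEE", Locale.ENGLISH);
 System.out.println(dayName.format(parser.parse("2015-04-08"))); // Wednesday
 {code}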



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl

2015-04-10 Thread Abdelrahman Shettia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahman Shettia updated HIVE-9182:
--
Attachment: (was: HIVE-9182.1.patch)

 avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
 -

 Key: HIVE-9182
 URL: https://issues.apache.org/jira/browse/HIVE-9182
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Thejas M Nair
Assignee: Abdelrahman Shettia
 Fix For: 1.2.0


 File systems such as s3, wasb (azure) don't implement Hadoop FileSystem acl 
 functionality.
 Hadoop23Shims has code that calls getAclStatus on file systems.
 Instead of calling getAclStatus and catching the exception, we can also check 
 FsPermission#getAclBit.
 Additionally, instead of catching all exceptions for calls to getAclStatus 
 and ignoring them, it is better to just catch UnsupportedOperationException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8519) Hive metastore lock wait timeout

2015-04-10 Thread Alexey Zotov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490276#comment-14490276
 ] 

Alexey Zotov commented on HIVE-8519:


I've seen the same issue when trying to delete a table with ~30,000 
partitions; it looks like it fails with a timeout while fetching information 
about the partitions.

 Hive metastore lock wait timeout
 

 Key: HIVE-8519
 URL: https://issues.apache.org/jira/browse/HIVE-8519
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0
Reporter: Liao, Xiaoge

 We got a lot of exceptions like the one below when dropping a table 
 partition, which made Hive queries very slow. For example, it took 250s to 
 execute use db_test;
 Log:
 2014-10-17 04:04:46,873 ERROR Datastore.Persist (Log4JLogger.java:error(115)) 
 - Update of object 
 org.apache.hadoop.hive.metastore.model.MStorageDescriptor@13c9c4b3 using 
 statement UPDATE `SDS` SET `CD_ID`=? WHERE `SD_ID`=? failed : 
 java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
 at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1074)
 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4096)
 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4028)
 at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2490)
 at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2651)
 at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2734)
 at 
 com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
 at 
 com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2458)
 at 
 com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2375)
 at 
 com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2359)
 at 
 org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
 at 
 org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
 at 
 org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeUpdate(ParamLoggingPreparedStatement.java:399)
 at 
 org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:439)
 at 
 org.datanucleus.store.rdbms.request.UpdateRequest.execute(UpdateRequest.java:374)
 at 
 org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateTable(RDBMSPersistenceHandler.java:417)
 at 
 org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateObject(RDBMSPersistenceHandler.java:390)
 at 
 org.datanucleus.state.JDOStateManager.flush(JDOStateManager.java:5012)
 at org.datanucleus.FlushOrdered.execute(FlushOrdered.java:106)
 at 
 org.datanucleus.ExecutionContextImpl.flushInternal(ExecutionContextImpl.java:4019)
 at 
 org.datanucleus.ExecutionContextThreadedImpl.flushInternal(ExecutionContextThreadedImpl.java:450)
 at org.datanucleus.store.query.Query.prepareDatastore(Query.java:1575)
 at org.datanucleus.store.query.Query.executeQuery(Query.java:1760)
 at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
 at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:243)
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.listStorageDescriptorsWithCD(ObjectStore.java:2185)
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.removeUnusedColumnDescriptor(ObjectStore.java:2131)
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.preDropStorageDescriptor(ObjectStore.java:2162)
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.dropPartitionCommon(ObjectStore.java:1361)
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.dropPartition(ObjectStore.java:1301)
 at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
 at $Proxy4.dropPartition(Unknown Source)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition_common(HiveMetaStore.java:1865)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition(HiveMetaStore.java:1911)
 at sun.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
 at 

[jira] [Updated] (HIVE-10305) TestOrcFile has a mistake that makes metadata test ineffective

2015-04-10 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-10305:
-
Attachment: HIVE-10305.patch

This patch does a flip so that the buffer doesn't look empty.

 TestOrcFile has a mistake that makes metadata test ineffective
 --

 Key: HIVE-10305
 URL: https://issues.apache.org/jira/browse/HIVE-10305
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: HIVE-10305.patch


 Two of the values that are being stored as user metadata in 
 TestOrcFile.metaData weren't flipped and thus were empty buffers. The test 
 passes because they are compared to empty buffers. We should fix the test so 
 that it performs the intended comparison.
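 For context, a minimal sketch of the java.nio pitfall being described:
 {code}
 // Writing leaves position at the end of the data; without flip(), a
 // reader starting from the current position sees no written bytes.
 ByteBuffer buf = ByteBuffer.allocate(16);
 buf.put("hello".getBytes(StandardCharsets.UTF_8));
 buf.flip(); // limit = 5, position = 0: the written bytes are now readable
 {code}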



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10251) HIVE-9664 makes hive depend on ivysettings.xml

2015-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490325#comment-14490325
 ] 

Hive QA commented on HIVE-10251:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12724559/HIVE-10251.3.patch

{color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 8674 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
org.apache.hadoop.hive.ql.session.TestDependencyResolver.testForSuccessfulDownload
org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testMetastoreProxyUser
org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testSaslWithHiveMetaStore
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testNewConnectionConfiguration
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp
org.apache.hive.spark.client.TestSparkClient.testAddJarsAndFiles
org.apache.hive.spark.client.TestSparkClient.testCounters
org.apache.hive.spark.client.TestSparkClient.testErrorJob
org.apache.hive.spark.client.TestSparkClient.testJobSubmission
org.apache.hive.spark.client.TestSparkClient.testMetricsCollection
org.apache.hive.spark.client.TestSparkClient.testSimpleSparkJob
org.apache.hive.spark.client.TestSparkClient.testSyncRpc
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3363/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3363/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3363/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 25 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12724559 - PreCommit-HIVE-TRUNK-Build

 HIVE-9664 makes hive depend on ivysettings.xml
 --

 Key: HIVE-10251
 URL: https://issues.apache.org/jira/browse/HIVE-10251
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Sushanth Sowmyan
Assignee: Anant Nag
  Labels: patch
 Attachments: HIVE-10251.1.patch, HIVE-10251.2.patch, 
 HIVE-10251.3.patch, HIVE-10251.simple.patch


 HIVE-9664 makes hive depend on the existence of ivysettings.xml; if it is 
 not present, hive throws an NPE when instantiating a CLISessionState.
 {noformat}
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hive.ql.session.DependencyResolver.init(DependencyResolver.java:61)
 at 
 org.apache.hadoop.hive.ql.session.SessionState.init(SessionState.java:343)
 at 
 org.apache.hadoop.hive.ql.session.SessionState.init(SessionState.java:334)
 at org.apache.hadoop.hive.cli.CliSessionState.init(CliSessionState.java:60)
 {noformat}
 This happens because of the following bit:
 {noformat}
 // If HIVE_HOME is not defined or file is not found in HIVE_HOME/conf 
 then load default ivysettings.xml from class loader
 if (ivysettingsPath == 

[jira] [Updated] (HIVE-10304) Add deprecation message to HiveCLI

2015-04-10 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-10304:
-
Attachment: HIVE-10304.patch

Attaching patch.  Now the CLI displays the message: Hive CLI is deprecated and 
migration to Beeline is recommended.

 Add deprecation message to HiveCLI
 --

 Key: HIVE-10304
 URL: https://issues.apache.org/jira/browse/HIVE-10304
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Affects Versions: 1.1.0
Reporter: Szehon Ho
 Attachments: HIVE-10304.patch


 As Beeline is now the recommended command line tool for Hive, we should add a 
 message to HiveCLI to indicate that it is deprecated and redirect users to 
 Beeline.  
 This is not a suggestion to remove HiveCLI for now, just a helpful pointer so 
 that users know to focus their attention on Beeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-10304) Add deprecation message to HiveCLI

2015-04-10 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho reassigned HIVE-10304:


Assignee: Szehon Ho

 Add deprecation message to HiveCLI
 --

 Key: HIVE-10304
 URL: https://issues.apache.org/jira/browse/HIVE-10304
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Affects Versions: 1.1.0
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-10304.patch


 As Beeline is now the recommended command line tool for Hive, we should add a 
 message to HiveCLI to indicate that it is deprecated and redirect users to 
 Beeline.  
 This is not a suggestion to remove HiveCLI for now, just a helpful pointer so 
 that users know to focus their attention on Beeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10228) Changes to Hive Export/Import/DropTable/DropPartition to support replication semantics

2015-04-10 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-10228:

Attachment: HIVE-10228.patch

Patch attached.

 Changes to Hive Export/Import/DropTable/DropPartition to support replication 
 semantics
 --

 Key: HIVE-10228
 URL: https://issues.apache.org/jira/browse/HIVE-10228
 Project: Hive
  Issue Type: Sub-task
  Components: Import/Export
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-10228.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl

2015-04-10 Thread Abdelrahman Shettia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdelrahman Shettia updated HIVE-9182:
--
Attachment: HIVE-9182.3.patch

 avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
 -

 Key: HIVE-9182
 URL: https://issues.apache.org/jira/browse/HIVE-9182
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Thejas M Nair
Assignee: Abdelrahman Shettia
 Fix For: 1.2.0

 Attachments: HIVE-9182.2.patch, HIVE-9182.3.patch


 File systems such as s3, wasp (azure) don't implement Hadoop FileSystem acl 
 functionality.
 Hadoop23Shims has code that calls getAclStatus on file systems.
 Instead of calling getAclStatus and catching the exception, we can also check 
 FsPermission#getAclBit .
 Additionally, instead of catching all exceptions for calls to getAclStatus 
 and ignoring them, it is better to just catch UnsupportedOperationException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10228) Changes to Hive Export/Import/DropTable/DropPartition to support replication semantics

2015-04-10 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-10228:

Description: 
We need to update a couple of hive commands to support replication semantics. 
To wit, we need the following:

EXPORT ... [FOR [METADATA] REPLICATION('comment')]

Export will now support an extra optional clause to tell it that this export is 
being prepared for the purpose of replication. There is also an additional 
optional clause here, that allows for the export to be a metadata-only export, 
to handle cases of capturing the diff for alter statements, for example.

Also, if done for replication, the non-presence of a table, or a table being a 
view/offline table/non-native table is not considered an error, and instead, 
will result in a successful no-op.

IMPORT ... (as normal) – but handles new semantics 

No syntax changes for import, but import will have to change to be able to 
handle all the permutations of export dumps possible. Also, import will have to 
ensure that it should update the object only if the update being imported is 
not older than the state of the object.

DROP TABLE ... FOR REPLICATION('eventid')

Drop Table now has an additional clause, to specify that this drop table is 
being done for replication purposes, and that the drop should not actually 
drop the table if the table is newer than the specified event id.

ALTER TABLE ... DROP PARTITION (...) FOR REPLICATION('eventid')

Similarly, Drop Partition also has an equivalent change to Drop Table.

=

In addition, we introduce a new property repl.last.id, which when tagged on 
to table properties or partition properties on a replication-destination, holds 
the effective state identifier of the object.
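For illustration, hypothetical usage of the proposed clauses (the table name, 
partition, and event id are invented):

{code}
EXPORT TABLE page_view TO '/repl/dump/page_view'
  FOR METADATA REPLICATION('initial metadata dump');

DROP TABLE page_view FOR REPLICATION('1042');

ALTER TABLE page_view DROP PARTITION (dt='2015-04-10')
  FOR REPLICATION('1042');
{code}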

 Changes to Hive Export/Import/DropTable/DropPartition to support replication 
 semantics
 --

 Key: HIVE-10228
 URL: https://issues.apache.org/jira/browse/HIVE-10228
 Project: Hive
  Issue Type: Sub-task
  Components: Import/Export
Affects Versions: 1.2.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-10228.patch


 We need to update a couple of hive commands to support replication semantics. 
 To wit, we need the following:
 EXPORT ... [FOR [METADATA] REPLICATION('comment')]
 Export will now support an extra optional clause to tell it that this export 
 is being prepared for the purpose of replication. There is also an additional 
 optional clause here, that allows for the export to be a metadata-only 
 export, to handle cases of capturing the diff for alter statements, for 
 example.
 Also, if done for replication, the non-presence of a table, or a table being 
 a view/offline table/non-native table is not considered an error, and 
 instead, will result in a successful no-op.
 IMPORT ... (as normal) – but handles new semantics 
 No syntax changes for import, but import will have to change to be able to 
 handle all the permutations of export dumps possible. Also, import will have 
 to ensure that it should update the object only if the update being imported 
 is not older than the state of the object.
 DROP TABLE ... FOR REPLICATION('eventid')
 Drop Table now has an additional clause, to specify that this drop table is 
 being done for replication purposes, and that the drop should not actually 
 drop the table if the table is newer than the specified event id.
 ALTER TABLE ... DROP PARTITION (...) FOR REPLICATION('eventid')
 Similarly, Drop Partition also has an equivalent change to Drop Table.
 =
 In addition, we introduce a new property repl.last.id, which when tagged on 
 to table properties or partition properties on a replication-destination, 
 holds the effective state identifier of the object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10228) Changes to Hive Export/Import/DropTable/DropPartition to support replication semantics

2015-04-10 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490439#comment-14490439
 ] 

Sushanth Sowmyan commented on HIVE-10228:
-

Jumbo patch. [~alangates], could you please take a look?

 Changes to Hive Export/Import/DropTable/DropPartition to support replication 
 semantics
 --

 Key: HIVE-10228
 URL: https://issues.apache.org/jira/browse/HIVE-10228
 Project: Hive
  Issue Type: Sub-task
  Components: Import/Export
Affects Versions: 1.2.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-10228.patch


 We need to update a couple of hive commands to support replication semantics. 
 To wit, we need the following:
 EXPORT ... [FOR [METADATA] REPLICATION('comment')]
 Export will now support an extra optional clause to tell it that this export 
 is being prepared for the purpose of replication. There is also an additional 
 optional clause here, that allows for the export to be a metadata-only 
 export, to handle cases of capturing the diff for alter statements, for 
 example.
 Also, if done for replication, the non-presence of a table, or a table being 
 a view/offline table/non-native table is not considered an error, and 
 instead, will result in a successful no-op.
 IMPORT ... (as normal) – but handles new semantics 
 No syntax changes for import, but import will have to change to be able to 
 handle all the permutations of export dumps possible. Also, import will have 
 to ensure that it should update the object only if the update being imported 
 is not older than the state of the object.
 DROP TABLE ... FOR REPLICATION('eventid')
 Drop Table now has an additional clause, to specify that this drop table is 
 being done for replication purposes, and that the drop should not actually 
 drop the table if the table is newer than the specified event id.
 ALTER TABLE ... DROP PARTITION (...) FOR REPLICATION('eventid')
 Similarly, Drop Partition also has an equivalent change to Drop Table.
 =
 In addition, we introduce a new property repl.last.id, which when tagged on 
 to table properties or partition properties on a replication-destination, 
 holds the effective state identifier of the object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10148) update of bucketing column should not be allowed

2015-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490463#comment-14490463
 ] 

Hive QA commented on HIVE-10148:




{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12724237/HIVE-10148.5.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3365/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3365/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3365/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-3365/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 
'serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfoUtils.java'
Reverted 'ql/src/test/results/clientpositive/spark/mapjoin_decimal.q.out'
Reverted 'ql/src/test/results/clientpositive/tez/vector_char_mapjoin1.q.out'
Reverted 'ql/src/test/results/clientpositive/tez/mapjoin_decimal.q.out'
Reverted 'ql/src/test/results/clientpositive/tez/vector_varchar_mapjoin1.q.out'
Reverted 'ql/src/test/results/clientpositive/vector_varchar_mapjoin1.q.out'
Reverted 'ql/src/test/results/clientpositive/vector_char_mapjoin1.q.out'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java'
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20S/target 
shims/0.23/target shims/aggregator/target shims/common/target 
shims/scheduler/target packaging/target hbase-handler/target testutils/target 
jdbc/target metastore/target itests/target itests/thirdparty 
itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target 
itests/hive-unit-hadoop2/target itests/hive-minikdc/target 
itests/hive-jmh/target itests/hive-unit/target itests/custom-serde/target 
itests/util/target itests/qtest-spark/target hcatalog/target 
hcatalog/core/target hcatalog/streaming/target 
hcatalog/server-extensions/target hcatalog/webhcat/svr/target 
hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target 
accumulo-handler/target hwi/target common/target common/src/gen 
spark-client/target service/target contrib/target serde/target beeline/target 
odbc/target cli/target ql/dependency-reduced-pom.xml ql/target 
ql/src/test/results/clientpositive/join_on_varchar.q.out 
ql/src/test/queries/clientpositive/join_on_varchar.q
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1672780.

At revision 1672780.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12724237 - PreCommit-HIVE-TRUNK-Build

 update of bucketing column should not be allowed
 --

 Key: HIVE-10148
 URL: https://issues.apache.org/jira/browse/HIVE-10148
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Affects Versions: 1.1.0
Reporter: Eugene Koifman

[jira] [Commented] (HIVE-10190) CBO: AST mode checks for TABLESAMPLE with AST.toString().contains(TOK_TABLESPLITSAMPLE)

2015-04-10 Thread Laljo John Pullokkaran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490465#comment-14490465
 ] 

Laljo John Pullokkaran commented on HIVE-10190:
---

[~sircodesalot] How does patch 3 improve on patch 2?
Other than wrapping the AST in a stream I don't see any benefit; you also 
incur the cost of reflection.

 CBO: AST mode checks for TABLESAMPLE with 
 AST.toString().contains(TOK_TABLESPLITSAMPLE)
 -

 Key: HIVE-10190
 URL: https://issues.apache.org/jira/browse/HIVE-10190
 Project: Hive
  Issue Type: Bug
  Components: CBO
Affects Versions: 1.2.0
Reporter: Gopal V
Assignee: Pengcheng Xiong
Priority: Trivial
  Labels: perfomance
 Attachments: HIVE-10190-querygen.py, HIVE-10190.01.patch, 
 HIVE-10190.02.patch, HIVE-10190.03.patch


 {code}
 public static boolean validateASTForUnsupportedTokens(ASTNode ast) {
   String astTree = ast.toStringTree();
   // if any of following tokens are present in AST, bail out
   String[] tokens = { "TOK_CHARSETLITERAL", "TOK_TABLESPLITSAMPLE" };
   for (String token : tokens) {
     if (astTree.contains(token)) {
       return false;
     }
   }
   return true;
 }
 {code}
 This is an issue for a SQL query that is far bigger in AST string form 
 (~700kb) than in text.
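 For comparison, a hypothetical alternative that walks the tree and compares 
 token types, avoiding materializing the ~700kb string (uses the ANTLR tree 
 API; this is a sketch, not one of the attached patches):
 {code}
 private static boolean containsToken(ASTNode node, Set<Integer> badTokens) {
   if (badTokens.contains(node.getType())) {
     return true;
   }
   for (int i = 0; i < node.getChildCount(); i++) {
     if (containsToken((ASTNode) node.getChild(i), badTokens)) {
       return true;
     }
   }
   return false;
 }
 {code}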



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl

2015-04-10 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490482#comment-14490482
 ] 

Thejas M Nair commented on HIVE-9182:
-

+1


 avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
 -

 Key: HIVE-9182
 URL: https://issues.apache.org/jira/browse/HIVE-9182
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Thejas M Nair
Assignee: Abdelrahman Shettia
 Fix For: 1.2.0

 Attachments: HIVE-9182.2.patch, HIVE-9182.3.patch


 File systems such as s3, wasb (azure) don't implement Hadoop FileSystem acl 
 functionality.
 Hadoop23Shims has code that calls getAclStatus on file systems.
 Instead of calling getAclStatus and catching the exception, we can also check 
 FsPermission#getAclBit.
 Additionally, instead of catching all exceptions for calls to getAclStatus 
 and ignoring them, it is better to just catch UnsupportedOperationException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-10154) LLAP: GC issues 1

2015-04-10 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-10154.
-
   Resolution: Fixed
Fix Version/s: llap

This adds an object pool, and pooling for the following objects:
- ColumnVectorBatch (small pool per split, so that ColumnVector-s can be 
reused); the rest of the pools are global
- ColumnReadContext and ColumnReadContext.StreamContext (not sure these are 
needed; they are always used within one method call, so GC could be very 
efficient for those)
- OrcEncodedColumnBatch + OrcEncodedColumnBatch.StreamBuffer (passed between 
threads/classes)
- TrackedCacheChunk/ProcCacheChunk (passed between methods; may also be 
unnecessary)
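For context, a minimal sketch of the kind of object pool this describes (a 
generic illustration, not the patch's implementation):

{code}
// A tiny pool: take() reuses a released instance when one is available,
// otherwise allocates a new one; offer() recycles up to a fixed cap.
final class SimplePool<T> {
  private final ArrayDeque<T> free = new ArrayDeque<T>();
  private final int capacity;
  private final Callable<T> factory;

  SimplePool(int capacity, Callable<T> factory) {
    this.capacity = capacity;
    this.factory = factory;
  }

  synchronized T take() throws Exception {
    T obj = free.poll();
    return obj != null ? obj : factory.call();
  }

  synchronized void offer(T obj) {
    if (free.size() < capacity) {
      free.push(obj); // beyond the cap, let the GC reclaim it
    }
  }
}
{code}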

 LLAP: GC issues 1
 -

 Key: HIVE-10154
 URL: https://issues.apache.org/jira/browse/HIVE-10154
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: llap


 Lots of small objects like CVBs, CVs, CacheChunk-s, BufferChunk-s, etc. 
 floating around. Need to pool them, or reuse is a simpler way when possible



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10154) LLAP: GC issues 1

2015-04-10 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-10154:

Attachment: HIVE-10154.patch

 LLAP: GC issues 1
 -

 Key: HIVE-10154
 URL: https://issues.apache.org/jira/browse/HIVE-10154
 Project: Hive
  Issue Type: Sub-task
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: llap

 Attachments: HIVE-10154.patch


 Lots of small objects like CVBs, CVs, CacheChunk-s, BufferChunk-s, etc. 
 floating around. Need to pool them, or reuse is a simpler way when possible



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10012) LLAP: Hive sessions run before Slider registers to YARN registry fail to launch

2015-04-10 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490497#comment-14490497
 ] 

Siddharth Seth commented on HIVE-10012:
---

Glanced over. Mostly looks good to me. This removes some of the log messages 
emitted when a host is selected for locality; those may be useful for 
debugging.
Also, there's a check for local addresses which needs to be added back to the 
FixedRegistryImpl.
{code}
inetAddress = InetAddress.getByName(host);
if (NetUtils.isLocalAddress(inetAddress)) {
{code}
This is required to match the hostname reported by a daemon with the one used 
by the scheduler.

 LLAP: Hive sessions run before Slider registers to YARN registry fail to 
 launch
 ---

 Key: HIVE-10012
 URL: https://issues.apache.org/jira/browse/HIVE-10012
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Gopal V
Assignee: Gopal V
 Fix For: llap

 Attachments: HIVE-10012.1.patch, HIVE-10012.wip1.patch


 The LLAP YARN registry only registers entries after at least one daemon is up.
 Any Tez session starting before that will end up with an error listing 
 zookeeper directories.
 {code}
 2015-03-18 16:54:21,392 FATAL [main] app.DAGAppMaster: Error starting 
 DAGAppMaster
 org.apache.hadoop.service.ServiceStateException: 
 org.apache.hadoop.fs.PathNotFoundException: 
 `/users/sershe/services/org-apache-hive/llap0/components/workers':
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10306) We need to print tez summary when hive.server2.logging.level = PERFORMANCE.

2015-04-10 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-10306:
-
Attachment: HIVE-10306.1.patch

 We need to print tez summary when hive.server2.logging.level = PERFORMANCE. 
 -

 Key: HIVE-10306
 URL: https://issues.apache.org/jira/browse/HIVE-10306
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-10306.1.patch


 We need to print the tez summary when hive.server2.logging.level = 
 PERFORMANCE. We introduced this parameter via HIVE-10119.
 The logging param for levels is only relevant to HS2, so for hive-cli users 
 hive.tez.exec.print.summary still makes sense. We can check for the log-level 
 param as well in the places where we check the value of 
 hive.tez.exec.print.summary, i.e., consider hive.tez.exec.print.summary=true 
 if log.level = PERFORMANCE.
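 For illustration, a minimal sketch of the proposed check (property names are 
 taken from this description, and conf is assumed to be the session's Hadoop 
 Configuration; this is not the actual patch):
 {code}
 // Treat the summary as enabled when either the explicit flag is set
 // or the HS2 logging level is PERFORMANCE.
 boolean printSummary =
     conf.getBoolean("hive.tez.exec.print.summary", false)
         || "PERFORMANCE".equalsIgnoreCase(conf.get("hive.server2.logging.level", ""));
 {code}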



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10306) We need to print tez summary when hive.server2.logging.level = PERFORMANCE.

2015-04-10 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-10306:
-
Attachment: HIVE-10306.1.patch

 We need to print tez summary when hive.server2.logging.level = PERFORMANCE. 
 -

 Key: HIVE-10306
 URL: https://issues.apache.org/jira/browse/HIVE-10306
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-10306.1.patch


 We need to print the tez summary when hive.server2.logging.level = 
 PERFORMANCE. We introduced this parameter via HIVE-10119.
 The logging param for levels is only relevant to HS2, so for hive-cli users 
 hive.tez.exec.print.summary still makes sense. We can check for the log-level 
 param as well in the places where we check the value of 
 hive.tez.exec.print.summary, i.e., consider hive.tez.exec.print.summary=true 
 if log.level = PERFORMANCE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10306) We need to print tez summary when hive.server2.logging.level = PERFORMANCE.

2015-04-10 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-10306:
-
Attachment: (was: HIVE-10306.1.patch)

 We need to print tez summary when hive.server2.logging.level = PERFORMANCE. 
 -

 Key: HIVE-10306
 URL: https://issues.apache.org/jira/browse/HIVE-10306
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-10306.1.patch


 We need to print the tez summary when hive.server2.logging.level = 
 PERFORMANCE. We introduced this parameter via HIVE-10119.
 The logging param for levels is only relevant to HS2, so for hive-cli users 
 hive.tez.exec.print.summary still makes sense. We can check for the log-level 
 param as well in the places where we check the value of 
 hive.tez.exec.print.summary, i.e., consider hive.tez.exec.print.summary=true 
 if log.level = PERFORMANCE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9645) Constant folding case NULL equality

2015-04-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-9645:
---
Attachment: HIVE-9645.7.patch

Updated golden files.

 Constant folding case NULL equality
 ---

 Key: HIVE-9645
 URL: https://issues.apache.org/jira/browse/HIVE-9645
 Project: Hive
  Issue Type: Bug
  Components: Logical Optimizer
Affects Versions: 0.14.0, 1.0.0, 1.1.0
Reporter: Gopal V
Assignee: Ashutosh Chauhan
 Attachments: HIVE-9645.1.patch, HIVE-9645.2.patch, HIVE-9645.3.patch, 
 HIVE-9645.4.patch, HIVE-9645.5.patch, HIVE-9645.6.patch, HIVE-9645.7.patch, 
 HIVE-9645.patch


 Hive's logical optimizer does not follow the Null scan codepath when it 
 encounters a NULL = 1 predicate;
 NULL = 1 is not evaluated as false in the constant propagation implementation.
 {code}
 hive> explain select count(1) from store_sales where null=1;
 ...
  TableScan
   alias: store_sales
   filterExpr: (null = 1) (type: boolean)
   Statistics: Num rows: 550076554 Data size: 49570324480 
 Basic stats: COMPLETE Column stats: COMPLETE
   Filter Operator
 predicate: (null = 1) (type: boolean)
 Statistics: Num rows: 275038277 Data size: 0 Basic stats: 
 PARTIAL Column stats: COMPLETE
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-7723) Explain plan for complex query with lots of partitions is slow due to in-efficient collection used to find a matching ReadEntity

2015-04-10 Thread Mostafa Mokhtar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mostafa Mokhtar updated HIVE-7723:
--
Attachment: HIVE-7723.9.patch

Rebase and run tests

 Explain plan for complex query with lots of partitions is slow due to 
 in-efficient collection used to find a matching ReadEntity
 

 Key: HIVE-7723
 URL: https://issues.apache.org/jira/browse/HIVE-7723
 Project: Hive
  Issue Type: Bug
  Components: CLI, Physical Optimizer
Affects Versions: 0.13.1
Reporter: Mostafa Mokhtar
Assignee: Mostafa Mokhtar
 Attachments: HIVE-7723.1.patch, HIVE-7723.10.patch, 
 HIVE-7723.2.patch, HIVE-7723.3.patch, HIVE-7723.4.patch, HIVE-7723.5.patch, 
 HIVE-7723.6.patch, HIVE-7723.7.patch, HIVE-7723.8.patch, HIVE-7723.9.patch


 Explain on TPC-DS query 64 took 11 seconds; when the CLI was profiled, it 
 showed that ReadEntity.equals is taking ~40% of the CPU.
 ReadEntity.equals is called from the snippet below.
 The set is iterated over again and again to find the actual match; a HashMap 
 is a better option for this case, as Set doesn't have a get method.
 Also, for ReadEntity, equals is case-insensitive while hash is 
 case-sensitive, which is undesired behavior.
 {code}
 public static ReadEntity addInput(Set<ReadEntity> inputs, ReadEntity newInput) {
   // If the input is already present, make sure the new parent is added to
   // the input.
   if (inputs.contains(newInput)) {
     for (ReadEntity input : inputs) {
       if (input.equals(newInput)) {
         if ((newInput.getParents() != null) && (!newInput.getParents().isEmpty())) {
           input.getParents().addAll(newInput.getParents());
           input.setDirect(input.isDirect() || newInput.isDirect());
         }
         return input;
       }
     }
     assert false;
   } else {
     inputs.add(newInput);
     return newInput;
   }
   // make compile happy
   return null;
 }
 {code}
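 For illustration, a sketch of the keyed lookup argued for above (a 
 hypothetical rewrite, not one of the attached patches):
 {code}
 // Map<ReadEntity, ReadEntity> gives an O(1) get instead of scanning the Set.
 // Assumes equals/hashCode are made consistent, per the note above.
 public static ReadEntity addInput(Map<ReadEntity, ReadEntity> inputs,
     ReadEntity newInput) {
   ReadEntity existing = inputs.get(newInput);
   if (existing != null) {
     if ((newInput.getParents() != null) && (!newInput.getParents().isEmpty())) {
       existing.getParents().addAll(newInput.getParents());
       existing.setDirect(existing.isDirect() || newInput.isDirect());
     }
     return existing;
   }
   inputs.put(newInput, newInput);
   return newInput;
 }
 {code}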
 This is the query used : 
 {code}
 select cs1.product_name ,cs1.store_name ,cs1.store_zip ,cs1.b_street_number 
 ,cs1.b_streen_name ,cs1.b_city
  ,cs1.b_zip ,cs1.c_street_number ,cs1.c_street_name ,cs1.c_city 
 ,cs1.c_zip ,cs1.syear ,cs1.cnt
  ,cs1.s1 ,cs1.s2 ,cs1.s3
  ,cs2.s1 ,cs2.s2 ,cs2.s3 ,cs2.syear ,cs2.cnt
 from
 (select i_product_name as product_name ,i_item_sk as item_sk ,s_store_name as 
 store_name
  ,s_zip as store_zip ,ad1.ca_street_number as b_street_number 
 ,ad1.ca_street_name as b_streen_name
  ,ad1.ca_city as b_city ,ad1.ca_zip as b_zip ,ad2.ca_street_number as 
 c_street_number
  ,ad2.ca_street_name as c_street_name ,ad2.ca_city as c_city ,ad2.ca_zip 
 as c_zip
  ,d1.d_year as syear ,d2.d_year as fsyear ,d3.d_year as s2year ,count(*) 
 as cnt
  ,sum(ss_wholesale_cost) as s1 ,sum(ss_list_price) as s2 
 ,sum(ss_coupon_amt) as s3
   FROM   store_sales
 JOIN store_returns ON store_sales.ss_item_sk = 
 store_returns.sr_item_sk and store_sales.ss_ticket_number = 
 store_returns.sr_ticket_number
 JOIN customer ON store_sales.ss_customer_sk = customer.c_customer_sk
 JOIN date_dim d1 ON store_sales.ss_sold_date_sk = d1.d_date_sk
 JOIN date_dim d2 ON customer.c_first_sales_date_sk = d2.d_date_sk 
 JOIN date_dim d3 ON customer.c_first_shipto_date_sk = d3.d_date_sk
 JOIN store ON store_sales.ss_store_sk = store.s_store_sk
 JOIN customer_demographics cd1 ON store_sales.ss_cdemo_sk= 
 cd1.cd_demo_sk
 JOIN customer_demographics cd2 ON customer.c_current_cdemo_sk = 
 cd2.cd_demo_sk
 JOIN promotion ON store_sales.ss_promo_sk = promotion.p_promo_sk
 JOIN household_demographics hd1 ON store_sales.ss_hdemo_sk = 
 hd1.hd_demo_sk
 JOIN household_demographics hd2 ON customer.c_current_hdemo_sk = 
 hd2.hd_demo_sk
 JOIN customer_address ad1 ON store_sales.ss_addr_sk = 
 ad1.ca_address_sk
 JOIN customer_address ad2 ON customer.c_current_addr_sk = 
 ad2.ca_address_sk
 JOIN income_band ib1 ON hd1.hd_income_band_sk = ib1.ib_income_band_sk
 JOIN income_band ib2 ON hd2.hd_income_band_sk = ib2.ib_income_band_sk
 JOIN item ON store_sales.ss_item_sk = item.i_item_sk
 JOIN
  (select cs_item_sk
 ,sum(cs_ext_list_price) as 
 sale,sum(cr_refunded_cash+cr_reversed_charge+cr_store_credit) as refund
   from catalog_sales JOIN catalog_returns
   ON catalog_sales.cs_item_sk = catalog_returns.cr_item_sk
 and catalog_sales.cs_order_number = catalog_returns.cr_order_number
   group by cs_item_sk
   having 
 sum(cs_ext_list_price) > 2*sum(cr_refunded_cash+cr_reversed_charge+cr_store_credit))
  cs_ui
 ON store_sales.ss_item_sk = cs_ui.cs_item_sk
   WHERE  
  cd1.cd_marital_status  

[jira] [Commented] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl

2015-04-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490555#comment-14490555
 ] 

Chris Nauroth commented on HIVE-9182:
-

Is {{setFullFileStatus}} always called in situations where source and 
destination are on the same file system?  That looks to be true for the call 
sites I found in the {{DDLTask}}, {{MoveTask}}, and {{Hive}} classes.  If so, 
then the presence of ACLs in the source file implies that ACLs will be 
supported when you make the setfacl call on the destination path.  (ACLs are 
enabled or disabled for the whole HDFS namespace.)  That would mean it's 
feasible to rely on checking {{sourceStatus.getPermission().getAclBit()}} and 
remove all calls to {{isExtendedAclEnabled}}, which relies on inspecting the 
configuration.

Even if you want to continue relying on the configuration, you can still check 
the ACL bit on the source before trying the {{getAclStatus}} call, which is an 
RPC.
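For illustration, a sketch of the cheaper check (using Hadoop's FsPermission 
and AclStatus APIs, with sourceStatus and fs as in the call sites above; a 
sketch under those assumptions, not the actual patch):

{code}
// Consult the ACL bit carried in the source's permissions first; only
// issue the getAclStatus RPC when extended ACL entries actually exist.
FsPermission perm = sourceStatus.getPermission();
if (perm.getAclBit()) {
  AclStatus aclStatus = fs.getAclStatus(sourceStatus.getPath()); // the RPC
  // ...apply aclStatus entries to the destination via setAcl...
}
{code}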

If you decide to go ahead and remove this dependency on 
{{dfs.namenode.acls.enabled}} in the configuration, then there are also some 
log messages which mention the configuration property that could be updated.

Thanks for the patch, Abdelrahman!

 avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
 -

 Key: HIVE-9182
 URL: https://issues.apache.org/jira/browse/HIVE-9182
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Thejas M Nair
Assignee: Abdelrahman Shettia
 Fix For: 1.2.0

 Attachments: HIVE-9182.2.patch, HIVE-9182.3.patch


 File systems such as s3, wasb (azure) don't implement Hadoop FileSystem acl 
 functionality.
 Hadoop23Shims has code that calls getAclStatus on file systems.
 Instead of calling getAclStatus and catching the exception, we can also check 
 FsPermission#getAclBit.
 Additionally, instead of catching all exceptions for calls to getAclStatus 
 and ignoring them, it is better to just catch UnsupportedOperationException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10239) Create scripts to do metastore upgrade tests on jenkins for Derby, Oracle and PostgreSQL

2015-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490565#comment-14490565
 ] 

Hive QA commented on HIVE-10239:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12724249/HIVE-10239.0.patch

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 8672 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testNewConnectionConfiguration
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3366/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3366/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3366/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12724249 - PreCommit-HIVE-TRUNK-Build

 Create scripts to do metastore upgrade tests on jenkins for Derby, Oracle and 
 PostgreSQL
 

 Key: HIVE-10239
 URL: https://issues.apache.org/jira/browse/HIVE-10239
 Project: Hive
  Issue Type: Improvement
Affects Versions: 1.1.0
Reporter: Naveen Gangam
Assignee: Naveen Gangam
 Attachments: HIVE-10239-donotcommit.patch, HIVE-10239.0.patch, 
 HIVE-10239.patch


 We need to create DB-implementation-specific scripts that use the framework 
 introduced in HIVE-9800, so that any metastore schema changes are tested across 
 all supported databases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9182) avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl

2015-04-10 Thread Abdelrahman Shettia (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490568#comment-14490568
 ] 

Abdelrahman Shettia commented on HIVE-9182:
---

Thanks Chris for your comment. I appreciate the feedback. 

Thanks
-Rahman 

 avoid FileSystem.getAclStatus rpc call for filesystems that don't support acl
 -

 Key: HIVE-9182
 URL: https://issues.apache.org/jira/browse/HIVE-9182
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.14.0
Reporter: Thejas M Nair
Assignee: Abdelrahman Shettia
 Fix For: 1.2.0

 Attachments: HIVE-9182.2.patch, HIVE-9182.3.patch


 File systems such as s3 and wasb (azure) don't implement Hadoop FileSystem ACL 
 functionality.
 Hadoop23Shims has code that calls getAclStatus on file systems.
 Instead of calling getAclStatus and catching the exception, we can first check 
 FsPermission#getAclBit.
 Additionally, instead of catching all exceptions for calls to getAclStatus 
 and ignoring them, it is better to catch just UnsupportedOperationException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-9937) LLAP: Vectorized Field-By-Field Serialize / Deserialize to support new Vectorized Map Join

2015-04-10 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-9937:
---
Attachment: HIVE-9937.92.patch

 LLAP: Vectorized Field-By-Field Serialize / Deserialize to support new 
 Vectorized Map Join
 --

 Key: HIVE-9937
 URL: https://issues.apache.org/jira/browse/HIVE-9937
 Project: Hive
  Issue Type: Sub-task
Reporter: Matt McCline
Assignee: Matt McCline
 Attachments: HIVE-9937.01.patch, HIVE-9937.02.patch, 
 HIVE-9937.03.patch, HIVE-9937.04.patch, HIVE-9937.05.patch, 
 HIVE-9937.06.patch, HIVE-9937.07.patch, HIVE-9937.08.patch, 
 HIVE-9937.09.patch, HIVE-9937.91.patch, HIVE-9937.92.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-9524) TestEncryptedHDFSCliDriver.encryption_join_with_different_encryption_keys is failing on trunk

2015-04-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-9524.

Resolution: Cannot Reproduce

 TestEncryptedHDFSCliDriver.encryption_join_with_different_encryption_keys is 
 failing on trunk
 -

 Key: HIVE-9524
 URL: https://issues.apache.org/jira/browse/HIVE-9524
 Project: Hive
  Issue Type: Test
  Components: Encryption
Affects Versions: 1.2.0
Reporter: Ashutosh Chauhan

 Able to reproduce consistently locally, with the following stacktrace:
 {code}
 Diagnostic Messages for this Task:
 Error: java.lang.RuntimeException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
 processing row {key:238,value:val_238}
 at 
 org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:179)
 at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error while processing row {key:238,value:val_238}
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:503)
 at 
 org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170)
 ... 8 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
 java.security.InvalidKeyException: Illegal key size
 at 
 org.apache.hadoop.crypto.JceAesCtrCryptoCodec$JceAesCtrCipher.init(JceAesCtrCryptoCodec.java:116)
 at 
 org.apache.hadoop.crypto.key.KeyProviderCryptoExtension$DefaultCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:264)
 at 
 org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.generateEncryptedDataEncryptionKey(FSNamesystem.java:2489)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2620)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
 Caused by: java.security.InvalidKeyException: Illegal key size
 at javax.crypto.Cipher.checkCryptoPerm(Cipher.java:1024)
 at javax.crypto.Cipher.implInit(Cipher.java:790)
 at javax.crypto.Cipher.chooseProvider(Cipher.java:849)
 at javax.crypto.Cipher.init(Cipher.java:1348)
 at javax.crypto.Cipher.init(Cipher.java:1282)
 at 
 org.apache.hadoop.crypto.JceAesCtrCryptoCodec$JceAesCtrCipher.init(JceAesCtrCryptoCodec.java:113)
 ... 16 more
 at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:542)
 at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:640)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
 at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
 at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
 

[jira] [Assigned] (HIVE-9015) Constant Folding optimizer doesn't handle expressions involving null

2015-04-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-9015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-9015:
--

Assignee: Ashutosh Chauhan

 Constant Folding optimizer doesn't handle expressions involving null
 

 Key: HIVE-9015
 URL: https://issues.apache.org/jira/browse/HIVE-9015
 Project: Hive
  Issue Type: Task
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan

 Expressions which are guaranteed to evaluate to {{null}} aren't folded by 
 the optimizer yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10272) Some HCat tests fail under windows

2015-04-10 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490586#comment-14490586
 ] 

Thejas M Nair commented on HIVE-10272:
--

+1

 Some HCat tests fail under windows
 --

 Key: HIVE-10272
 URL: https://issues.apache.org/jira/browse/HIVE-10272
 Project: Hive
  Issue Type: Bug
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-10272.patch


 Some HCat tests fail under Windows with errors like this:
 {noformat}
 java.lang.RuntimeException: java.lang.IllegalArgumentException: Pathname 
 /D:/w/hv/hcatalog/hcatalog-pig-adapter/target/tmp/scratchdir from 
 D:/w/hv/hcatalog/hcatalog-pig-adapter/target/tmp/scratchdir is not a valid 
 DFS filename.
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
   at 
 org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:594)
   at 
 org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:552)
   at 
 org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:504)
   at 
 org.apache.hive.hcatalog.pig.TestHCatLoaderEncryption.setup(TestHCatLoaderEncryption.java:185)
 {noformat}
 We need to sanitize HiveConf objects with 
 WindowsPathUtil.convertPathsFromWindowsToHdfs when running under Windows, before 
 we use them to instantiate a SessionState/Driver.
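 A minimal sketch of that fix direction. The utility class and method are the ones 
 named above; the package (assumed here to be org.apache.hadoop.hive.ql) may differ 
 in the actual tree:
 {code}
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.ql.WindowsPathUtil; // package assumed
 import org.apache.hadoop.hive.ql.session.SessionState;
 import org.apache.hadoop.util.Shell;

 public class TestSetupSketch {
   static SessionState startForTest() {
     HiveConf conf = new HiveConf();
     if (Shell.WINDOWS) {
       // Rewrite D:/... style local paths into valid HDFS paths before
       // SessionState tries to create scratch dirs from them.
       WindowsPathUtil.convertPathsFromWindowsToHdfs(conf);
     }
     return SessionState.start(new SessionState(conf));
   }
 }
 {code}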



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-8907) Partition Condition Remover doesn't remove conditions involving cast on partition column

2015-04-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-8907:
--

Assignee: Ashutosh Chauhan

 Partition Condition Remover doesn't remove conditions involving cast on 
 partition column
 

 Key: HIVE-8907
 URL: https://issues.apache.org/jira/browse/HIVE-8907
 Project: Hive
  Issue Type: Improvement
  Components: Logical Optimizer
Affects Versions: 0.14.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan

 e.g,
 {code}
 create table partition_test_partitioned(key string, value string) partitioned 
 by (dt string)
  explain select * from partition_test_partitioned where cast(dt as double) 
 =100.0 and cast(dt as double) = 102.0
 {code}
 For queries like the above, although {{PartitionPruner}} is able to prune 
 partitions correctly, the filter is still not optimized away by PCR, even though 
 it could be.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10304) Add deprecation message to HiveCLI

2015-04-10 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490652#comment-14490652
 ] 

Xuefu Zhang commented on HIVE-10304:


LGTM. Could you add "Warning: " at the beginning of the message?

 Add deprecation message to HiveCLI
 --

 Key: HIVE-10304
 URL: https://issues.apache.org/jira/browse/HIVE-10304
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Affects Versions: 1.1.0
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-10304.patch


 As Beeline is now the recommended command line tool for Hive, we should add a 
 message to HiveCLI to indicate that it is deprecated and redirect users to 
 Beeline.
 This is not suggesting that HiveCLI be removed for now; it is just a helpful 
 pointer so users know to focus their attention on Beeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10304) Add deprecation message to HiveCLI

2015-04-10 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-10304:
-
Attachment: HIVE-10304.2.patch

Address review comments

 Add deprecation message to HiveCLI
 --

 Key: HIVE-10304
 URL: https://issues.apache.org/jira/browse/HIVE-10304
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Affects Versions: 1.1.0
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-10304.2.patch, HIVE-10304.patch


 As Beeline is now the recommended command line tool for Hive, we should add a 
 message to HiveCLI to indicate that it is deprecated and redirect users to 
 Beeline.
 This is not suggesting that HiveCLI be removed for now; it is just a helpful 
 pointer so users know to focus their attention on Beeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10242) ACID: insert overwrite prevents create table command

2015-04-10 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-10242:
--
Attachment: HIVE-10242.2.patch

 ACID: insert overwrite prevents create table command
 

 Key: HIVE-10242
 URL: https://issues.apache.org/jira/browse/HIVE-10242
 Project: Hive
  Issue Type: Bug
  Components: Transactions
Affects Versions: 1.0.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Attachments: HIVE-10242.2.patch, HIVE-10242.patch


 1. insert overwrite table DB.T1 select ... from T2: this takes an X lock on 
 DB.T1 and an S lock on T2.
 The X lock makes sense because we don't want anyone reading T1 while it's 
 overwritten. The S lock on T2 prevents it from being dropped while the query is 
 in progress.
 2. create table DB.T3: takes an S lock on DB.
 This S lock gets blocked by the X lock on T1. The S lock prevents the DB from 
 being dropped while the create table is executed.
 If the insert statement is long running, this blocks DDL ops on the same 
 database.  This is a usability issue.  
 There is no good reason why an X lock on a table within a DB and an S lock on 
 the DB should be in conflict.  
 (This is different from a situation where the X lock is on a partition and the 
 S lock is on the table to which this partition belongs.  There it makes sense.  
 Basically there is no SQL way to address all tables in a DB, but you can 
 easily refer to all partitions of a table.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10304) Add deprecation message to HiveCLI

2015-04-10 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490693#comment-14490693
 ] 

Xuefu Zhang commented on HIVE-10304:


+1

 Add deprecation message to HiveCLI
 --

 Key: HIVE-10304
 URL: https://issues.apache.org/jira/browse/HIVE-10304
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Affects Versions: 1.1.0
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-10304.2.patch, HIVE-10304.patch


 As Beeline is now the recommended command line tool for Hive, we should add a 
 message to HiveCLI to indicate that it is deprecated and redirect users to 
 Beeline.
 This is not suggesting that HiveCLI be removed for now; it is just a helpful 
 pointer so users know to focus their attention on Beeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10036) Writing ORC format big table causes OOM - too many fixed sized stream buffers

2015-04-10 Thread Selina Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Selina Zhang updated HIVE-10036:

Attachment: HIVE-10036.5.patch

Thanks Mithun and Prasanth! Uploaded modified patch. 

 Writing ORC format big table causes OOM - too many fixed sized stream buffers
 -

 Key: HIVE-10036
 URL: https://issues.apache.org/jira/browse/HIVE-10036
 Project: Hive
  Issue Type: Improvement
Reporter: Selina Zhang
Assignee: Selina Zhang
 Attachments: HIVE-10036.1.patch, HIVE-10036.2.patch, 
 HIVE-10036.3.patch, HIVE-10036.5.patch


 The ORC writer keeps multiple output streams for each column. Each output stream 
 is allocated a fixed-size ByteBuffer (configurable, default 256K). For a big 
 table, the memory cost is unbearable. Especially when HCatalog dynamic 
 partitioning is involved, several hundred files may be open and being written at 
 the same time (the same problem exists for FileSinkOperator). 
 The global ORC memory manager controls the buffer size, but it only kicks in 
 at 5000-row intervals. An enhancement could be made there, but the problem is 
 that reducing the buffer size introduces worse compression and more IOs in the 
 read path. Sacrificing read performance is never a good choice. 
 I changed the fixed-size ByteBuffer to a dynamically growing buffer bounded by 
 the existing configurable buffer size. Most of the streams do not need a large 
 buffer, so performance improved significantly. Compared to Facebook's hive-dwrf, 
 I measured a 2x performance gain with this fix. 
 Solving OOM for ORC completely may need a lot of effort, but this is 
 definitely low-hanging fruit. 
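 For illustration, a hedged sketch of such a dynamically growing buffer (not the 
 attached patch; class and field names are made up):
 {code}
 import java.nio.ByteBuffer;

 public class GrowableStreamBuffer {
   private static final int INITIAL_CAPACITY = 4 * 1024;
   private final int maxCapacity; // e.g. the existing 256K default
   private ByteBuffer buf;

   public GrowableStreamBuffer(int maxCapacity) {
     this.maxCapacity = maxCapacity;
     this.buf = ByteBuffer.allocate(Math.min(INITIAL_CAPACITY, maxCapacity));
   }

   public void put(byte[] bytes, int offset, int length) {
     if (buf.remaining() < length) {
       grow(buf.position() + length);
     }
     buf.put(bytes, offset, length);
   }

   private void grow(int needed) {
     // Double until the pending write fits, never exceeding the configured
     // bound; a real writer would spill to the compressed output at the bound.
     int newCapacity = buf.capacity();
     while (newCapacity < needed && newCapacity < maxCapacity) {
       newCapacity = Math.min(newCapacity * 2, maxCapacity);
     }
     if (needed > newCapacity) {
       throw new IllegalStateException("write exceeds configured buffer size");
     }
     ByteBuffer bigger = ByteBuffer.allocate(newCapacity);
     buf.flip();
     bigger.put(buf);
     buf = bigger;
   }
 }
 {code}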



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10273) Union with partition tables which have no data fails with NPE

2015-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490698#comment-14490698
 ] 

Hive QA commented on HIVE-10273:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12724635/HIVE-10273.3.patch

{color:red}ERROR:{color} -1 due to 46 failed/errored test(s), 8672 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_annotate_stats_join
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join32
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_correlationoptimizer6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input26
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_cond_pushdown_unqual2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_cond_pushdown_unqual4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_metadataonly1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_nullgroup5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_partition_boolexpr
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_union_view
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_smb_mapjoin9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_percentile_approx_23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union30
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_lateralview
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_view
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_reduce_deduplicate
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_join_nonexistent_part
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join32
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucketmapjoin1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join_view
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_sample6
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_view
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testNewConnectionConfiguration
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp
org.apache.hive.spark.client.TestSparkClient.testSyncRpc
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3367/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3367/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3367/

Messages:
{noformat}
Executing 

[jira] [Commented] (HIVE-7150) FileInputStream is not closed in HiveConnection#getHttpClient()

2015-04-10 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-7150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490705#comment-14490705
 ] 

Gabor Liptak commented on HIVE-7150:


Please review the code. Thanks

 FileInputStream is not closed in HiveConnection#getHttpClient()
 ---

 Key: HIVE-7150
 URL: https://issues.apache.org/jira/browse/HIVE-7150
 Project: Hive
  Issue Type: Bug
Reporter: Ted Yu
  Labels: jdbc
 Fix For: 1.2.0

 Attachments: HIVE-7150.patch


 Here is the related code:
 {code}
 sslTrustStore.load(new FileInputStream(sslTrustStorePath),
 sslTrustStorePassword.toCharArray());
 {code}
 The FileInputStream is not closed upon returning from the method.
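 One hedged way to fix the leak, reusing the variables from the snippet above 
 (assumes Java 7+ try-with-resources):
 {code}
 try (FileInputStream fis = new FileInputStream(sslTrustStorePath)) {
   // The stream is closed automatically, even if KeyStore.load throws.
   sslTrustStore.load(fis, sslTrustStorePassword.toCharArray());
 }
 {code}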



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-10308) Vectorization execution throws java.lang.IllegalArgumentException: Unsupported complex type: MAP

2015-04-10 Thread Selina Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Selina Zhang updated HIVE-10308:

Attachment: HIVE-10308.1.patch

 Vectorization execution throws java.lang.IllegalArgumentException: 
 Unsupported complex type: MAP
 

 Key: HIVE-10308
 URL: https://issues.apache.org/jira/browse/HIVE-10308
 Project: Hive
  Issue Type: Bug
  Components: Vectorization
Affects Versions: 0.14.0, 0.13.1, 1.2.0, 1.1.0
Reporter: Selina Zhang
Assignee: Selina Zhang
 Attachments: HIVE-10308.1.patch


 Steps to reproduce:
 
 CREATE TABLE test_orc (a INT, b MAP&lt;INT, STRING&gt;) STORED AS ORC;
 INSERT OVERWRITE TABLE test_orc SELECT 1, MAP(1, "one", 2, "two") FROM src 
 LIMIT 1;
 CREATE TABLE test(key INT) ;
 INSERT OVERWRITE TABLE test SELECT 1 FROM src LIMIT 1;
 set hive.vectorized.execution.enabled=true;
 set hive.auto.convert.join=false;
 select l.key from test l left outer join test_orc r on (l.key= r.a) where r.a 
 is not null;
 Stack trace:
 
 Caused by: java.lang.IllegalArgumentException: Unsupported complex type: MAP
   at 
 org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory.genVectorExpressionWritable(VectorExpressionWriterFactory.java:456)
   at 
 org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpressionWriterFactory.processVectorInspector(VectorExpressionWriterFactory.java:1191)
   at 
 org.apache.hadoop.hive.ql.exec.vector.VectorReduceSinkOperator.initializeOp(VectorReduceSinkOperator.java:58)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:362)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:481)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:438)
   at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.initializeMapOperator(MapOperator.java:442)
   at 
 org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:198)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-3299) Create UDF DAYNAME(date)

2015-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490750#comment-14490750
 ] 

Hive QA commented on HIVE-3299:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12724305/HIVE-3299.5.patch

{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 8674 tests 
executed
*Failed tests:*
{noformat}
TestMinimrCliDriver-bucketmapjoin6.q-constprog_partitioner.q-infer_bucket_sort_dyn_part.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-external_table_with_space_in_location_path.q-infer_bucket_sort_merge.q-auto_sortmerge_join_16.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-groupby2.q-import_exported_table.q-bucketizedhiveinputformat.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-index_bitmap3.q-stats_counter_partitioned.q-temp_table_external.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_map_operators.q-join1.q-bucketmapjoin7.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_num_buckets.q-disable_merge_for_bucketing.q-uber_reduce.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-infer_bucket_sort_reducers_power_two.q-scriptfile1.q-scriptfile1_win.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-leftsemijoin_mr.q-load_hdfs_file_with_space_in_the_name.q-root_dir_external_table.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-list_bucket_dml_10.q-bucket_num_reducers.q-bucket6.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-load_fs2.q-file_with_header_footer.q-ql_rewrite_gbtoidx_cbo_1.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-parallel_orderby.q-reduce_deduplicate.q-ql_rewrite_gbtoidx_cbo_2.q-and-1-more
 - did not produce a TEST-*.xml file
TestMinimrCliDriver-ql_rewrite_gbtoidx.q-smb_mapjoin_8.q - did not produce a 
TEST-*.xml file
TestMinimrCliDriver-schemeAuthority2.q-bucket4.q-input16_cc.q-and-1-more - did 
not produce a TEST-*.xml file
org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testMetastoreProxyUser
org.apache.hadoop.hive.thrift.TestHadoop20SAuthBridge.testSaslWithHiveMetaStore
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testNewConnectionConfiguration
org.apache.hive.spark.client.TestSparkClient.testSyncRpc
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3368/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3368/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3368/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 18 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12724305 - PreCommit-HIVE-TRUNK-Build

 Create UDF  DAYNAME(date)
 -

 Key: HIVE-3299
 URL: https://issues.apache.org/jira/browse/HIVE-3299
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.9.0
Reporter: Namitha Babychan
Assignee: Alexander Pivovarov
  Labels: patch
 Attachments: HIVE-3299.1.patch.txt, HIVE-3299.2.patch, 
 HIVE-3299.3.patch, HIVE-3299.4.patch, HIVE-3299.5.patch, HIVE-3299.patch.txt, 
 Hive-3299_Testcase.doc, udf_dayname.q, udf_dayname.q.out


 dayname(date/timestamp/string)
 Returns the name of the weekday for date. The language used for the name is 
 English.
 select dayname('2015-04-08');
 Wednesday
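 A minimal sketch of the described semantics (illustrative only, not the attached 
 UDF implementation):
 {code}
 import java.text.ParseException;
 import java.text.SimpleDateFormat;
 import java.util.Locale;

 public class DayNameSketch {
   public static String dayName(String date) throws ParseException {
     SimpleDateFormat parser = new SimpleDateFormat("yyyy-MM-dd", Locale.ENGLISH);
     // "EEEE" formats the full English weekday name, e.g. "Wednesday".
     return new SimpleDateFormat("EEEE", Locale.ENGLISH).format(parser.parse(date));
   }

   public static void main(String[] args) throws ParseException {
     System.out.println(dayName("2015-04-08")); // prints Wednesday
   }
 }
 {code}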



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9917) After HIVE-3454 is done, make int to timestamp conversion configurable

2015-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14490751#comment-14490751
 ] 

Hive QA commented on HIVE-9917:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12724311/HIVE-9917.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3369/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/3369/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-3369/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ 
PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-3369/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'ql/src/test/results/clientpositive/show_functions.q.out'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20S/target 
shims/0.23/target shims/aggregator/target shims/common/target 
shims/scheduler/target packaging/target hbase-handler/target testutils/target 
jdbc/target metastore/target itests/target itests/thirdparty 
itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target 
itests/hive-unit-hadoop2/target itests/hive-minikdc/target 
itests/hive-jmh/target itests/hive-unit/target itests/custom-serde/target 
itests/util/target itests/qtest-spark/target hcatalog/target 
hcatalog/core/target hcatalog/streaming/target 
hcatalog/server-extensions/target hcatalog/hcatalog-pig-adapter/target 
hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target 
accumulo-handler/target hwi/target common/target common/src/gen 
spark-client/target service/target contrib/target serde/target beeline/target 
odbc/target cli/target ql/dependency-reduced-pom.xml ql/target 
ql/src/test/results/clientpositive/udf_dayname.q.out 
ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFDayName.java 
ql/src/test/queries/clientpositive/udf_dayname.q 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFDayName.java
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1672809.

At revision 1672809.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12724311 - PreCommit-HIVE-TRUNK-Build

 After HIVE-3454 is done, make int to timestamp conversion configurable
 --

 Key: HIVE-9917
 URL: https://issues.apache.org/jira/browse/HIVE-9917
 Project: Hive
  Issue Type: Improvement
Reporter: Aihua Xu
Assignee: Aihua Xu
 Attachments: HIVE-9917.patch


 After HIVE-3454 is fixed, we will have the correct behavior when converting int 
 to timestamp. Since customers have been relying on the incorrect behavior for so 
 long, it is better to make it configurable, so that in one release it defaults 
 to the old/inconsistent behavior and the next release defaults to the new behavior.