[jira] [Assigned] (HIVE-11484) Fix ObjectInspector for Char and VarChar
[ https://issues.apache.org/jira/browse/HIVE-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Barr reassigned HIVE-11484: -- Assignee: Deepak Barr (was: Rajat Khandelwal) > Fix ObjectInspector for Char and VarChar > > > Key: HIVE-11484 > URL: https://issues.apache.org/jira/browse/HIVE-11484 > Project: Hive > Issue Type: Bug > Components: Serializers/Deserializers >Reporter: Amareshwari Sriramadasu >Assignee: Deepak Barr > > The creation of HiveChar and Varchar is not happening through ObjectInspector. > Here is the fix we pushed internally: > https://github.com/InMobi/hive/commit/fe95c7850e7130448209141155f28b25d3504216 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11483) Add encoding and decoding for query string config
[ https://issues.apache.org/jira/browse/HIVE-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajat Khandelwal updated HIVE-11483: Attachment: HIVE-11483.02.patch > Add encoding and decoding for query string config > - > > Key: HIVE-11483 > URL: https://issues.apache.org/jira/browse/HIVE-11483 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Reporter: Amareshwari Sriramadasu >Assignee: Rajat Khandelwal > Attachments: HIVE-11483.01.patch, HIVE-11483.02.patch > > > We have seen some queries in production where some of the literals passed in > the query have control characters, which result in exception when query > string is set in the job xml. > Proposing a solution to encode the query string in configuration and provide > getters decoded string. > Here is a commit in a forked repo : > https://github.com/InMobi/hive/commit/2faf5761191fa3103a0d779fde584d494ed75bf5 > Suggestions are welcome on the solution. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
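The encoding approach proposed above can be sketched with a Base64 round-trip: control characters in a query literal survive storage in the job XML because only ASCII reaches the configuration. This is a minimal illustration, not the actual HIVE-11483 patch; the class and method names are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class QueryStringCodec {
    // Encode the query so control characters survive storage in the job XML.
    static String encode(String query) {
        return Base64.getEncoder().encodeToString(query.getBytes(StandardCharsets.UTF_8));
    }

    // Getter side: decode back to the original query string.
    static String decode(String stored) {
        return new String(Base64.getDecoder().decode(stored), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // A literal containing a control character (\u0001), as seen in production.
        String query = "SELECT * FROM t WHERE col = 'a\u0001b'";
        String stored = encode(query);
        System.out.println(decode(stored).equals(query)); // true
    }
}
```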
[jira] [Commented] (HIVE-11483) Add encoding and decoding for query string config
[ https://issues.apache.org/jira/browse/HIVE-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173387#comment-15173387 ] Rajat Khandelwal commented on HIVE-11483: - Taking patch from reviewboard and attaching > Add encoding and decoding for query string config > - > > Key: HIVE-11483 > URL: https://issues.apache.org/jira/browse/HIVE-11483 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Reporter: Amareshwari Sriramadasu >Assignee: Rajat Khandelwal > Attachments: HIVE-11483.01.patch, HIVE-11483.02.patch > > > We have seen some queries in production where some of the literals passed in > the query have control characters, which result in exception when query > string is set in the job xml. > Proposing a solution to encode the query string in configuration and provide > getters decoded string. > Here is a commit in a forked repo : > https://github.com/InMobi/hive/commit/2faf5761191fa3103a0d779fde584d494ed75bf5 > Suggestions are welcome on the solution. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HIVE-13188) Allow users of RetryingThriftClient to close transport
[ https://issues.apache.org/jira/browse/HIVE-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-13188 started by Rajat Khandelwal. --- > Allow users of RetryingThriftClient to close transport > -- > > Key: HIVE-13188 > URL: https://issues.apache.org/jira/browse/HIVE-13188 > Project: Hive > Issue Type: Task >Reporter: Rajat Khandelwal >Assignee: Rajat Khandelwal > > RetryingThriftCLIClient opens a TTransport and leaves it open. there should > be a way to close that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
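The fix amounts to exposing a close path for the wrapped transport. A minimal sketch with hypothetical stand-in types (the real client wraps a Thrift TTransport); implementing AutoCloseable lets callers use try-with-resources instead of leaking the socket:

```java
// Hypothetical stand-in for Thrift's TTransport interface.
interface Transport {
    boolean isOpen();
    void close();
}

// Sketch of a retrying client that can release its transport.
class RetryingClient implements AutoCloseable {
    private final Transport transport;

    RetryingClient(Transport transport) {
        this.transport = transport;
    }

    @Override
    public void close() {
        if (transport != null && transport.isOpen()) {
            transport.close(); // release the connection explicitly
        }
    }
}

public class TransportCloseDemo {
    public static void main(String[] args) {
        final boolean[] open = {true};
        Transport t = new Transport() {
            public boolean isOpen() { return open[0]; }
            public void close() { open[0] = false; }
        };
        try (RetryingClient client = new RetryingClient(t)) {
            // use the client...
        }
        System.out.println(open[0]); // false: transport was closed on exit
    }
}
```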
[jira] [Commented] (HIVE-13188) Allow users of RetryingThriftClient to close transport
[ https://issues.apache.org/jira/browse/HIVE-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173372#comment-15173372 ] Rajat Khandelwal commented on HIVE-13188: - Created https://reviews.apache.org/r/44201/ > Allow users of RetryingThriftClient to close transport > -- > > Key: HIVE-13188 > URL: https://issues.apache.org/jira/browse/HIVE-13188 > Project: Hive > Issue Type: Task >Reporter: Rajat Khandelwal >Assignee: Rajat Khandelwal > > RetryingThriftCLIClient opens a TTransport and leaves it open. there should > be a way to close that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
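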
[jira] [Commented] (HIVE-13146) OrcFile table property values are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-13146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173352#comment-15173352 ] Hive QA commented on HIVE-13146: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12790505/HIVE-13146.3.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 9756 tests executed *Failed tests:* {noformat} TestSSL - did not produce a TEST-*.xml file TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import org.apache.hive.jdbc.TestMultiSessionsHS2WithLocalClusterSpark.testSparkQuery {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7134/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7134/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7134/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically 
generated. ATTACHMENT ID: 12790505 - PreCommit-HIVE-TRUNK-Build > OrcFile table property values are case sensitive > > > Key: HIVE-13146 > URL: https://issues.apache.org/jira/browse/HIVE-13146 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 1.2.1 >Reporter: Andrew Sears >Assignee: Yongzhi Chen >Priority: Minor > Attachments: HIVE-13146.1.patch, HIVE-13146.2.patch, > HIVE-13146.3.patch > > > In Hive v1.2.1.2.3, with Tez , create an external table with compression > SNAPPY value marked as lower case. Table is created successfully. Insert > data into table fails with no enum constant error. > CREATE EXTERNAL TABLE mydb.mytable > (id int) > PARTITIONED BY (business_date date) > STORED AS ORC > LOCATION > '/data/mydb/mytable' > TBLPROPERTIES ( > 'orc.compress'='snappy'); > set hive.exec.dynamic.partition=true; > set hive.exec.dynamic.partition.mode=nonstrict; > INSERT OVERWRITE mydb.mytable PARTITION (business_date) > SELECT * from mydb.sourcetable; > Caused by: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hive.ql.io.orc.CompressionKind.snappy > at java.lang.Enum.valueOf(Enum.java:238) > at > org.apache.hadoop.hive.ql.io.orc.CompressionKind.valueOf(CompressionKind.java:25) > Constant SNAPPY needs to be uppercase in definition to fix. Case should be > agnostic or throw error on creation of table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
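The case-insensitivity fix suggested above can be illustrated by normalizing the property value before the enum lookup, so `'orc.compress'='snappy'` and `'SNAPPY'` both resolve. This is a sketch with a local enum; Hive's actual CompressionKind lives in org.apache.hadoop.hive.ql.io.orc.

```java
import java.util.Locale;

public class OrcCompression {
    enum CompressionKind { NONE, ZLIB, SNAPPY, LZO }

    // Enum.valueOf is case sensitive; uppercase the table property first so
    // lowercase values no longer throw IllegalArgumentException on insert.
    static CompressionKind fromProperty(String value) {
        return CompressionKind.valueOf(value.trim().toUpperCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        System.out.println(fromProperty("snappy")); // SNAPPY
    }
}
```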
[jira] [Updated] (HIVE-13169) HiveServer2: Support delegation token based connection when using http transport
[ https://issues.apache.org/jira/browse/HIVE-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-13169: - Attachment: HIVE-13169.5.patch 5.patch - Fixing unit test > HiveServer2: Support delegation token based connection when using http > transport > > > Key: HIVE-13169 > URL: https://issues.apache.org/jira/browse/HIVE-13169 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Thejas M Nair > Attachments: HIVE-13169.1.patch, HIVE-13169.2.patch, > HIVE-13169.3.patch, HIVE-13169.3.patch, HIVE-13169.4.patch, HIVE-13169.5.patch > > > HIVE-5155 introduced support for delegation token based connection. However, > it was intended for tcp transport mode. We need to have similar mechanisms > for http transport. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13169) HiveServer2: Support delegation token based connection when using http transport
[ https://issues.apache.org/jira/browse/HIVE-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173199#comment-15173199 ] Hive QA commented on HIVE-13169: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12790504/HIVE-13169.4.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 9763 tests executed *Failed tests:* {noformat} TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import org.apache.hive.jdbc.TestSSL.testSSLVersion org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForDBTokenStore {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7133/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7133/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7133/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically 
generated. ATTACHMENT ID: 12790504 - PreCommit-HIVE-TRUNK-Build > HiveServer2: Support delegation token based connection when using http > transport > > > Key: HIVE-13169 > URL: https://issues.apache.org/jira/browse/HIVE-13169 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Thejas M Nair > Attachments: HIVE-13169.1.patch, HIVE-13169.2.patch, > HIVE-13169.3.patch, HIVE-13169.3.patch, HIVE-13169.4.patch > > > HIVE-5155 introduced support for delegation token based connection. However, > it was intended for tcp transport mode. We need to have similar mechanisms > for http transport. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12995) LLAP: Synthetic file ids need collision checks
[ https://issues.apache.org/jira/browse/HIVE-12995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173098#comment-15173098 ] Sergey Shelukhin commented on HIVE-12995: - Ok, now I'm working on this btw :) > LLAP: Synthetic file ids need collision checks > -- > > Key: HIVE-12995 > URL: https://issues.apache.org/jira/browse/HIVE-12995 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Gopal V >Assignee: Sergey Shelukhin > > LLAP synthetic file ids do not have any way of checking whether a collision > occurs other than a data-error. > Synthetic file-ids have only been used with unit tests so far - but they will > be needed to add cache mechanisms to non-HDFS filesystems. > In case of Synthetic file-ids, it is recommended that we track the full-tuple > (path, mtime, len) in the cache so that a cache-hit for the synthetic file-id > can be compared against the parameters & only accepted if those match. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
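The full-tuple tracking recommended above can be sketched as follows. All names are hypothetical, and the synthetic-id function is deliberately weak; the point is that a hit is accepted only when the stored (path, mtime, len) tuple matches the caller's parameters, so an id collision degrades to a cache miss rather than a data error.

```java
import java.util.HashMap;
import java.util.Map;

public class SyntheticIdCache {
    // The full tuple is kept alongside the synthetic id so collisions are detectable.
    static final class FileKey {
        final String path; final long mtime; final long len;
        FileKey(String path, long mtime, long len) {
            this.path = path; this.mtime = mtime; this.len = len;
        }
        long syntheticId() {
            return (path.hashCode() * 31L) ^ mtime ^ len; // illustrative hash only
        }
        boolean matches(String p, long m, long l) {
            return path.equals(p) && mtime == m && len == l;
        }
    }

    private final Map<Long, FileKey> cache = new HashMap<>();

    void put(FileKey key) {
        cache.put(key.syntheticId(), key);
    }

    // A hit counts only if the stored tuple matches; otherwise treat it as a miss.
    boolean isValidHit(long id, String path, long mtime, long len) {
        FileKey k = cache.get(id);
        return k != null && k.matches(path, mtime, len);
    }

    public static void main(String[] args) {
        SyntheticIdCache cache = new SyntheticIdCache();
        FileKey key = new FileKey("/warehouse/t/part-0", 1456789000L, 4096L);
        cache.put(key);
        System.out.println(cache.isValidHit(key.syntheticId(), "/warehouse/t/part-0", 1456789000L, 4096L)); // true
        System.out.println(cache.isValidHit(key.syntheticId(), "/warehouse/t/part-0", 9L, 4096L)); // false: stale mtime rejected
    }
}
```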
[jira] [Commented] (HIVE-10632) Make sure TXN_COMPONENTS gets cleaned up if table is dropped before compaction.
[ https://issues.apache.org/jira/browse/HIVE-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173067#comment-15173067 ] Wei Zheng commented on HIVE-10632: -- You're right, as HIVE-12064 went in, only a legit ACID table can have tblproperty 'transactional' set to true. So that should suffice to use that info to judge. I will also remove the unnecessary part for AcidUtils.isAcidTable, since that's no longer needed there as well. > Make sure TXN_COMPONENTS gets cleaned up if table is dropped before > compaction. > --- > > Key: HIVE-10632 > URL: https://issues.apache.org/jira/browse/HIVE-10632 > Project: Hive > Issue Type: Bug > Components: Metastore, Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Wei Zheng >Priority: Critical > Attachments: HIVE-10632.1.patch, HIVE-10632.2.patch, > HIVE-10632.3.patch, HIVE-10632.4.patch, HIVE-10632.5.patch > > > The compaction process will clean up entries in TXNS, > COMPLETED_TXN_COMPONENTS, TXN_COMPONENTS. If the table/partition is dropped > before compaction is complete there will be data left in these tables. Need > to investigate if there are other situations where this may happen and > address it. > see HIVE-10595 for additional info -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12994) Implement support for NULLS FIRST/NULLS LAST
[ https://issues.apache.org/jira/browse/HIVE-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173042#comment-15173042 ] Ashutosh Chauhan commented on HIVE-12994: - Compile side changes look good to me. [~gopalv] Can you help review runtime changes ? > Implement support for NULLS FIRST/NULLS LAST > > > Key: HIVE-12994 > URL: https://issues.apache.org/jira/browse/HIVE-12994 > Project: Hive > Issue Type: New Feature > Components: CBO, Parser, Serializers/Deserializers >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-12994.01.patch, HIVE-12994.02.patch, > HIVE-12994.03.patch, HIVE-12994.04.patch, HIVE-12994.05.patch, > HIVE-12994.06.patch, HIVE-12994.06.patch, HIVE-12994.07.patch, > HIVE-12994.08.patch, HIVE-12994.09.patch, HIVE-12994.10.patch, > HIVE-12994.11.patch, HIVE-12994.patch > > > From SQL:2003, the NULLS FIRST and NULLS LAST options can be used to > determine whether nulls appear before or after non-null data values when the > ORDER BY clause is used. > SQL standard does not specify the behavior by default. Currently in Hive, > null values sort as if lower than any non-null value; that is, NULLS FIRST is > the default for ASC order, and NULLS LAST for DESC order. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
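Java's Comparator API mirrors the semantics described in the issue and can illustrate them: Hive's current default sorts nulls as lower than any non-null value, i.e. NULLS FIRST for ASC and NULLS LAST for DESC.

```java
import java.util.Arrays;
import java.util.Comparator;

public class NullsOrdering {
    public static void main(String[] args) {
        Integer[] vals = {3, null, 1};

        // ASC with nulls sorting low: nulls come first.
        Arrays.sort(vals, Comparator.nullsFirst(Comparator.naturalOrder()));
        System.out.println(Arrays.toString(vals)); // [null, 1, 3]

        // DESC with nulls sorting low: nulls come last.
        Arrays.sort(vals, Comparator.nullsLast(Comparator.<Integer>naturalOrder().reversed()));
        System.out.println(Arrays.toString(vals)); // [3, 1, null]
    }
}
```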
[jira] [Updated] (HIVE-11675) make use of file footer PPD API in ETL strategy or separate strategy
[ https://issues.apache.org/jira/browse/HIVE-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-11675: Attachment: HIVE-11675.08.patch > make use of file footer PPD API in ETL strategy or separate strategy > > > Key: HIVE-11675 > URL: https://issues.apache.org/jira/browse/HIVE-11675 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-11675.01.patch, HIVE-11675.02.patch, > HIVE-11675.03.patch, HIVE-11675.04.patch, HIVE-11675.05.patch, > HIVE-11675.06.patch, HIVE-11675.07.patch, HIVE-11675.08.patch, > HIVE-11675.patch > > > Need to take a look at the best flow. It won't be much different if we do > filtering metastore call for each partition. So perhaps we'd need the custom > sync point/batching after all. > Or we can make it opportunistic and not fetch any footers unless it can be > pushed down to metastore or fetched from local cache, that way the only slow > threaded op is directory listings -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13151) Clean up UGI objects in FileSystem cache for transactions
[ https://issues.apache.org/jira/browse/HIVE-13151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13151: - Attachment: HIVE-13151.3..patch > Clean up UGI objects in FileSystem cache for transactions > - > > Key: HIVE-13151 > URL: https://issues.apache.org/jira/browse/HIVE-13151 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > Attachments: HIVE-13151.1.patch, HIVE-13151.2.patch, > HIVE-13151.3..patch > > > One issue with FileSystem.CACHE is that it does not clean itself. The key in > that cache includes UGI object. When new UGI objects are created and used > with the FileSystem api, new entries get added to the cache. > We need to manually clean up those UGI objects once they are no longer in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
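The growth pattern described above can be reproduced with any map whose keys compare by identity, which is effectively how distinct UGI instances behave even for the same user; in Hadoop the usual cleanup hook is FileSystem.closeAllForUGI(ugi). A minimal stand-in sketch (the User class is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class IdentityKeyedCache {
    // Stand-in for a UGI-style key that uses default (identity) equals/hashCode.
    static final class User {
        final String name;
        User(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        Map<User, String> cache = new HashMap<>();
        cache.put(new User("hive"), "fs-1");
        cache.put(new User("hive"), "fs-2"); // same logical user, but a new entry

        // The cache never collapses the two keys, so it grows until entries
        // are removed explicitly once a UGI is no longer in use.
        System.out.println(cache.size()); // 2
    }
}
```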
[jira] [Updated] (HIVE-13151) Clean up UGI objects in FileSystem cache for transactions
[ https://issues.apache.org/jira/browse/HIVE-13151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13151: - Status: Patch Available (was: Open) > Clean up UGI objects in FileSystem cache for transactions > - > > Key: HIVE-13151 > URL: https://issues.apache.org/jira/browse/HIVE-13151 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > Attachments: HIVE-13151.1.patch, HIVE-13151.2.patch, > HIVE-13151.3..patch > > > One issue with FileSystem.CACHE is that it does not clean itself. The key in > that cache includes UGI object. When new UGI objects are created and used > with the FileSystem api, new entries get added to the cache. > We need to manually clean up those UGI objects once they are no longer in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13151) Clean up UGI objects in FileSystem cache for transactions
[ https://issues.apache.org/jira/browse/HIVE-13151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13151: - Attachment: (was: HIVE-13151.3..patch) > Clean up UGI objects in FileSystem cache for transactions > - > > Key: HIVE-13151 > URL: https://issues.apache.org/jira/browse/HIVE-13151 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > Attachments: HIVE-13151.1.patch, HIVE-13151.2.patch > > > One issue with FileSystem.CACHE is that it does not clean itself. The key in > that cache includes UGI object. When new UGI objects are created and used > with the FileSystem api, new entries get added to the cache. > We need to manually clean up those UGI objects once they are no longer in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13151) Clean up UGI objects in FileSystem cache for transactions
[ https://issues.apache.org/jira/browse/HIVE-13151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13151: - Status: Open (was: Patch Available) > Clean up UGI objects in FileSystem cache for transactions > - > > Key: HIVE-13151 > URL: https://issues.apache.org/jira/browse/HIVE-13151 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > Attachments: HIVE-13151.1.patch, HIVE-13151.2.patch > > > One issue with FileSystem.CACHE is that it does not clean itself. The key in > that cache includes UGI object. When new UGI objects are created and used > with the FileSystem api, new entries get added to the cache. > We need to manually clean up those UGI objects once they are no longer in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13186) ALTER TABLE RENAME should lowercase table name and hdfs location
[ https://issues.apache.org/jira/browse/HIVE-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13186: - Status: Patch Available (was: Open) > ALTER TABLE RENAME should lowercase table name and hdfs location > > > Key: HIVE-13186 > URL: https://issues.apache.org/jira/browse/HIVE-13186 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > Attachments: HIVE-13186.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13186) ALTER TABLE RENAME should lowercase table name and hdfs location
[ https://issues.apache.org/jira/browse/HIVE-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13186: - Attachment: HIVE-13186.1.patch > ALTER TABLE RENAME should lowercase table name and hdfs location > > > Key: HIVE-13186 > URL: https://issues.apache.org/jira/browse/HIVE-13186 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > Attachments: HIVE-13186.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
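A sketch of the normalization the issue title describes: lowercase the new table name and derive the new HDFS location from it so the two stay consistent. The helper names are hypothetical; the real patch touches the metastore's alter-table path.

```java
import java.util.Locale;

public class RenameNormalizer {
    // Lowercase the requested name, matching Hive's case-insensitive identifiers.
    static String normalizeName(String name) {
        return name.toLowerCase(Locale.ROOT);
    }

    // Derive the renamed table's directory from the normalized name.
    static String newLocation(String parentDir, String newName) {
        return parentDir + "/" + normalizeName(newName);
    }

    public static void main(String[] args) {
        System.out.println(newLocation("/warehouse/mydb.db", "MyTable"));
        // /warehouse/mydb.db/mytable
    }
}
```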
[jira] [Commented] (HIVE-13002) Hive object is not thread safe, is shared via a threadlocal and thus should not be passed around too much - part 1
[ https://issues.apache.org/jira/browse/HIVE-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173019#comment-15173019 ] Sergey Shelukhin commented on HIVE-13002: - Spark tests pass for me locally... let's see if this was a fluke or if logs from the run reveal anything... > Hive object is not thread safe, is shared via a threadlocal and thus should > not be passed around too much - part 1 > -- > > Key: HIVE-13002 > URL: https://issues.apache.org/jira/browse/HIVE-13002 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13002.01.patch, HIVE-13002.02.patch, > HIVE-13002.03.patch, HIVE-13002.patch > > > Discovered in some q test run: > {noformat} > TestCliDriver.testCliDriver_insert_values_orig_table:123->runTest:199 > Unexpected exception java.util.ConcurrentModificationException > at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926) > at java.util.HashMap$EntryIterator.next(HashMap.java:966) > at java.util.HashMap$EntryIterator.next(HashMap.java:964) > at > org.apache.hadoop.hive.ql.metadata.Hive.dumpAndClearMetaCallTiming(Hive.java:3412) > at > org.apache.hadoop.hive.ql.Driver.dumpMetaCallTimingWithoutEx(Driver.java:574) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1722) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1342) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1113) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1101) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
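The stack trace points at a HashMap being structurally modified while it is iterated, the classic symptom of a non-thread-safe object shared across threads. The failure mode, and one common remedy, in miniature:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MetaTimingDump {
    public static void main(String[] args) {
        Map<String, Long> timings = new HashMap<>();
        timings.put("get_table", 10L);
        timings.put("get_partitions", 20L);
        try {
            for (Map.Entry<String, Long> e : timings.entrySet()) {
                timings.put("drop_table", 5L); // structural change mid-iteration
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("CME: fail-fast iterator detected the mutation");
        }

        // ConcurrentHashMap iterators are weakly consistent and never throw CME.
        Map<String, Long> safe = new ConcurrentHashMap<>(timings);
        for (Map.Entry<String, Long> e : safe.entrySet()) {
            safe.put("drop_table", 5L);
        }
        System.out.println("done");
    }
}
```

Note that a ConcurrentHashMap only masks the symptom; the patch's approach of not sharing the Hive object across threads in the first place removes the race itself.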
[jira] [Commented] (HIVE-13145) Separate the output path of metrics file of HS2 and HMS
[ https://issues.apache.org/jira/browse/HIVE-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15173016#comment-15173016 ] Hive QA commented on HIVE-13145: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12790488/HIVE-13145.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 9768 tests executed *Failed tests:* {noformat} TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import org.apache.hive.jdbc.TestSSL.testSSLVersion {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7132/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7132/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7132/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 6 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12790488 - PreCommit-HIVE-TRUNK-Build > Separate the output path of metrics file of HS2 and HMS > --- > > Key: HIVE-13145 > URL: https://issues.apache.org/jira/browse/HIVE-13145 > Project: Hive > Issue Type: Improvement > Components: HiveServer2, Metastore >Reporter: Shinichi Yamashita >Assignee: Shinichi Yamashita > Attachments: HIVE-13145.1.patch, HIVE-13145.2.patch > > > The output path of metrics file of HS2 and HMS can define by > {{hive.service.metrics.file.location}} property at present. > When it starts HS2 and HMS by the same server, both metrics is written in the > same file. And when confirming this file, it is difficult to judge which > metrics it is. > Therefore the output path of metrics file of HS2 and HMS is separated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
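The separation proposed above could, for example, derive a per-role file name from the single configured hive.service.metrics.file.location, so HS2 and HMS on the same host write distinct files. The helper is hypothetical, not the actual HIVE-13145 patch:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class MetricsPath {
    // Insert the service role before the file extension of the configured path.
    static Path forRole(String configured, String role) {
        Path base = Paths.get(configured);
        String name = base.getFileName().toString();
        int dot = name.lastIndexOf('.');
        String suffixed = (dot < 0)
                ? name + "-" + role
                : name.substring(0, dot) + "-" + role + name.substring(dot);
        return base.resolveSibling(suffixed);
    }

    public static void main(String[] args) {
        System.out.println(forRole("/var/log/hive/metrics.json", "hiveserver2"));
        System.out.println(forRole("/var/log/hive/metrics.json", "metastore"));
    }
}
```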
[jira] [Commented] (HIVE-13002) Hive object is not thread safe, is shared via a threadlocal and thus should not be passed around too much - part 1
[ https://issues.apache.org/jira/browse/HIVE-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172989#comment-15172989 ] Ashutosh Chauhan commented on HIVE-13002: - Original description is for q test run. Our qtest run is single threaded. Wondering how this issue can show up there? > Hive object is not thread safe, is shared via a threadlocal and thus should > not be passed around too much - part 1 > -- > > Key: HIVE-13002 > URL: https://issues.apache.org/jira/browse/HIVE-13002 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13002.01.patch, HIVE-13002.02.patch, > HIVE-13002.03.patch, HIVE-13002.patch > > > Discovered in some q test run: > {noformat} > TestCliDriver.testCliDriver_insert_values_orig_table:123->runTest:199 > Unexpected exception java.util.ConcurrentModificationException > at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926) > at java.util.HashMap$EntryIterator.next(HashMap.java:966) > at java.util.HashMap$EntryIterator.next(HashMap.java:964) > at > org.apache.hadoop.hive.ql.metadata.Hive.dumpAndClearMetaCallTiming(Hive.java:3412) > at > org.apache.hadoop.hive.ql.Driver.dumpMetaCallTimingWithoutEx(Driver.java:574) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1722) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1342) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1113) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1101) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13002) Hive object is not thread safe, is shared via a threadlocal and thus should not be passed around too much - part 1
[ https://issues.apache.org/jira/browse/HIVE-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172976#comment-15172976 ] Sergey Shelukhin commented on HIVE-13002: - Well, I'm removing all the places except one where it could be multithreaded, so it's more of an overkill than a potential underkill. I nuked the barn so I'm pretty sure I hit the target on its wall somewhere :) The exception is session Hive, but we saw this callstack in tests w/o HS2, so it shouldn't have been the culprit in this case. I am not sure if session Hive can be used in two places, I don't think so... > Hive object is not thread safe, is shared via a threadlocal and thus should > not be passed around too much - part 1 > -- > > Key: HIVE-13002 > URL: https://issues.apache.org/jira/browse/HIVE-13002 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13002.01.patch, HIVE-13002.02.patch, > HIVE-13002.03.patch, HIVE-13002.patch > > > Discovered in some q test run: > {noformat} > TestCliDriver.testCliDriver_insert_values_orig_table:123->runTest:199 > Unexpected exception java.util.ConcurrentModificationException > at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926) > at java.util.HashMap$EntryIterator.next(HashMap.java:966) > at java.util.HashMap$EntryIterator.next(HashMap.java:964) > at > org.apache.hadoop.hive.ql.metadata.Hive.dumpAndClearMetaCallTiming(Hive.java:3412) > at > org.apache.hadoop.hive.ql.Driver.dumpMetaCallTimingWithoutEx(Driver.java:574) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1722) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1342) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1113) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1101) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13002) Hive object is not thread safe, is shared via a threadlocal and thus should not be passed around too much - part 1
[ https://issues.apache.org/jira/browse/HIVE-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172971#comment-15172971 ] Ashutosh Chauhan commented on HIVE-13002: - Then this is a cleanup which *hopefully* will fix the problem : ) > Hive object is not thread safe, is shared via a threadlocal and thus should > not be passed around too much - part 1 > -- > > Key: HIVE-13002 > URL: https://issues.apache.org/jira/browse/HIVE-13002 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13002.01.patch, HIVE-13002.02.patch, > HIVE-13002.03.patch, HIVE-13002.patch > > > Discovered in some q test run: > {noformat} > TestCliDriver.testCliDriver_insert_values_orig_table:123->runTest:199 > Unexpected exception java.util.ConcurrentModificationException > at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926) > at java.util.HashMap$EntryIterator.next(HashMap.java:966) > at java.util.HashMap$EntryIterator.next(HashMap.java:964) > at > org.apache.hadoop.hive.ql.metadata.Hive.dumpAndClearMetaCallTiming(Hive.java:3412) > at > org.apache.hadoop.hive.ql.Driver.dumpMetaCallTimingWithoutEx(Driver.java:574) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1722) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1342) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1113) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1101) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13002) Hive object is not thread safe, is shared via a threadlocal and thus should not be passed around too much - part 1
[ https://issues.apache.org/jira/browse/HIVE-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13002: Attachment: HIVE-13002.03.patch Updated the patch to fix some tests. Not sure why spark stuff all timed out... I will try to repro it locally, or will try to get logs here. > Hive object is not thread safe, is shared via a threadlocal and thus should > not be passed around too much - part 1 > -- > > Key: HIVE-13002 > URL: https://issues.apache.org/jira/browse/HIVE-13002 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13002.01.patch, HIVE-13002.02.patch, > HIVE-13002.03.patch, HIVE-13002.patch > > > Discovered in some q test run: > {noformat} > TestCliDriver.testCliDriver_insert_values_orig_table:123->runTest:199 > Unexpected exception java.util.ConcurrentModificationException > at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926) > at java.util.HashMap$EntryIterator.next(HashMap.java:966) > at java.util.HashMap$EntryIterator.next(HashMap.java:964) > at > org.apache.hadoop.hive.ql.metadata.Hive.dumpAndClearMetaCallTiming(Hive.java:3412) > at > org.apache.hadoop.hive.ql.Driver.dumpMetaCallTimingWithoutEx(Driver.java:574) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1722) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1342) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1113) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1101) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13002) Hive object is not thread safe, is shared via a threadlocal and thus should not be passed around too much - part 1
[ https://issues.apache.org/jira/browse/HIVE-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172966#comment-15172966 ] Sergey Shelukhin commented on HIVE-13002: - No. > Hive object is not thread safe, is shared via a threadlocal and thus should > not be passed around too much - part 1 > -- > > Key: HIVE-13002 > URL: https://issues.apache.org/jira/browse/HIVE-13002 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13002.01.patch, HIVE-13002.02.patch, > HIVE-13002.patch > > > Discovered in some q test run: > {noformat} > TestCliDriver.testCliDriver_insert_values_orig_table:123->runTest:199 > Unexpected exception java.util.ConcurrentModificationException > at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926) > at java.util.HashMap$EntryIterator.next(HashMap.java:966) > at java.util.HashMap$EntryIterator.next(HashMap.java:964) > at > org.apache.hadoop.hive.ql.metadata.Hive.dumpAndClearMetaCallTiming(Hive.java:3412) > at > org.apache.hadoop.hive.ql.Driver.dumpMetaCallTimingWithoutEx(Driver.java:574) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1722) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1342) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1113) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1101) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13176) OutOfMemoryError : GC overhead limit exceeded
[ https://issues.apache.org/jira/browse/HIVE-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13176: -- Attachment: shutdownhook.png fs.png dataNucleus.png > OutOfMemoryError : GC overhead limit exceeded > -- > > Key: HIVE-13176 > URL: https://issues.apache.org/jira/browse/HIVE-13176 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Kavan Suresh >Assignee: Siddharth Seth > Attachments: dataNucleus.png, fs.png, shutdownhook.png > > > Detected leaks while testing hiveserver2 concurrency setup with LLAP. > 2016-02-26T12:50:58,131 ERROR [HiveServer2-Background-Pool: Thread-311030]: > operation.Operation (SQLOperation.java:run(230)) - Error running hive query: > org.apache.hive.service.cli.HiveSQLException: Error while processing > statement: FAILED: Execution Error, return code -101 from > org.apache.hadoop.hive.ql.exec.StatsTask. GC overhead limit exceeded > at > org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:333) > ~[hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:177) > ~[hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] > at > org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:73) > ~[hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:227) > [hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] > at java.security.AccessController.doPrivileged(Native Method) > ~[?:1.8.0_45] > at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_45] > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > [hadoop-common-2.7.1.2.3.5.1-36.jar:?] 
> at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:239) > [hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [?:1.8.0_45] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [?:1.8.0_45] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [?:1.8.0_45] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [?:1.8.0_45] > at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13002) Hive object is not thread safe, is shared via a threadlocal and thus should not be passed around too much - part 1
[ https://issues.apache.org/jira/browse/HIVE-13002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172959#comment-15172959 ] Ashutosh Chauhan commented on HIVE-13002: - Patch looks good. Failures need to be looked at. Do we know how the perf logger was getting shared across threads? Which class was the culprit for caching it? > Hive object is not thread safe, is shared via a threadlocal and thus should > not be passed around too much - part 1 > -- > > Key: HIVE-13002 > URL: https://issues.apache.org/jira/browse/HIVE-13002 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13002.01.patch, HIVE-13002.02.patch, > HIVE-13002.patch > > > Discovered in some q test run: > {noformat} > TestCliDriver.testCliDriver_insert_values_orig_table:123->runTest:199 > Unexpected exception java.util.ConcurrentModificationException > at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926) > at java.util.HashMap$EntryIterator.next(HashMap.java:966) > at java.util.HashMap$EntryIterator.next(HashMap.java:964) > at > org.apache.hadoop.hive.ql.metadata.Hive.dumpAndClearMetaCallTiming(Hive.java:3412) > at > org.apache.hadoop.hive.ql.Driver.dumpMetaCallTimingWithoutEx(Driver.java:574) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1722) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1342) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1113) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1101) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13176) OutOfMemoryError : GC overhead limit exceeded
[ https://issues.apache.org/jira/browse/HIVE-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172928#comment-15172928 ] Siddharth Seth commented on HIVE-13176: --- There are several issues here, and two cases being looked at. In the first case, there are a lot of failed queries, and Driver instances - 35K of them - are retained by the shutdown hook. Looking at the code, I couldn't see any direct leaks, and can only attribute this to failed queries / connections where the client may have failed to close the connection. In the second case, there are two main culprits. The first is the DataNucleus PluginManager, which seems to be retaining a large chunk of the heap. The second is a single FileSystem object which is over 100MB. Part of this is retention of FSStatistics objects - already fixed in a Hadoop jira - and we should look at adding a workaround, i.e. consuming the statistics instance as part of the close. The other part is a large number of entries registered in the deleteOnExit tracker within FileSystem. That, I believe, is something that Hive will need to look at. > OutOfMemoryError : GC overhead limit exceeded > -- > > Key: HIVE-13176 > URL: https://issues.apache.org/jira/browse/HIVE-13176 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Kavan Suresh >Assignee: Siddharth Seth > > Detected leaks while testing hiveserver2 concurrency setup with LLAP. > 2016-02-26T12:50:58,131 ERROR [HiveServer2-Background-Pool: Thread-311030]: > operation.Operation (SQLOperation.java:run(230)) - Error running hive query: > org.apache.hive.service.cli.HiveSQLException: Error while processing > statement: FAILED: Execution Error, return code -101 from > org.apache.hadoop.hive.ql.exec.StatsTask. 
GC overhead limit exceeded > at > org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:333) > ~[hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:177) > ~[hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] > at > org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:73) > ~[hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:227) > [hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] > at java.security.AccessController.doPrivileged(Native Method) > ~[?:1.8.0_45] > at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_45] > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > [hadoop-common-2.7.1.2.3.5.1-36.jar:?] > at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:239) > [hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [?:1.8.0_45] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [?:1.8.0_45] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [?:1.8.0_45] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [?:1.8.0_45] > at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
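For the FileSystem part of the analysis above, the leak shape is easy to reproduce in miniature: a deleteOnExit-style registry only shrinks when the FileSystem is closed, which a long-lived HiveServer2 effectively never does. A hypothetical stand-in (this is not Hadoop's actual FileSystem API, just the retention pattern being described):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical mirror of the deleteOnExit tracker described above. Paths
// accumulate until close(), so a server that caches one FileSystem for its
// whole lifetime retains every registered temp path on the heap.
public class DeleteOnExitTracker {
    private final Set<String> pending = new HashSet<>();

    void deleteOnExit(String path) {
        pending.add(path);
    }

    // Workaround sketch: deregister a path as soon as the caller has
    // deleted it, instead of waiting for close()/JVM exit.
    void cancelDeleteOnExit(String path) {
        pending.remove(path);
    }

    int pendingCount() {
        return pending.size();
    }

    void close() {
        // A real implementation would delete every pending path here.
        pending.clear();
    }
}
```

The same shape applies to the FSStatistics retention: the fix is to consume or detach the per-instance state eagerly rather than letting it ride along with a never-closed FileSystem.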
[jira] [Updated] (HIVE-13176) OutOfMemoryError : GC overhead limit exceeded
[ https://issues.apache.org/jira/browse/HIVE-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13176: -- Description: Detected leaks while testing hiveserver2 concurrency setup with LLAP. 2016-02-26T12:50:58,131 ERROR [HiveServer2-Background-Pool: Thread-311030]: operation.Operation (SQLOperation.java:run(230)) - Error running hive query: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.StatsTask. GC overhead limit exceeded at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:333) ~[hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:177) ~[hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:73) ~[hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:227) [hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_45] at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_45] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.2.3.5.1-36.jar:?] 
at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:239) [hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_45] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_45] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_45] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_45] at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45] was: Detected Thread leaks while testing hiveserver2 concurrency setup with LLAP. 2016-02-26T12:50:58,131 ERROR [HiveServer2-Background-Pool: Thread-311030]: operation.Operation (SQLOperation.java:run(230)) - Error running hive query: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.StatsTask. GC overhead limit exceeded at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:333) ~[hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:177) ~[hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:73) ~[hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:227) [hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_45] at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_45] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.2.3.5.1-36.jar:?] 
at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:239) [hive-jdbc-2.0.0.2.3.5.1-36-standalone.jar:2.0.0.2.3.5.1-36] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_45] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_45] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_45] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_45] at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45] > OutOfMemoryError : GC overhead limit exceeded > -- > > Key: HIVE-13176 > URL: https://issues.apache.org/jira/browse/HIVE-13176 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Kavan Suresh >Assignee: Siddharth Seth > > Detected leaks while testing hiveserver2 concurrency setup with LLAP. > 2016-02-26T12:50:58,131 ERROR [HiveServer2-Background-Pool: Thread-311030]: > operation.Operation (SQLOperation.java:run(230)) - Error running hive query: > org.apache.hive.service.cli.HiveSQLException: Error while processing > statement: FAILED: Execution Error, return code -101 from > org.apache.hadoop.hive.ql.exec.StatsTask. GC overhead limit exceeded > at >
[jira] [Commented] (HIVE-9422) LLAP: row-level vectorized SARGs
[ https://issues.apache.org/jira/browse/HIVE-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172907#comment-15172907 ] Sergey Shelukhin commented on HIVE-9422: Sorry for the delay. 1) In general, this approach makes sense. The outer loop (for (int i = 0; i < maxBatchesRG; i++)) logic might need to change, since the number of batches might change. It would probably be good to run some tests on this with lots of rows filtered out. 2) sargApp.pickRow(cvb, sarged_cvb); - the implementation for this is not included. Note that the next optimization would be to filter based only on the relevant vectors and then fill in the others; that might be relevant for the pickRow implementation. 3) assert sarged_cvb.size >= cvb.size; - wouldn't this be false if some rows are filtered out? > LLAP: row-level vectorized SARGs > > > Key: HIVE-9422 > URL: https://issues.apache.org/jira/browse/HIVE-9422 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Sergey Shelukhin > Attachments: HIVE-9422.WIP1.patch > > > When VRBs are built from encoded data, sargs can be applied at a low level to > reduce the number of rows to process. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
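The pickRow implementation asked about in point 2 is not part of the WIP patch; below is a hedged sketch of the row-selection step using plain arrays in place of Hive's column vectors (all names are illustrative). It also shows why the assert in point 3 looks inverted: the output row count can only shrink.

```java
public class SargPick {
    // Compacts one "column vector" (modeled as a long[]) according to a
    // per-row selection mask, returning the surviving row count. A real
    // pickRow would repeat this for every projected column in the batch,
    // or better, filter on the SARG columns first and fill the rest lazily.
    static int pickRows(long[] col, boolean[] selected, int size, long[] out) {
        int n = 0;
        for (int i = 0; i < size; i++) {
            if (selected[i]) {
                out[n++] = col[i];
            }
        }
        return n; // always <= size, so the sarged batch never grows
    }
}
```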
[jira] [Commented] (HIVE-12994) Implement support for NULLS FIRST/NULLS LAST
[ https://issues.apache.org/jira/browse/HIVE-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172887#comment-15172887 ] Ashutosh Chauhan commented on HIVE-12994: - Are we reserving _nulls_ and _last_ as keywords in the grammar with this change? If not, can you add a test case like {code} create table nulls (last int); {code} to ensure this doesn't get impacted in the future? > Implement support for NULLS FIRST/NULLS LAST > > > Key: HIVE-12994 > URL: https://issues.apache.org/jira/browse/HIVE-12994 > Project: Hive > Issue Type: New Feature > Components: CBO, Parser, Serializers/Deserializers >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-12994.01.patch, HIVE-12994.02.patch, > HIVE-12994.03.patch, HIVE-12994.04.patch, HIVE-12994.05.patch, > HIVE-12994.06.patch, HIVE-12994.06.patch, HIVE-12994.07.patch, > HIVE-12994.08.patch, HIVE-12994.09.patch, HIVE-12994.10.patch, > HIVE-12994.11.patch, HIVE-12994.patch > > > From SQL:2003, the NULLS FIRST and NULLS LAST options can be used to > determine whether nulls appear before or after non-null data values when the > ORDER BY clause is used. > The SQL standard does not specify the default behavior. Currently in Hive, > null values sort as if lower than any non-null value; that is, NULLS FIRST is > the default for ASC order, and NULLS LAST for DESC order. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13186) ALTER TABLE RENAME should lowercase table name and hdfs location
[ https://issues.apache.org/jira/browse/HIVE-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172867#comment-15172867 ] Wei Zheng commented on HIVE-13186: -- Operations related to table creation will normally lowercase the db/table names before doing certain tasks. But for ALTER TABLE things are different. Specifically, if a table is renamed and given a new mixed-case name, the HDFS directory is created with the mixed case as-is. That is not consistent with CREATE TABLE behavior. Moreover, this will cause issues, for example if users want to load data directly to HDFS under the table directory, or use scripts to check data size on an HDFS path. Example: {code} hive> create table OldName (a int); OK Time taken: 3.125 seconds hive> desc formatted oldname; OK # col_name data_type comment a int # Detailed Table Information Database: default Owner: hive CreateTime: Mon Feb 29 23:29:33 UTC 2016 LastAccessTime: UNKNOWN Protect Mode: None Retention: 0 Location: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/oldname Table Type: MANAGED_TABLE Table Parameters: transient_lastDdlTime 1456788573 # Storage Information SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe InputFormat: org.apache.hadoop.mapred.TextInputFormat OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat Compressed: No Num Buckets: -1 Bucket Columns: [] Sort Columns: [] Storage Desc Params: serialization.format 1 Time taken: 0.754 seconds, Fetched: 26 row(s) hive> alter table oldname rename to newName; OK Time taken: 1.625 seconds hive> desc formatted newname; OK # col_name data_type comment a int # Detailed Table Information Database: default Owner: hive CreateTime: Mon Feb 29 23:29:33 UTC 2016 LastAccessTime: UNKNOWN Protect Mode: None Retention: 0 Location: hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/newName Table Type: MANAGED_TABLE Table Parameters: last_modified_by hive last_modified_time 1456788604 transient_lastDdlTime 1456788604 # 
Storage Information SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe InputFormat: org.apache.hadoop.mapred.TextInputFormat OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat Compressed: No Num Buckets: -1 Bucket Columns: [] Sort Columns: [] Storage Desc Params: serialization.format 1 Time taken: 0.485 seconds, Fetched: 28 row(s) {code} > ALTER TABLE RENAME should lowercase table name and hdfs location > > > Key: HIVE-13186 > URL: https://issues.apache.org/jira/browse/HIVE-13186 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
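A hedged sketch of the normalization being proposed: lowercase the identifier before deriving the warehouse location, the same way table creation does. The helper names are made up for illustration; the real change would live in Hive's rename handling:

```java
import java.util.Locale;

public class TableNameUtil {
    // Normalize an identifier so "newName" and "NewName" resolve to the
    // same table and, importantly, the same HDFS directory.
    static String normalize(String tableName) {
        return tableName.toLowerCase(Locale.ROOT);
    }

    static String warehousePath(String warehouseRoot, String tableName) {
        return warehouseRoot + "/" + normalize(tableName);
    }
}
```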
[jira] [Commented] (HIVE-13184) LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez plugin
[ https://issues.apache.org/jira/browse/HIVE-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172865#comment-15172865 ] Siddharth Seth commented on HIVE-13184: --- +1 > LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez > plugin > --- > > Key: HIVE-13184 > URL: https://issues.apache.org/jira/browse/HIVE-13184 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13184.01.patch, HIVE-13184.patch > > > NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13184) LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez plugin
[ https://issues.apache.org/jira/browse/HIVE-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13184: Attachment: HIVE-13184.01.patch > LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez > plugin > --- > > Key: HIVE-13184 > URL: https://issues.apache.org/jira/browse/HIVE-13184 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13184.01.patch, HIVE-13184.patch > > > NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13185) orc.ReaderImp.ensureOrcFooter() method fails on small text files with IndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/HIVE-13185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172853#comment-15172853 ] Illya Yalovyy commented on HIVE-13185: -- In OrcInputFormat.validateInput(...) it checks that file.size() is not 0, but a valid ORC file should be much larger than that. Is there a way to come up with the smallest valid ORC file? For instance, would it be correct to replace "if (file.getLen() == 0)" with "if (file.getLen() < OrcFile.MAGIC.length() + 1)"? > orc.ReaderImp.ensureOrcFooter() method fails on small text files with > IndexOutOfBoundsException > --- > > Key: HIVE-13185 > URL: https://issues.apache.org/jira/browse/HIVE-13185 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 2.1.0 >Reporter: Illya Yalovyy > > Steps to reproduce: > 1. Create a Text source table with one line of data: > {code} > create table src (id int); > insert overwrite table src values (1); > {code} > 2. Create a target table: > {code} > create table trg (id int); > {code} > 3. Try to load small text file to the target table: > {code} > load data inpath 'user/hive/warehouse/src/00_0' into table trg; > {code} > *Error message:* > {quote} > FAILED: SemanticException Unable to load data to destination table. Error: > java.lang.IndexOutOfBoundsException > {quote} > *Stack trace:* > {noformat} > org.apache.hadoop.hive.ql.parse.SemanticException: Unable to load data to > destination table. 
Error: java.lang.IndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer.ensureFileFormatsMatch(LoadSemanticAnalyzer.java:340) > at > org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer.analyzeInternal(LoadSemanticAnalyzer.java:224) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:242) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:481) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:317) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1190) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1285) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1116) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1104) > ... > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
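The tightened guard suggested above can be stated as a tiny predicate. MAGIC is hard-coded to "ORC" here for self-containment; the real constant is OrcFile.MAGIC, and whether magic-plus-one-byte is truly the minimum valid length is exactly the open question in the comment:

```java
public class OrcSizeGuard {
    static final String MAGIC = "ORC"; // same value as OrcFile.MAGIC

    // A file shorter than the magic plus at least one more byte cannot
    // be a valid ORC file, so reject it before trying to read a footer.
    static boolean couldBeOrc(long fileLen) {
        return fileLen >= MAGIC.length() + 1;
    }
}
```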
[jira] [Commented] (HIVE-10632) Make sure TXN_COMPONENTS gets cleaned up if table is dropped before compaction.
[ https://issues.apache.org/jira/browse/HIVE-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172854#comment-15172854 ] Alan Gates commented on HIVE-10632: --- TxnHandler.cleanupRecords For the first argument, enums or at least final statics are better than magic characters. TxnUtils.isAcidTable, I don't think we need to check beyond whether a table is marked transactional. If it's marked as transactional let's just take that as the truth and throw an error if we discover some other issue like it's not bucketed or whatever. Other than that, looks good. > Make sure TXN_COMPONENTS gets cleaned up if table is dropped before > compaction. > --- > > Key: HIVE-10632 > URL: https://issues.apache.org/jira/browse/HIVE-10632 > Project: Hive > Issue Type: Bug > Components: Metastore, Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Wei Zheng >Priority: Critical > Attachments: HIVE-10632.1.patch, HIVE-10632.2.patch, > HIVE-10632.3.patch, HIVE-10632.4.patch, HIVE-10632.5.patch > > > The compaction process will clean up entries in TXNS, > COMPLETED_TXN_COMPONENTS, TXN_COMPONENTS. If the table/partition is dropped > before compaction is complete there will be data left in these tables. Need > to investigate if there are other situations where this may happen and > address it. > see HIVE-10595 for additional info -- This message was sent by Atlassian JIRA (v6.3.4#6332)
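Alan's first point, typed constants instead of magic characters for the cleanupRecords argument, could look roughly like this (the enum members and character codes are illustrative, not TxnHandler's actual values):

```java
public class TxnRecords {
    // Typed alternative to passing bare chars like 't' or 'p' around.
    enum RecordType {
        DATABASE('d'), TABLE('t'), PARTITION('p');

        final char code; // character persisted in the metastore tables

        RecordType(char code) {
            this.code = code;
        }
    }

    static String describe(RecordType type) {
        return "cleaning up records for " + type + " (code=" + type.code + ")";
    }
}
```

A caller then writes cleanupRecords(RecordType.TABLE, ...) and the compiler rejects a stray character that a char parameter would silently accept.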
[jira] [Comment Edited] (HIVE-13185) orc.ReaderImp.ensureOrcFooter() method fails on small text files with IndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/HIVE-13185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172816#comment-15172816 ] Sergey Shelukhin edited comment on HIVE-13185 at 2/29/16 11:07 PM: --- Both can be done. My point is that the exceptions from corrupt files are often expected, so if there's some other issue like this, it's better to have it caught properly in the validator, to make sure we recognize the file as invalid instead of failing. Note that there isn't even an ORC table in the example, it's just running the validator on all supported formats to see if it happens to be ORC/... was (Author: sershe): Both can be done. My point is that the exceptions from corrupt files are often expected, so if there's some other issue like this, it's better to have it caught properly in the validator, to make sure we recognize the file as invalid instead of failing. > orc.ReaderImp.ensureOrcFooter() method fails on small text files with > IndexOutOfBoundsException > --- > > Key: HIVE-13185 > URL: https://issues.apache.org/jira/browse/HIVE-13185 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 2.1.0 >Reporter: Illya Yalovyy > > Steps to reproduce: > 1. Create a Text source table with one line of data: > {code} > create table src (id int); > insert overwrite table src values (1); > {code} > 2. Create a target table: > {code} > create table trg (id int); > {code} > 3. Try to load small text file to the target table: > {code} > load data inpath 'user/hive/warehouse/src/00_0' into table trg; > {code} > *Error message:* > {quote} > FAILED: SemanticException Unable to load data to destination table. Error: > java.lang.IndexOutOfBoundsException > {quote} > *Stack trace:* > {noformat} > org.apache.hadoop.hive.ql.parse.SemanticException: Unable to load data to > destination table. 
Error: java.lang.IndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer.ensureFileFormatsMatch(LoadSemanticAnalyzer.java:340) > at > org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer.analyzeInternal(LoadSemanticAnalyzer.java:224) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:242) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:481) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:317) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1190) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1285) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1116) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1104) > ... > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13185) orc.ReaderImp.ensureOrcFooter() method fails on small text files with IndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/HIVE-13185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172816#comment-15172816 ] Sergey Shelukhin commented on HIVE-13185: - Both can be done. My point is that the exceptions from corrupt files are often expected, so if there's some other issue like this, it's better to have it caught properly in the validator, to make sure we recognize the file as invalid instead of failing. > orc.ReaderImp.ensureOrcFooter() method fails on small text files with > IndexOutOfBoundsException > --- > > Key: HIVE-13185 > URL: https://issues.apache.org/jira/browse/HIVE-13185 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 2.1.0 >Reporter: Illya Yalovyy > > Steps to reproduce: > 1. Create a Text source table with one line of data: > {code} > create table src (id int); > insert overwrite table src values (1); > {code} > 2. Create a target table: > {code} > create table trg (id int); > {code} > 3. Try to load small text file to the target table: > {code} > load data inpath 'user/hive/warehouse/src/00_0' into table trg; > {code} > *Error message:* > {quote} > FAILED: SemanticException Unable to load data to destination table. Error: > java.lang.IndexOutOfBoundsException > {quote} > *Stack trace:* > {noformat} > org.apache.hadoop.hive.ql.parse.SemanticException: Unable to load data to > destination table. 
Error: java.lang.IndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer.ensureFileFormatsMatch(LoadSemanticAnalyzer.java:340) > at > org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer.analyzeInternal(LoadSemanticAnalyzer.java:224) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:242) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:481) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:317) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1190) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1285) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1116) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1104) > ... > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13185) orc.ReaderImp.ensureOrcFooter() method fails on small text files with IndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/HIVE-13185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172785#comment-15172785 ] Illya Yalovyy commented on HIVE-13185: -- I would prefer to check the buffer size before accessing it. It seems like a more robust way. > orc.ReaderImp.ensureOrcFooter() method fails on small text files with > IndexOutOfBoundsException > --- > > Key: HIVE-13185 > URL: https://issues.apache.org/jira/browse/HIVE-13185 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 2.1.0 >Reporter: Illya Yalovyy > > Steps to reproduce: > 1. Create a Text source table with one line of data: > {code} > create table src (id int); > insert overwrite table src values (1); > {code} > 2. Create a target table: > {code} > create table trg (id int); > {code} > 3. Try to load small text file to the target table: > {code} > load data inpath 'user/hive/warehouse/src/00_0' into table trg; > {code} > *Error message:* > {quote} > FAILED: SemanticException Unable to load data to destination table. Error: > java.lang.IndexOutOfBoundsException > {quote} > *Stack trace:* > {noformat} > org.apache.hadoop.hive.ql.parse.SemanticException: Unable to load data to > destination table. Error: java.lang.IndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer.ensureFileFormatsMatch(LoadSemanticAnalyzer.java:340) > at > org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer.analyzeInternal(LoadSemanticAnalyzer.java:224) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:242) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:481) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:317) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1190) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1285) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1116) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1104) > ... 
> {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
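The buffer-size check Illya prefers can be sketched as follows. This is an illustrative stand-in, not the actual `ensureOrcFooter()` code: the class name, method name, and the simplification of checking only the "ORC" magic bytes are all assumptions made for the example.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of "check buffer size before accessing it": a file too
// small to hold the ORC magic is reported as non-ORC instead of letting
// ByteBuffer.get() throw IndexOutOfBoundsException.
public class OrcFooterCheck {
    private static final byte[] MAGIC = "ORC".getBytes(StandardCharsets.UTF_8);

    public static boolean hasOrcMagic(ByteBuffer buffer) {
        if (buffer.remaining() < MAGIC.length) {
            return false; // tiny text files land here instead of throwing
        }
        for (int i = 0; i < MAGIC.length; i++) {
            if (buffer.get(buffer.position() + i) != MAGIC[i]) {
                return false; // right size, wrong content: still not ORC
            }
        }
        return true;
    }

    public static void main(String[] args) {
        ByteBuffer tiny = ByteBuffer.wrap(new byte[] {'1'});       // one-row text file
        ByteBuffer orc = ByteBuffer.wrap(new byte[] {'O', 'R', 'C'});
        System.out.println(hasOrcMagic(tiny)); // false
        System.out.println(hasOrcMagic(orc));  // true
    }
}
```

With this shape, the caller in LoadSemanticAnalyzer gets a clean boolean rather than an exception it has to interpret.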
[jira] [Updated] (HIVE-13095) Support view column authorization
[ https://issues.apache.org/jira/browse/HIVE-13095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-13095: --- Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks [~ashutoshc] for the review! > Support view column authorization > - > > Key: HIVE-13095 > URL: https://issues.apache.org/jira/browse/HIVE-13095 > Project: Hive > Issue Type: New Feature >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Fix For: 2.1.0 > > Attachments: HIVE-13095.01.patch, HIVE-13095.02.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13185) orc.ReaderImp.ensureOrcFooter() method fails on small text files with IndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/HIVE-13185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172755#comment-15172755 ] Sergey Shelukhin commented on HIVE-13185: - I wonder if this should catch all the exceptions, not just IO: {noformat} try { OrcFile.createReader(file.getPath(), OrcFile.readerOptions(conf).filesystem(fs)); } catch (IOException e) { return false; } {noformat}? The exception from trying to read a corrupted file (from ORC perspective) is expected. > orc.ReaderImp.ensureOrcFooter() method fails on small text files with > IndexOutOfBoundsException > --- > > Key: HIVE-13185 > URL: https://issues.apache.org/jira/browse/HIVE-13185 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 2.1.0 >Reporter: Illya Yalovyy > > Steps to reproduce: > 1. Create a Text source table with one line of data: > {code} > create table src (id int); > insert overwrite table src values (1); > {code} > 2. Create a target table: > {code} > create table trg (id int); > {code} > 3. Try to load small text file to the target table: > {code} > load data inpath 'user/hive/warehouse/src/00_0' into table trg; > {code} > *Error message:* > {quote} > FAILED: SemanticException Unable to load data to destination table. Error: > java.lang.IndexOutOfBoundsException > {quote} > *Stack trace:* > {noformat} > org.apache.hadoop.hive.ql.parse.SemanticException: Unable to load data to > destination table. 
Error: java.lang.IndexOutOfBoundsException > at > org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer.ensureFileFormatsMatch(LoadSemanticAnalyzer.java:340) > at > org.apache.hadoop.hive.ql.parse.LoadSemanticAnalyzer.analyzeInternal(LoadSemanticAnalyzer.java:224) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:242) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:481) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:317) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1190) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1285) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1116) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1104) > ... > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
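Sergey's alternative, widening the catch so that any failure while opening the reader means "not an ORC file", can be sketched with a stand-in for the `OrcFile.createReader` call. The `ReaderFactory` interface and class name below are invented for the example; the real probe would wrap `OrcFile.createReader(path, OrcFile.readerOptions(conf).filesystem(fs))`.

```java
// Hedged sketch: catch Exception rather than only IOException, so unchecked
// failures such as IndexOutOfBoundsException from tiny files also classify
// the file as non-ORC instead of failing the query analysis.
public class OrcProbe {
    // Stand-in for the OrcFile.createReader call; throwing means "could not parse".
    public interface ReaderFactory {
        void createReader() throws Exception;
    }

    public static boolean isOrcFile(ReaderFactory factory) {
        try {
            factory.createReader();
            return true;
        } catch (Exception e) { // IOException and IndexOutOfBoundsException alike
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isOrcFile(() -> { throw new IndexOutOfBoundsException(); })); // false
        System.out.println(isOrcFile(() -> { /* reader opens fine */ }));                // true
    }
}
```

The trade-off against the buffer-size check is that a blanket catch can also hide genuine bugs, which is presumably why the explicit size check reads as more robust.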
[jira] [Updated] (HIVE-13184) LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez plugin
[ https://issues.apache.org/jira/browse/HIVE-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13184: Status: Patch Available (was: Open) > LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez > plugin > --- > > Key: HIVE-13184 > URL: https://issues.apache.org/jira/browse/HIVE-13184 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13184.patch > > > NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13184) LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez plugin
[ https://issues.apache.org/jira/browse/HIVE-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13184: Attachment: HIVE-13184.patch [~sseth] can you take a look? This uses the new API > LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez > plugin > --- > > Key: HIVE-13184 > URL: https://issues.apache.org/jira/browse/HIVE-13184 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13184.patch > > > NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13151) Clean up UGI objects in FileSystem cache for transactions
[ https://issues.apache.org/jira/browse/HIVE-13151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13151: - Status: Patch Available (was: Open) > Clean up UGI objects in FileSystem cache for transactions > - > > Key: HIVE-13151 > URL: https://issues.apache.org/jira/browse/HIVE-13151 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > Attachments: HIVE-13151.1.patch, HIVE-13151.2.patch, > HIVE-13151.3..patch > > > One issue with FileSystem.CACHE is that it does not clean itself. The key in > that cache includes the UGI object. When new UGI objects are created and used > with the FileSystem API, new entries get added to the cache. > We need to manually clean up those UGI objects once they are no longer in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
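The cache-growth problem described above can be illustrated with a self-contained simulation. This is a deliberately simplified model, not Hadoop's actual `FileSystem.CACHE` code: the `Ugi` and `Key` classes are stand-ins, and `closeAllForUgi` plays the role of Hadoop's real `FileSystem.closeAllForUGI` cleanup API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Simplified model of why per-use UGI objects leak cache entries: the cache key
// compares UGIs by object identity, so two UGIs for the same user still produce
// two entries, and nothing evicts them automatically.
public class UgiCacheDemo {
    public static class Ugi {
        public final String user;
        public Ugi(String user) { this.user = user; }
    }

    public static class Key {
        final String scheme; final Ugi ugi;
        Key(String scheme, Ugi ugi) { this.scheme = scheme; this.ugi = ugi; }
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).scheme.equals(scheme)
                && ((Key) o).ugi == ugi; // identity comparison, not user equality
        }
        @Override public int hashCode() {
            return Objects.hash(scheme, System.identityHashCode(ugi));
        }
    }

    public static final Map<Key, String> CACHE = new HashMap<>();

    public static void get(Ugi ugi) {
        CACHE.putIfAbsent(new Key("hdfs", ugi), "fs-instance");
    }

    // Analogue of FileSystem.closeAllForUGI: drop every entry made under this UGI.
    public static void closeAllForUgi(Ugi ugi) {
        CACHE.keySet().removeIf(k -> k.ugi == ugi);
    }

    public static void main(String[] args) {
        Ugi a = new Ugi("hive"), b = new Ugi("hive"); // same user, distinct objects
        get(a);
        get(b);
        System.out.println(CACHE.size()); // 2: identical users, two entries
        closeAllForUgi(a);
        closeAllForUgi(b);
        System.out.println(CACHE.size()); // 0 only after explicit cleanup
    }
}
```

The fix the JIRA describes is exactly the explicit final step: call the cleanup for each transaction's UGI once it is no longer in use.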
[jira] [Updated] (HIVE-13151) Clean up UGI objects in FileSystem cache for transactions
[ https://issues.apache.org/jira/browse/HIVE-13151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13151: - Status: Open (was: Patch Available) > Clean up UGI objects in FileSystem cache for transactions > - > > Key: HIVE-13151 > URL: https://issues.apache.org/jira/browse/HIVE-13151 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > Attachments: HIVE-13151.1.patch, HIVE-13151.2.patch, > HIVE-13151.3..patch > > > One issue with FileSystem.CACHE is that it does not clean itself. The key in > that cache includes the UGI object. When new UGI objects are created and used > with the FileSystem API, new entries get added to the cache. > We need to manually clean up those UGI objects once they are no longer in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13175) Disallow making external tables transactional
[ https://issues.apache.org/jira/browse/HIVE-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13175: - Attachment: HIVE-13175.1.patch > Disallow making external tables transactional > - > > Key: HIVE-13175 > URL: https://issues.apache.org/jira/browse/HIVE-13175 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > Attachments: HIVE-13175.1.patch > > > The fact that the compactor rewrites the contents of ACID tables is in conflict with > what is expected of external tables. > Conversely, an end user can write to an external table, which is certainly not what is > expected of an ACID table. > So we should explicitly disallow making an external table ACID. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
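The proposed restriction can be sketched as a small validation step. The method name, property key handling, and error message below are illustrative assumptions, not Hive's actual implementation; the point is only that EXTERNAL plus `transactional=true` is rejected up front.

```java
import java.util.Map;

// Hedged sketch of "disallow making an external table ACID": reject the
// combination of an EXTERNAL table and TBLPROPERTIES('transactional'='true'),
// since the compactor would rewrite files the user owns, and users can write
// files the ACID machinery does not expect.
public class AcidTableCheck {
    public static void validate(boolean isExternal, Map<String, String> tblProps) {
        boolean transactional = "true".equalsIgnoreCase(tblProps.get("transactional"));
        if (isExternal && transactional) {
            throw new IllegalArgumentException(
                "An external table cannot be made transactional");
        }
    }

    public static void main(String[] args) {
        validate(false, Map.of("transactional", "true")); // managed ACID: allowed
        try {
            validate(true, Map.of("transactional", "true")); // external ACID: rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```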
[jira] [Updated] (HIVE-13175) Disallow making external tables transactional
[ https://issues.apache.org/jira/browse/HIVE-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13175: - Status: Patch Available (was: Open) > Disallow making external tables transactional > - > > Key: HIVE-13175 > URL: https://issues.apache.org/jira/browse/HIVE-13175 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > Attachments: HIVE-13175.1.patch > > > The fact that the compactor rewrites the contents of ACID tables is in conflict with > what is expected of external tables. > Conversely, an end user can write to an external table, which is certainly not what is > expected of an ACID table. > So we should explicitly disallow making an external table ACID. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10632) Make sure TXN_COMPONENTS gets cleaned up if table is dropped before compaction.
[ https://issues.apache.org/jira/browse/HIVE-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172667#comment-15172667 ] Wei Zheng commented on HIVE-10632: -- [~ekoifman] Feel free to take a look as well :) > Make sure TXN_COMPONENTS gets cleaned up if table is dropped before > compaction. > --- > > Key: HIVE-10632 > URL: https://issues.apache.org/jira/browse/HIVE-10632 > Project: Hive > Issue Type: Bug > Components: Metastore, Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Wei Zheng >Priority: Critical > Attachments: HIVE-10632.1.patch, HIVE-10632.2.patch, > HIVE-10632.3.patch, HIVE-10632.4.patch, HIVE-10632.5.patch > > > The compaction process will clean up entries in TXNS, > COMPLETED_TXN_COMPONENTS, TXN_COMPONENTS. If the table/partition is dropped > before compaction is complete there will be data left in these tables. Need > to investigate if there are other situations where this may happen and > address it. > see HIVE-10595 for additional info -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12757) Fix TestCodahaleMetrics#testFileReporting
[ https://issues.apache.org/jira/browse/HIVE-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-12757: - Status: Patch Available (was: Open) > Fix TestCodahaleMetrics#testFileReporting > - > > Key: HIVE-12757 > URL: https://issues.apache.org/jira/browse/HIVE-12757 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 2.1.0 >Reporter: Szehon Ho >Assignee: Szehon Ho > Attachments: HIVE-12757.2.patch, HIVE-12757.patch > > > Codahale Metrics file reporter is time based, hence test is as well. On slow > machines, sometimes the file is not written fast enough to be read. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13184) LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez plugin
[ https://issues.apache.org/jira/browse/HIVE-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13184: Description: NO PRECOMMIT TESTS > LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez > plugin > --- > > Key: HIVE-13184 > URL: https://issues.apache.org/jira/browse/HIVE-13184 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13184) LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez plugin
[ https://issues.apache.org/jira/browse/HIVE-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13184: Target Version/s: 2.0.1 > LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez > plugin > --- > > Key: HIVE-13184 > URL: https://issues.apache.org/jira/browse/HIVE-13184 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13184) LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez plugin
[ https://issues.apache.org/jira/browse/HIVE-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172647#comment-15172647 ] Sergey Shelukhin commented on HIVE-13184: - Targeting 2.0.1 for now, depends on an unreleased Tez patch. > LLAP: DAG credentials (e.g. HBase tokens) are not passed to the tasks in Tez > plugin > --- > > Key: HIVE-13184 > URL: https://issues.apache.org/jira/browse/HIVE-13184 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13169) HiveServer2: Support delegation token based connection when using http transport
[ https://issues.apache.org/jira/browse/HIVE-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172534#comment-15172534 ] Vaibhav Gumashta commented on HIVE-13169: - [~thejas] Noticed the change in PR. Thanks for the note. I'm +1 on the patch. > HiveServer2: Support delegation token based connection when using http > transport > > > Key: HIVE-13169 > URL: https://issues.apache.org/jira/browse/HIVE-13169 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Thejas M Nair > Attachments: HIVE-13169.1.patch, HIVE-13169.2.patch, > HIVE-13169.3.patch, HIVE-13169.3.patch, HIVE-13169.4.patch > > > HIVE-5155 introduced support for delegation token based connection. However, > it was intended for tcp transport mode. We need to have similar mechanisms > for http transport. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12749) Constant propagate returns string values in incorrect format
[ https://issues.apache.org/jira/browse/HIVE-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172529#comment-15172529 ] Aleksey Vovchenko commented on HIVE-12749: -- In the last patch I deleted the constprog2 and constprog_partitioner tests. Could you apply and merge it? > Constant propagate returns string values in incorrect format > > > Key: HIVE-12749 > URL: https://issues.apache.org/jira/browse/HIVE-12749 > Project: Hive > Issue Type: Bug >Affects Versions: 1.0.0, 1.2.0 >Reporter: Oleksiy Sayankin >Assignee: Aleksey Vovchenko > Fix For: 2.0.1 > > Attachments: HIVE-12749.1.patch, HIVE-12749.2.patch, > HIVE-12749.3.patch, HIVE-12749.4.patch, HIVE-12749.5.patch, > HIVE-12749.6.patch, HIVE-12749.7.patch, HIVE-12749.8.patch > > > h2. STEP 1. Create and upload test data > Execute in command line: > {noformat} > nano stest.data > {noformat} > Add to file: > {noformat} > 000126,000777 > 000126,000778 > 000126,000779 > 000474,000888 > 000468,000889 > 000272,000880 > {noformat} > {noformat} > hadoop fs -put stest.data / > {noformat} > {noformat} > hive> create table stest(x STRING, y STRING) ROW FORMAT DELIMITED FIELDS > TERMINATED BY ','; > hive> LOAD DATA INPATH '/stest.data' OVERWRITE INTO TABLE stest; > {noformat} > h2. STEP 2. Execute test query (with cast for x) > {noformat} > select x from stest where cast(x as int) = 126; > {noformat} > EXPECTED RESULT: > {noformat} > 000126 > 000126 > 000126 > {noformat} > ACTUAL RESULT: > {noformat} > 126 > 126 > 126 > {noformat} > h2. STEP 3. Execute test query (no cast for x) > {noformat} > hive> select x from stest where x = 126; > {noformat} > EXPECTED RESULT: > {noformat} > 000126 > 000126 > 000126 > {noformat} > ACTUAL RESULT: > {noformat} > 126 > 126 > 126 > {noformat} > In steps #2, #3 I expected '000126' because the original type of x is STRING in the > stest table. > Note, setting hive.optimize.constant.propagation=false fixes the issue. 
> {noformat} > hive> set hive.optimize.constant.propagation=false; > hive> select x from stest where x = 126; > OK > 000126 > 000126 > 000126 > {noformat} > Related to HIVE-11104, HIVE-8555 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
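The expected behavior in the repro above can be modeled in a few lines. This is a simplified illustration of the semantics, not Hive's planner code: `matches` models Hive's string-versus-int comparison (both sides converted to double), and the bug is that the optimizer substitutes the folded numeric constant for the column instead of projecting the original STRING value.

```java
// Simplified model: '000126' compares equal to 126 under numeric coercion,
// so the query matches those rows, but the projected value must remain the
// original zero-padded string, not the folded constant.
public class ConstPropDemo {
    public static boolean matches(String x, int literal) {
        return Double.parseDouble(x) == literal; // string-vs-int via double
    }

    public static void main(String[] args) {
        String[] xs = {"000126", "000126", "000126", "000474", "000468", "000272"};
        for (String x : xs) {
            if (matches(x, 126)) {
                System.out.println(x); // expected result: the padded value, 000126
            }
        }
    }
}
```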
[jira] [Commented] (HIVE-13169) HiveServer2: Support delegation token based connection when using http transport
[ https://issues.apache.org/jira/browse/HIVE-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172524#comment-15172524 ] Thejas M Nair commented on HIVE-13169: -- Yes, I was missing an update in the attached file, but that was present in the pull request. That is why the test was passing locally for me. The updated patch file has fix for that. I have also updated the file and pull request with your suggestions. I am using a new header name specific for hive. > HiveServer2: Support delegation token based connection when using http > transport > > > Key: HIVE-13169 > URL: https://issues.apache.org/jira/browse/HIVE-13169 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Thejas M Nair > Attachments: HIVE-13169.1.patch, HIVE-13169.2.patch, > HIVE-13169.3.patch, HIVE-13169.3.patch, HIVE-13169.4.patch > > > HIVE-5155 introduced support for delegation token based connection. However, > it was intended for tcp transport mode. We need to have similar mechanisms > for http transport. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13169) HiveServer2: Support delegation token based connection when using http transport
[ https://issues.apache.org/jira/browse/HIVE-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-13169: Assignee: Thejas M Nair (was: Vaibhav Gumashta) > HiveServer2: Support delegation token based connection when using http > transport > > > Key: HIVE-13169 > URL: https://issues.apache.org/jira/browse/HIVE-13169 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Thejas M Nair > Attachments: HIVE-13169.1.patch, HIVE-13169.2.patch, > HIVE-13169.3.patch, HIVE-13169.3.patch, HIVE-13169.4.patch > > > HIVE-5155 introduced support for delegation token based connection. However, > it was intended for tcp transport mode. We need to have similar mechanisms > for http transport. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13169) HiveServer2: Support delegation token based connection when using http transport
[ https://issues.apache.org/jira/browse/HIVE-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172505#comment-15172505 ] Vaibhav Gumashta commented on HIVE-13169: - [~thejas] Looks like org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForDBTokenStore failure is related. > HiveServer2: Support delegation token based connection when using http > transport > > > Key: HIVE-13169 > URL: https://issues.apache.org/jira/browse/HIVE-13169 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Attachments: HIVE-13169.1.patch, HIVE-13169.2.patch, > HIVE-13169.3.patch, HIVE-13169.3.patch, HIVE-13169.4.patch > > > HIVE-5155 introduced support for delegation token based connection. However, > it was intended for tcp transport mode. We need to have similar mechanisms > for http transport. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12757) Fix TestCodahaleMetrics#testFileReporting
[ https://issues.apache.org/jira/browse/HIVE-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172490#comment-15172490 ] Aihua Xu commented on HIVE-12757: - The new patch looks good to me. +1. > Fix TestCodahaleMetrics#testFileReporting > - > > Key: HIVE-12757 > URL: https://issues.apache.org/jira/browse/HIVE-12757 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 2.1.0 >Reporter: Szehon Ho >Assignee: Szehon Ho > Attachments: HIVE-12757.2.patch, HIVE-12757.patch > > > Codahale Metrics file reporter is time based, hence test is as well. On slow > machines, sometimes the file is not written fast enough to be read. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13160) HS2 unable to load UDFs on startup when HMS is not ready
[ https://issues.apache.org/jira/browse/HIVE-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172469#comment-15172469 ] Sergey Shelukhin commented on HIVE-13160: - I was referring to a hypothetical scenario with multiple metastores with different DBs, that probably only happens on a dev cluster where I was running my own metastore and there were multiple configs coming from different places. +1 on the 2nd patch. Can you check if test failures are related? They look unrelated to me. > HS2 unable to load UDFs on startup when HMS is not ready > > > Key: HIVE-13160 > URL: https://issues.apache.org/jira/browse/HIVE-13160 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 1.2.1 >Reporter: Eric Lin >Assignee: Aihua Xu > Attachments: HIVE-13160.1.patch, HIVE-13160.2.patch > > > The error looks like this: > {code} > 2016-02-18 14:43:54,251 INFO hive.metastore: [main]: Trying to connect to > metastore with URI thrift://host-10-17-81-201.coe.cloudera.com:9083 > 2016-02-18 14:48:54,692 WARN hive.metastore: [main]: Failed to connect to > the MetaStore Server... > 2016-02-18 14:48:54,692 INFO hive.metastore: [main]: Waiting 1 seconds > before next connection attempt. > 2016-02-18 14:48:55,692 INFO hive.metastore: [main]: Trying to connect to > metastore with URI thrift://host-10-17-81-201.coe.cloudera.com:9083 > 2016-02-18 14:53:55,800 WARN hive.metastore: [main]: Failed to connect to > the MetaStore Server... > 2016-02-18 14:53:55,800 INFO hive.metastore: [main]: Waiting 1 seconds > before next connection attempt. > 2016-02-18 14:53:56,801 INFO hive.metastore: [main]: Trying to connect to > metastore with URI thrift://host-10-17-81-201.coe.cloudera.com:9083 > 2016-02-18 14:58:56,967 WARN hive.metastore: [main]: Failed to connect to > the MetaStore Server... > 2016-02-18 14:58:56,967 INFO hive.metastore: [main]: Waiting 1 seconds > before next connection attempt. 
> 2016-02-18 14:58:57,994 WARN hive.ql.metadata.Hive: [main]: Failed to > register all functions. > java.lang.RuntimeException: Unable to instantiate > org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1492) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:64) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:74) > at > org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2915) > ... > 016-02-18 14:58:57,997 INFO hive.metastore: [main]: Trying to connect to > metastore with URI thrift://host-10-17-81-201.coe.cloudera.com:9083 > 2016-02-18 15:03:58,094 WARN hive.metastore: [main]: Failed to connect to > the MetaStore Server... > 2016-02-18 15:03:58,095 INFO hive.metastore: [main]: Waiting 1 seconds > before next connection attempt. > 2016-02-18 15:03:59,095 INFO hive.metastore: [main]: Trying to connect to > metastore with URI thrift://host-10-17-81-201.coe.cloudera.com:9083 > 2016-02-18 15:08:59,203 WARN hive.metastore: [main]: Failed to connect to > the MetaStore Server... > 2016-02-18 15:08:59,203 INFO hive.metastore: [main]: Waiting 1 seconds > before next connection attempt. > 2016-02-18 15:09:00,203 INFO hive.metastore: [main]: Trying to connect to > metastore with URI thrift://host-10-17-81-201.coe.cloudera.com:9083 > 2016-02-18 15:14:00,304 WARN hive.metastore: [main]: Failed to connect to > the MetaStore Server... > 2016-02-18 15:14:00,304 INFO hive.metastore: [main]: Waiting 1 seconds > before next connection attempt. 
> 2016-02-18 15:14:01,306 INFO org.apache.hive.service.server.HiveServer2: > [main]: Shutting down HiveServer2 > 2016-02-18 15:14:01,308 INFO org.apache.hive.service.server.HiveServer2: > [main]: Exception caught when calling stop of HiveServer2 before retrying > start > java.lang.NullPointerException > at > org.apache.hive.service.server.HiveServer2.stop(HiveServer2.java:283) > at > org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:351) > at > org.apache.hive.service.server.HiveServer2.access$400(HiveServer2.java:69) > at > org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:545) > {code} > And then none of the functions will be available for use as HS2 does not > re-register them after HMS is up and ready. > This is not desired behaviour, we shouldn't allow HS2 to be in a servicing > state if function list is not ready. Or, maybe instead of initialize the > function list when HS2 starts, try to load the function list when each Hive > session is created. Of course we can have a
[jira] [Updated] (HIVE-13175) Disallow making external tables transactional
[ https://issues.apache.org/jira/browse/HIVE-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-13175: -- Component/s: (was: Hive) > Disallow making external tables transactional > - > > Key: HIVE-13175 > URL: https://issues.apache.org/jira/browse/HIVE-13175 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > > The fact that the compactor rewrites the contents of ACID tables is in conflict with > what is expected of external tables. > Conversely, an end user can write to an external table, which is certainly not what is > expected of an ACID table. > So we should explicitly disallow making an external table ACID. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13175) Disallow making external tables transactional
[ https://issues.apache.org/jira/browse/HIVE-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-13175: -- Target Version/s: 1.3.0, 2.1.0 (was: 2.1.0) > Disallow making external tables transactional > - > > Key: HIVE-13175 > URL: https://issues.apache.org/jira/browse/HIVE-13175 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > > The fact that the compactor rewrites the contents of ACID tables is in conflict with > what is expected of external tables. > Conversely, an end user can write to an external table, which is certainly not what is > expected of an ACID table. > So we should explicitly disallow making an external table ACID. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13175) Disallow making external tables transactional
[ https://issues.apache.org/jira/browse/HIVE-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-13175: -- Component/s: Transactions > Disallow making external tables transactional > - > > Key: HIVE-13175 > URL: https://issues.apache.org/jira/browse/HIVE-13175 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > > The fact that the compactor rewrites the contents of ACID tables is in conflict with > what is expected of external tables. > Conversely, an end user can write to an external table, which is certainly not what is > expected of an ACID table. > So we should explicitly disallow making an external table ACID. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13013) Further Improve concurrency in TxnHandler
[ https://issues.apache.org/jira/browse/HIVE-13013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-13013: -- Resolution: Fixed Fix Version/s: 2.1.0 1.3.0 Status: Resolved (was: Patch Available) > Further Improve concurrency in TxnHandler > - > > Key: HIVE-13013 > URL: https://issues.apache.org/jira/browse/HIVE-13013 > Project: Hive > Issue Type: Bug > Components: Metastore, Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Fix For: 1.3.0, 2.1.0 > > Attachments: HIVE-13013.2.patch, HIVE-13013.3.patch, > HIVE-13013.4.patch, HIVE-13013.patch > > > There are still a few operations in TxnHandler that run at Serializable > isolation. > Most or all of them can be dropped to READ_COMMITTED now that we have SELECT > ... FOR UPDATE support. This will reduce number of deadlocks in the DBs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12757) Fix TestCodahaleMetrics#testFileReporting
[ https://issues.apache.org/jira/browse/HIVE-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-12757: - Attachment: HIVE-12757.2.patch Attach a more sophisticated fix for the file-reporter test, where I try 3 times, each with a timeout of 2000, to read the file, and migrate other time-dependent tests to use the JSON dump, which is instantaneous. > Fix TestCodahaleMetrics#testFileReporting > - > > Key: HIVE-12757 > URL: https://issues.apache.org/jira/browse/HIVE-12757 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 2.1.0 >Reporter: Szehon Ho >Assignee: Szehon Ho > Attachments: HIVE-12757.2.patch, HIVE-12757.patch > > > Codahale Metrics file reporter is time based, hence the test is as well. On slow > machines, sometimes the file is not written fast enough to be read. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
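The retry strategy described in the comment above can be sketched as a polling helper. The class and method names are invented for the example; the attempt count and sleep echo the "3 times, timeout of 2000" description, and the timeout unit is an assumption.

```java
import java.io.File;
import java.nio.file.Files;

// Hedged sketch: instead of a single timed read, poll for the reporter's output
// file a fixed number of times before giving up, so a slow machine gets extra
// reporting periods rather than a spurious test failure.
public class FileReportRetry {
    public static boolean waitForFile(File f, int attempts, long sleepMs)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            if (f.exists() && f.length() > 0) {
                return true; // the reporter has flushed; safe to read now
            }
            Thread.sleep(sleepMs); // give the time-based reporter another period
        }
        return false; // fail explicitly instead of reading a missing/empty file
    }

    public static void main(String[] args) throws Exception {
        File tmp = File.createTempFile("metrics", ".json");
        Files.writeString(tmp.toPath(), "{\"gauges\":{}}");
        System.out.println(waitForFile(tmp, 3, 10)); // true
        tmp.delete();
    }
}
```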
[jira] [Updated] (HIVE-13151) Clean up UGI objects in FileSystem cache for transactions
[ https://issues.apache.org/jira/browse/HIVE-13151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13151: - Attachment: HIVE-13151.3..patch The test failures don't look related, and I cannot reproduce them locally. Uploading patch 3 with exactly the same content as patch 2 to trigger another QA run. > Clean up UGI objects in FileSystem cache for transactions > - > > Key: HIVE-13151 > URL: https://issues.apache.org/jira/browse/HIVE-13151 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.0.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > Attachments: HIVE-13151.1.patch, HIVE-13151.2.patch, > HIVE-13151.3..patch > > > One issue with FileSystem.CACHE is that it does not clean itself. The key in > that cache includes the UGI object. When new UGI objects are created and used > with the FileSystem API, new entries get added to the cache. > We need to manually clean up those UGI objects once they are no longer in use. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-13173) LLAP: Add end-to-end test for LlapInputFormat
[ https://issues.apache.org/jira/browse/HIVE-13173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere resolved HIVE-13173. --- Resolution: Fixed Fix Version/s: llap Thanks [~leftylev], yes this was committed to the LLAP branch. > LLAP: Add end-to-end test for LlapInputFormat > - > > Key: HIVE-13173 > URL: https://issues.apache.org/jira/browse/HIVE-13173 > Project: Hive > Issue Type: Sub-task > Components: Tests >Reporter: Jason Dere >Assignee: Jason Dere > Fix For: llap > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12679) Allow users to be able to specify an implementation of IMetaStoreClient via HiveConf
[ https://issues.apache.org/jira/browse/HIVE-12679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172392#comment-15172392 ] Alan Gates commented on HIVE-12679: --- I looked over the patch. The code itself seems fine. The question I have is about the approach. There are several features tied into SessionHiveMetaStoreClient and HiveMetaStoreClient (temp tables, metastore hooks, how to connect to a remote metastore, as well as the new file footer cache). I'd like to better understand what flexibility you need. If you just want to avoid connecting to the Thrift server, that can be accomplished in the current code (e.g. HS2 usually runs this way; the fast-path stuff in there runs this way). Is there some feature you need there that can't be added to HiveMetaStoreClient or SessionHiveMetaStoreClient? > Allow users to be able to specify an implementation of IMetaStoreClient via > HiveConf > > > Key: HIVE-12679 > URL: https://issues.apache.org/jira/browse/HIVE-12679 > Project: Hive > Issue Type: Improvement > Components: Configuration, Metastore, Query Planning >Affects Versions: 2.1.0 >Reporter: Austin Lee >Assignee: Austin Lee >Priority: Minor > Labels: metastore > Attachments: HIVE-12679.1.patch, HIVE-12679.patch > > > Hi, > I would like to propose a change that would make it possible for users to > choose an implementation of IMetaStoreClient via HiveConf, i.e. > hive-site.xml. Currently, in Hive the choice is hard coded to be > SessionHiveMetaStoreClient in org.apache.hadoop.hive.ql.metadata.Hive. There > is no other direct reference to SessionHiveMetaStoreClient other than the > hard coded class name in Hive.java, and the QL component operates only on the > IMetaStoreClient interface, so the change would be minimal and it would be > quite similar to how an implementation of RawStore is specified and loaded in > hive-metastore. 
One use case this change would serve would be one where a > user wishes to use an implementation of this interface without the dependency > on the Thrift server. > > Thank you, > Austin -- This message was sent by Atlassian JIRA (v6.3.4#6332)
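The RawStore-style loading Austin refers to can be sketched generically: read an implementation class name from configuration and instantiate it by reflection, falling back to a default when the key is unset. The interface, config key, and class names below are illustrative stand-ins for this discussion, not Hive's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of choosing a metastore-client implementation via a
// config key instead of a hard-coded class name. MetaStoreClient and the
// key name are stand-ins, not Hive's real IMetaStoreClient or HiveConf.
interface MetaStoreClient {
    String getTable(String name);
}

class DefaultMetaStoreClient implements MetaStoreClient {
    public String getTable(String name) { return "default:" + name; }
}

public class ClientFactory {
    static final String IMPL_KEY = "example.metastore.client.impl"; // hypothetical key

    public static MetaStoreClient create(Map<String, String> conf) {
        // Fall back to the default implementation when the key is unset,
        // mirroring how a RawStore implementation is picked up from config.
        String className = conf.getOrDefault(IMPL_KEY, DefaultMetaStoreClient.class.getName());
        try {
            return (MetaStoreClient) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("cannot load client " + className, e);
        }
    }

    public static void main(String[] args) {
        MetaStoreClient client = create(new HashMap<>());
        System.out.println(client.getTable("t1")); // prints "default:t1"
    }
}
```

Since QL already operates only on the client interface, a factory like this would be the single point of change.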
[jira] [Commented] (HIVE-5370) format_number udf should take user specified format as argument
[ https://issues.apache.org/jira/browse/HIVE-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172368#comment-15172368 ] Hive QA commented on HIVE-5370: --- Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12629473/HIVE-5370.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7130/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7130/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7130/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]] + export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera + JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera + export PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin + PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + cd /data/hive-ptest/working/ + tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-7130/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p 
maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! -d apache-github-source-source ]] + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 69d7ff1..aaa3569 master -> origin/master + git reset --hard HEAD HEAD is now at 69d7ff1 HIVE-13009 : Fix add_jar_file.q on Windows (Jason Dere via Ashutosh Chauhan) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. + git reset --hard origin/master HEAD is now at aaa3569 HIVE-13013 - Further Improve concurrency in TxnHandler (Eugene Koifman, reviewed by Alan Gates) + git merge --ff-only origin/master Already up-to-date. + git gc + patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hive-ptest/working/scratch/build.patch + [[ -f /data/hive-ptest/working/scratch/build.patch ]] + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12629473 - PreCommit-HIVE-TRUNK-Build > format_number udf should take user specifed format as argument > -- > > Key: HIVE-5370 > URL: https://issues.apache.org/jira/browse/HIVE-5370 > Project: Hive > Issue Type: Improvement > Components: UDF >Reporter: Amareshwari Sriramadasu >Assignee: Amareshwari Sriramadasu >Priority: Minor > Attachments: D13185.1.patch, D13185.2.patch, HIVE-5370.patch, > HIVE-5370.patch > > > Currently, format_number udf formats the number to #,###,###.##, but it > should also take a user specified format as optional input. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13179) Allow custom HiveConf to be passed to Authentication Providers
[ https://issues.apache.org/jira/browse/HIVE-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172360#comment-15172360 ] Hive QA commented on HIVE-13179: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12790439/HIVE-13179.1.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 9753 tests executed *Failed tests:* {noformat} TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-timestamp_lazy.q-bucketsortoptimize_insert_4.q-date_udf.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping org.apache.hive.jdbc.TestSSL.testSSLFetchHttp org.apache.hive.jdbc.TestSSL.testSSLVersion {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7129/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7129/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7129/ Messages: {noformat} Executing 
org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 11 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12790439 - PreCommit-HIVE-TRUNK-Build > Allow custom HiveConf to be passed to Authentication Providers > -- > > Key: HIVE-13179 > URL: https://issues.apache.org/jira/browse/HIVE-13179 > Project: Hive > Issue Type: Improvement >Reporter: Rajat Khandelwal >Assignee: Rajat Khandelwal > Attachments: HIVE-13179.1.patch, HIVE-13179.patch, HIVE-13179.patch > > > Right now if I want to create an ldap auth provider, I have to create a > hive-site.xml, set endpoints and other relevant properties there, then > instantiate `LdapAuthenticationProviderImpl`, since inside the constructor, a > new HiveConf is constructed. > A better and more reusable design would be to ask for the conf in the > constructor itself. That will allow an external user to create a HiveConf, > set all relevant properties and instantiate `LdapAuthenticationProviderImpl` > with that conf. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
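The constructor-injection design proposed in HIVE-13179 can be sketched as follows. All names here are hypothetical stand-ins for Hive's LdapAuthenticationProviderImpl and HiveConf; the point is only that the caller supplies the configuration instead of the provider constructing a fresh one internally.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the proposed change: the auth provider takes its
// configuration in the constructor, so an external user can build a conf,
// set endpoints, and instantiate the provider without a hive-site.xml.
public class LdapAuthSketch {
    interface AuthProvider { void authenticate(String user, String password); }

    static class ConfBackedProvider implements AuthProvider {
        private final Map<String, String> conf;

        // Caller-supplied conf; nothing is read from a global hive-site.
        ConfBackedProvider(Map<String, String> conf) { this.conf = conf; }

        public void authenticate(String user, String password) {
            String url = conf.get("ldap.url"); // hypothetical property name
            if (url == null) throw new IllegalStateException("ldap.url not set");
            // ... a real provider would bind against the LDAP server here ...
        }

        String endpoint() { return conf.get("ldap.url"); }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("ldap.url", "ldap://example.com:389");
        System.out.println(new ConfBackedProvider(conf).endpoint());
    }
}
```

This also makes the provider straightforward to unit-test, since a test can pass in any conf it likes.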
[jira] [Commented] (HIVE-10632) Make sure TXN_COMPONENTS gets cleaned up if table is dropped before compaction.
[ https://issues.apache.org/jira/browse/HIVE-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172353#comment-15172353 ] Wei Zheng commented on HIVE-10632: -- [~alangates] Can you take a look? > Make sure TXN_COMPONENTS gets cleaned up if table is dropped before > compaction. > --- > > Key: HIVE-10632 > URL: https://issues.apache.org/jira/browse/HIVE-10632 > Project: Hive > Issue Type: Bug > Components: Metastore, Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Wei Zheng >Priority: Critical > Attachments: HIVE-10632.1.patch, HIVE-10632.2.patch, > HIVE-10632.3.patch, HIVE-10632.4.patch, HIVE-10632.5.patch > > > The compaction process will clean up entries in TXNS, > COMPLETED_TXN_COMPONENTS, TXN_COMPONENTS. If the table/partition is dropped > before compaction is complete there will be data left in these tables. Need > to investigate if there are other situations where this may happen and > address it. > see HIVE-10595 for additional info -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13145) Separate the output path of metrics file of HS2 and HMS
[ https://issues.apache.org/jira/browse/HIVE-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172347#comment-15172347 ] Szehon Ho commented on HIVE-13145: -- And for the embedded mode, I thought that HS2 and HMS each have a unique set of metrics. I could be missing something; if that's not the case, then this patch makes sense. > Separate the output path of metrics file of HS2 and HMS > --- > > Key: HIVE-13145 > URL: https://issues.apache.org/jira/browse/HIVE-13145 > Project: Hive > Issue Type: Improvement > Components: HiveServer2, Metastore >Reporter: Shinichi Yamashita >Assignee: Shinichi Yamashita > Attachments: HIVE-13145.1.patch, HIVE-13145.2.patch > > > The output path of metrics file of HS2 and HMS can define by > {{hive.service.metrics.file.location}} property at present. > When it starts HS2 and HMS by the same server, both metrics is written in the > same file. And when confirming this file, it is difficult to judge which > metrics it is. > Therefore the output path of metrics file of HS2 and HMS is separated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13145) Separate the output path of metrics file of HS2 and HMS
[ https://issues.apache.org/jira/browse/HIVE-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172343#comment-15172343 ] Szehon Ho commented on HIVE-13145: -- Thanks for the patch. Can I ask what problem it tries to solve? I'm not against the patch, but just wanted to see if there are already some solutions. I thought initially that if HMS and HS2 are separate processes on the same machine, then you can configure both to have a different hive-site and thus a different metrics-file location. If it is embedded mode and they are both the same process, then both are written to the same file, which doesn't seem like a problem to me? > Separate the output path of metrics file of HS2 and HMS > --- > > Key: HIVE-13145 > URL: https://issues.apache.org/jira/browse/HIVE-13145 > Project: Hive > Issue Type: Improvement > Components: HiveServer2, Metastore >Reporter: Shinichi Yamashita >Assignee: Shinichi Yamashita > Attachments: HIVE-13145.1.patch, HIVE-13145.2.patch > > > The output path of metrics file of HS2 and HMS can define by > {{hive.service.metrics.file.location}} property at present. > When it starts HS2 and HMS by the same server, both metrics is written in the > same file. And when confirming this file, it is difficult to judge which > metrics it is. > Therefore the output path of metrics file of HS2 and HMS is separated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13063) Create UDFs for CHR and REPLACE
[ https://issues.apache.org/jira/browse/HIVE-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172291#comment-15172291 ] Jason Dere commented on HIVE-13063: --- [~afernandez], looks like the golden files for show_functions.q/udf_chr.q may need to be updated. > Create UDFs for CHR and REPLACE > > > Key: HIVE-13063 > URL: https://issues.apache.org/jira/browse/HIVE-13063 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 1.2.0 >Reporter: Alejandro Fernandez >Assignee: Alejandro Fernandez > Fix For: 2.1.0 > > Attachments: HIVE-13063.patch, Screen Shot 2016-02-17 at 7.20.57 > PM.png, Screen Shot 2016-02-17 at 7.21.07 PM.png > > > Create UDFS for these functions. > CHR: convert n where n : [0, 256) into the ascii equivalent as a varchar. If > n is less than 0 or greater than 255, return the empty string. If n is 0, > return null. > REPLACE: replace all substrings of 'str' that match 'search' with 'rep'. > Example. SELECT REPLACE('Hack and Hue', 'H', 'BL'); > Equals 'BLack and BLue'" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
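The CHR and REPLACE semantics spelled out in the HIVE-13063 description can be expressed compactly. This is a plain-Java sketch of the specified behavior, not Hive's actual UDF code.

```java
// Sketch of the two UDFs as specified in the issue description above.
public class StringUdfSketch {
    // CHR: n in [0, 256) maps to its ASCII character; values outside that
    // range return the empty string, and 0 returns null, per the spec.
    static String chr(long n) {
        if (n < 0 || n > 255) return "";
        if (n == 0) return null;
        return String.valueOf((char) n);
    }

    // REPLACE: replace all occurrences of 'search' in 'str' with 'rep'.
    static String replace(String str, String search, String rep) {
        return str.replace(search, rep);
    }

    public static void main(String[] args) {
        System.out.println(chr(72));                            // H
        System.out.println(replace("Hack and Hue", "H", "BL")); // BLack and BLue
    }
}
```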
[jira] [Commented] (HIVE-13146) OrcFile table property values are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-13146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172171#comment-15172171 ] Aihua Xu commented on HIVE-13146: - Seems the change in SemanticAnalyzer.java is not necessary? The change looks good since actually we can't override ValueOf(). +1. > OrcFile table property values are case sensitive > > > Key: HIVE-13146 > URL: https://issues.apache.org/jira/browse/HIVE-13146 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 1.2.1 >Reporter: Andrew Sears >Assignee: Yongzhi Chen >Priority: Minor > Attachments: HIVE-13146.1.patch, HIVE-13146.2.patch, > HIVE-13146.3.patch > > > In Hive v1.2.1.2.3, with Tez , create an external table with compression > SNAPPY value marked as lower case. Table is created successfully. Insert > data into table fails with no enum constant error. > CREATE EXTERNAL TABLE mydb.mytable > (id int) > PARTITIONED BY (business_date date) > STORED AS ORC > LOCATION > '/data/mydb/mytable' > TBLPROPERTIES ( > 'orc.compress'='snappy'); > set hive.exec.dynamic.partition=true; > set hive.exec.dynamic.partition.mode=nonstrict; > INSERT OVERWRITE mydb.mytable PARTITION (business_date) > SELECT * from mydb.sourcetable; > Caused by: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hive.ql.io.orc.CompressionKind.snappy > at java.lang.Enum.valueOf(Enum.java:238) > at > org.apache.hadoop.hive.ql.io.orc.CompressionKind.valueOf(CompressionKind.java:25) > Constant SNAPPY needs to be uppercase in definition to fix. Case should be > agnostic or throw error on creation of table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13146) OrcFile table property values are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-13146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongzhi Chen updated HIVE-13146: Attachment: HIVE-13146.3.patch attach patch3, it should cover all the required cases. > OrcFile table property values are case sensitive > > > Key: HIVE-13146 > URL: https://issues.apache.org/jira/browse/HIVE-13146 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 1.2.1 >Reporter: Andrew Sears >Assignee: Yongzhi Chen >Priority: Minor > Attachments: HIVE-13146.1.patch, HIVE-13146.2.patch, > HIVE-13146.3.patch > > > In Hive v1.2.1.2.3, with Tez , create an external table with compression > SNAPPY value marked as lower case. Table is created successfully. Insert > data into table fails with no enum constant error. > CREATE EXTERNAL TABLE mydb.mytable > (id int) > PARTITIONED BY (business_date date) > STORED AS ORC > LOCATION > '/data/mydb/mytable' > TBLPROPERTIES ( > 'orc.compress'='snappy'); > set hive.exec.dynamic.partition=true; > set hive.exec.dynamic.partition.mode=nonstrict; > INSERT OVERWRITE mydb.mytable PARTITION (business_date) > SELECT * from mydb.sourcetable; > Caused by: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hive.ql.io.orc.CompressionKind.snappy > at java.lang.Enum.valueOf(Enum.java:238) > at > org.apache.hadoop.hive.ql.io.orc.CompressionKind.valueOf(CompressionKind.java:25) > Constant SNAPPY needs to be uppercase in definition to fix. Case should be > agnostic or throw error on creation of table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13169) HiveServer2: Support delegation token based connection when using http transport
[ https://issues.apache.org/jira/browse/HIVE-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-13169: - Attachment: HIVE-13169.4.patch > HiveServer2: Support delegation token based connection when using http > transport > > > Key: HIVE-13169 > URL: https://issues.apache.org/jira/browse/HIVE-13169 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Attachments: HIVE-13169.1.patch, HIVE-13169.2.patch, > HIVE-13169.3.patch, HIVE-13169.3.patch, HIVE-13169.4.patch > > > HIVE-5155 introduced support for delegation token based connection. However, > it was intended for tcp transport mode. We need to have similar mechanisms > for http transport. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13146) OrcFile table property values are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-13146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172077#comment-15172077 ] Aihua Xu commented on HIVE-13146: - Seems we should also handle "ALTER TABLE table_name SET TBLPROPERTIES" case. Probably we can do that in org.apache.hadoop.hive.ql.io.orc.CompressionKind class to override ValueOf() so we can handle both cases in one place. When it's called, we will convert to upper case and throw exception if it's not one of the value. > OrcFile table property values are case sensitive > > > Key: HIVE-13146 > URL: https://issues.apache.org/jira/browse/HIVE-13146 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 1.2.1 >Reporter: Andrew Sears >Assignee: Yongzhi Chen >Priority: Minor > Attachments: HIVE-13146.1.patch, HIVE-13146.2.patch > > > In Hive v1.2.1.2.3, with Tez , create an external table with compression > SNAPPY value marked as lower case. Table is created successfully. Insert > data into table fails with no enum constant error. > CREATE EXTERNAL TABLE mydb.mytable > (id int) > PARTITIONED BY (business_date date) > STORED AS ORC > LOCATION > '/data/mydb/mytable' > TBLPROPERTIES ( > 'orc.compress'='snappy'); > set hive.exec.dynamic.partition=true; > set hive.exec.dynamic.partition.mode=nonstrict; > INSERT OVERWRITE mydb.mytable PARTITION (business_date) > SELECT * from mydb.sourcetable; > Caused by: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hive.ql.io.orc.CompressionKind.snappy > at java.lang.Enum.valueOf(Enum.java:238) > at > org.apache.hadoop.hive.ql.io.orc.CompressionKind.valueOf(CompressionKind.java:25) > Constant SNAPPY needs to be uppercase in definition to fix. Case should be > agnostic or throw error on creation of table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
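As noted in the review comments, Java enums cannot override the compiler-generated valueOf(); a common workaround is a static helper on the enum that normalizes case before the lookup. The enum below is a stand-in for illustration, not Hive's actual CompressionKind class.

```java
import java.util.Locale;

// Sketch of the case-insensitivity fix discussed above, using a stand-in
// enum. A static fromString helper upper-cases the input so 'snappy' and
// 'SNAPPY' resolve to the same constant.
public class OrcCaseSketch {
    enum CompressionKindDemo {
        NONE, ZLIB, SNAPPY, LZO;

        static CompressionKindDemo fromString(String s) {
            // valueOf still throws IllegalArgumentException for genuinely
            // unknown values, so bad table properties fail loudly.
            return valueOf(s.toUpperCase(Locale.ROOT));
        }
    }

    public static void main(String[] args) {
        System.out.println(CompressionKindDemo.fromString("snappy")); // SNAPPY
    }
}
```

Routing every lookup through such a helper would cover both CREATE TABLE and ALTER TABLE SET TBLPROPERTIES in one place.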
[jira] [Commented] (HIVE-13169) HiveServer2: Support delegation token based connection when using http transport
[ https://issues.apache.org/jira/browse/HIVE-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15172028#comment-15172028 ] Hive QA commented on HIVE-13169: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12790421/HIVE-13169.3.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 9768 tests executed *Failed tests:* {noformat} TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import org.apache.hive.jdbc.TestSSL.testSSLVersion org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForDBTokenStore {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7128/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7128/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7128/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12790421 - PreCommit-HIVE-TRUNK-Build > HiveServer2: Support delegation token based connection when using http > transport > > > Key: HIVE-13169 > URL: https://issues.apache.org/jira/browse/HIVE-13169 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Attachments: HIVE-13169.1.patch, HIVE-13169.2.patch, > HIVE-13169.3.patch, HIVE-13169.3.patch > > > HIVE-5155 introduced support for delegation token based connection. However, > it was intended for tcp transport mode. We need to have similar mechanisms > for http transport. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13146) OrcFile table property values are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-13146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15171932#comment-15171932 ] Yongzhi Chen commented on HIVE-13146: - The tests failures for patch2 are not related. > OrcFile table property values are case sensitive > > > Key: HIVE-13146 > URL: https://issues.apache.org/jira/browse/HIVE-13146 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 1.2.1 >Reporter: Andrew Sears >Assignee: Yongzhi Chen >Priority: Minor > Attachments: HIVE-13146.1.patch, HIVE-13146.2.patch > > > In Hive v1.2.1.2.3, with Tez , create an external table with compression > SNAPPY value marked as lower case. Table is created successfully. Insert > data into table fails with no enum constant error. > CREATE EXTERNAL TABLE mydb.mytable > (id int) > PARTITIONED BY (business_date date) > STORED AS ORC > LOCATION > '/data/mydb/mytable' > TBLPROPERTIES ( > 'orc.compress'='snappy'); > set hive.exec.dynamic.partition=true; > set hive.exec.dynamic.partition.mode=nonstrict; > INSERT OVERWRITE mydb.mytable PARTITION (business_date) > SELECT * from mydb.sourcetable; > Caused by: java.lang.IllegalArgumentException: No enum constant > org.apache.hadoop.hive.ql.io.orc.CompressionKind.snappy > at java.lang.Enum.valueOf(Enum.java:238) > at > org.apache.hadoop.hive.ql.io.orc.CompressionKind.valueOf(CompressionKind.java:25) > Constant SNAPPY needs to be uppercase in definition to fix. Case should be > agnostic or throw error on creation of table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-11484) Fix ObjectInspector for Char and VarChar
[ https://issues.apache.org/jira/browse/HIVE-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajat Khandelwal reassigned HIVE-11484: --- Assignee: Rajat Khandelwal (was: Deepak Barr) > Fix ObjectInspector for Char and VarChar > > > Key: HIVE-11484 > URL: https://issues.apache.org/jira/browse/HIVE-11484 > Project: Hive > Issue Type: Bug > Components: Serializers/Deserializers >Reporter: Amareshwari Sriramadasu >Assignee: Rajat Khandelwal > > The creation of HiveChar and Varchar is not happening through ObjectInspector. > Here is fix we pushed internally : > https://github.com/InMobi/hive/commit/fe95c7850e7130448209141155f28b25d3504216 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13169) HiveServer2: Support delegation token based connection when using http transport
[ https://issues.apache.org/jira/browse/HIVE-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15171919#comment-15171919 ] Vaibhav Gumashta commented on HIVE-13169: - [~thejas] Patch looks good. Just one minor comment on the pull request. > HiveServer2: Support delegation token based connection when using http > transport > > > Key: HIVE-13169 > URL: https://issues.apache.org/jira/browse/HIVE-13169 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Attachments: HIVE-13169.1.patch, HIVE-13169.2.patch, > HIVE-13169.3.patch, HIVE-13169.3.patch > > > HIVE-5155 introduced support for delegation token based connection. However, > it was intended for tcp transport mode. We need to have similar mechanisms > for http transport. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13145) Separate the output path of metrics file of HS2 and HMS
[ https://issues.apache.org/jira/browse/HIVE-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shinichi Yamashita updated HIVE-13145: -- Attachment: HIVE-13145.2.patch > Separate the output path of metrics file of HS2 and HMS > --- > > Key: HIVE-13145 > URL: https://issues.apache.org/jira/browse/HIVE-13145 > Project: Hive > Issue Type: Improvement > Components: HiveServer2, Metastore >Reporter: Shinichi Yamashita >Assignee: Shinichi Yamashita > Attachments: HIVE-13145.1.patch, HIVE-13145.2.patch > > > The output path of metrics file of HS2 and HMS can define by > {{hive.service.metrics.file.location}} property at present. > When it starts HS2 and HMS by the same server, both metrics is written in the > same file. And when confirming this file, it is difficult to judge which > metrics it is. > Therefore the output path of metrics file of HS2 and HMS is separated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13145) Separate the output path of metrics file of HS2 and HMS
[ https://issues.apache.org/jira/browse/HIVE-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15171917#comment-15171917 ] Shinichi Yamashita commented on HIVE-13145: --- [~leftylev] Thank you for your comment. I attach a patch file which is restricted to 100 characters per line. > Separate the output path of metrics file of HS2 and HMS > --- > > Key: HIVE-13145 > URL: https://issues.apache.org/jira/browse/HIVE-13145 > Project: Hive > Issue Type: Improvement > Components: HiveServer2, Metastore >Reporter: Shinichi Yamashita >Assignee: Shinichi Yamashita > Attachments: HIVE-13145.1.patch > > > The output path of metrics file of HS2 and HMS can define by > {{hive.service.metrics.file.location}} property at present. > When it starts HS2 and HMS by the same server, both metrics is written in the > same file. And when confirming this file, it is difficult to judge which > metrics it is. > Therefore the output path of metrics file of HS2 and HMS is separated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11483) Add encoding and decoding for query string config
[ https://issues.apache.org/jira/browse/HIVE-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajat Khandelwal updated HIVE-11483: Status: Patch Available (was: In Progress) > Add encoding and decoding for query string config > - > > Key: HIVE-11483 > URL: https://issues.apache.org/jira/browse/HIVE-11483 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Reporter: Amareshwari Sriramadasu >Assignee: Rajat Khandelwal > Attachments: HIVE-11483.01.patch > > > We have seen some queries in production where some of the literals passed in > the query have control characters, which result in exception when query > string is set in the job xml. > Proposing a solution to encode the query string in configuration and provide > getters decoded string. > Here is a commit in a forked repo : > https://github.com/InMobi/hive/commit/2faf5761191fa3103a0d779fde584d494ed75bf5 > Suggestions are welcome on the solution. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11483) Add encoding and decoding for query string config
[ https://issues.apache.org/jira/browse/HIVE-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajat Khandelwal updated HIVE-11483: Attachment: HIVE-11483.01.patch > Add encoding and decoding for query string config > - > > Key: HIVE-11483 > URL: https://issues.apache.org/jira/browse/HIVE-11483 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Reporter: Amareshwari Sriramadasu >Assignee: Rajat Khandelwal > Attachments: HIVE-11483.01.patch > > > We have seen some queries in production where some of the literals passed in > the query have control characters, which result in exception when query > string is set in the job xml. > Proposing a solution to encode the query string in configuration and provide > getters decoded string. > Here is a commit in a forked repo : > https://github.com/InMobi/hive/commit/2faf5761191fa3103a0d779fde584d494ed75bf5 > Suggestions are welcome on the solution. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11483) Add encoding and decoding for query string config
[ https://issues.apache.org/jira/browse/HIVE-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15171898#comment-15171898 ] Rajat Khandelwal commented on HIVE-11483: - Created https://reviews.apache.org/r/44172/ > Add encoding and decoding for query string config > - > > Key: HIVE-11483 > URL: https://issues.apache.org/jira/browse/HIVE-11483 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Reporter: Amareshwari Sriramadasu >Assignee: Rajat Khandelwal > > We have seen some queries in production where some of the literals passed in > the query have control characters, which result in exception when query > string is set in the job xml. > Proposing a solution to encode the query string in configuration and provide > getters decoded string. > Here is a commit in a forked repo : > https://github.com/InMobi/hive/commit/2faf5761191fa3103a0d779fde584d494ed75bf5 > Suggestions are welcome on the solution. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HIVE-11483) Add encoding and decoding for query string config
[ https://issues.apache.org/jira/browse/HIVE-11483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-11483 started by Rajat Khandelwal. --- > Add encoding and decoding for query string config > - > > Key: HIVE-11483 > URL: https://issues.apache.org/jira/browse/HIVE-11483 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Reporter: Amareshwari Sriramadasu >Assignee: Rajat Khandelwal > > We have seen some queries in production where some of the literals passed in > the query have control characters, which result in exception when query > string is set in the job xml. > Proposing a solution to encode the query string in configuration and provide > getters decoded string. > Here is a commit in a forked repo : > https://github.com/InMobi/hive/commit/2faf5761191fa3103a0d779fde584d494ed75bf5 > Suggestions are welcome on the solution. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
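The encode-on-set, decode-on-get idea in HIVE-11483 can be illustrated with Base64, which round-trips control characters safely through XML. The linked commit may use a different scheme, so treat this as an assumption-laden sketch rather than the actual patch.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Minimal sketch of the proposal: store the query string encoded so control
// characters in literals survive being written into job XML, and decode in
// the getter. Base64 here is an assumption for illustration.
public class QueryStringCodec {
    static String encode(String query) {
        return Base64.getEncoder().encodeToString(query.getBytes(StandardCharsets.UTF_8));
    }

    static String decode(String encoded) {
        return new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // A literal containing a control character that would break raw XML.
        String query = "SELECT '\u0001' FROM t";
        String stored = encode(query);
        System.out.println(decode(stored).equals(query)); // true
    }
}
```

Callers that read the raw config value would see the encoded form, which is why the getter has to do the decoding.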
[jira] [Commented] (HIVE-5370) format_number udf should take user specified format as argument
[ https://issues.apache.org/jira/browse/HIVE-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15171826#comment-15171826 ] Amareshwari Sriramadasu commented on HIVE-5370: --- Can anyone review this patch and provide feedback? The jira has been PATCH AVAILABLE for more than a year. > format_number udf should take user specified format as argument > -- > > Key: HIVE-5370 > URL: https://issues.apache.org/jira/browse/HIVE-5370 > Project: Hive > Issue Type: Improvement > Components: UDF >Reporter: Amareshwari Sriramadasu >Assignee: Amareshwari Sriramadasu >Priority: Minor > Attachments: D13185.1.patch, D13185.2.patch, HIVE-5370.patch, > HIVE-5370.patch > > > Currently, format_number udf formats the number to #,###,###.##, but it > should also take a user specified format as optional input. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
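The requested behavior, a user-supplied pattern with the current #,###,###.## as the default, maps naturally onto java.text.DecimalFormat. This is a sketch of the semantics only, not the attached patch; the fixed US locale is an assumption to keep the output deterministic.

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

// Sketch of format_number with an optional user-specified pattern.
public class FormatNumberSketch {
    static String formatNumber(double n, String pattern) {
        // A null pattern falls back to the UDF's current fixed behavior.
        if (pattern == null) pattern = "#,###,###.##";
        // Pin the locale so grouping/decimal separators are predictable.
        return new DecimalFormat(pattern, DecimalFormatSymbols.getInstance(Locale.US)).format(n);
    }

    public static void main(String[] args) {
        System.out.println(formatNumber(1234567.891, null));    // 1,234,567.89
        System.out.println(formatNumber(1234567.891, "0.000")); // 1234567.891
    }
}
```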
[jira] [Commented] (HIVE-13139) Unfold TOK_ALLCOLREF of source table/view at QB stage
[ https://issues.apache.org/jira/browse/HIVE-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15171822#comment-15171822 ] Hive QA commented on HIVE-13139: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12790417/HIVE-13139.02.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 120 failed/errored test(s), 9769 tests executed *Failed tests:* {noformat} TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join0 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join22 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_cross_product_check_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_lineage2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_subq_in org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_subq_in org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cluster org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_column_access_stats org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_view org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cross_product_check_1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cross_product_check_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ctas_colname org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cteViews org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynamic_rdd_cache 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_gby_star org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_innerjoin org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input26 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input41 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_input_part5 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join22 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lateral_view org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lateral_view_noalias org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lateral_view_onview org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lateral_view_outer org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lateral_view_ppd org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_limit_join_transpose org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lineage2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lineage3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapjoin1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapjoin_test_outer org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mergejoin org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_clusterby org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_join4 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_vc org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_query_properties org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_rand_partitionpruner2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_reduce_deduplicate_exclude_join org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_semijoin org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoin_mapjoin5 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_skewjoinopt9 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_special_character_in_tabnames_1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats1 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_unqual_corr_expr org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_table_access_keys_stats org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_json_tuple org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udtf_parse_url_tuple org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union10 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union18 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union19 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union27 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union4 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union6 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_unionDistinct_1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_between_columns
[jira] [Commented] (HIVE-11527) bypass HiveServer2 thrift interface for query results
[ https://issues.apache.org/jira/browse/HIVE-11527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15171794#comment-15171794 ] Takanobu Asanuma commented on HIVE-11527: - Hi [~sershe], [~jingzhao] Sorry for my late response. As I said in my last comment, since jdbc clients may not be able to resolve the HA namespace, HiveServer2 should resolve it with WebHdfsFileSystem and return the final FQDN to jdbc clients. But currently, WebHdfsFileSystem does not have such an API, so I want to implement a public API in WebHdfsFileSystem. What do you think about that? > bypass HiveServer2 thrift interface for query results > - > > Key: HIVE-11527 > URL: https://issues.apache.org/jira/browse/HIVE-11527 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Reporter: Sergey Shelukhin >Assignee: Takanobu Asanuma > Attachments: HIVE-11527.WIP.patch > > > Right now, HS2 reads query results and returns them to the caller via its > thrift API. > There should be an option for HS2 to return some pointer to the results (an HDFS > link?) and for the user to read the results directly off HDFS inside the > cluster, or via something like WebHDFS outside the cluster. > Review board link: https://reviews.apache.org/r/40867 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
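The pointer HIVE-11527 envisions handing back to clients outside the cluster would be a WebHDFS URL. The `/webhdfs/v1/<path>?op=OPEN` shape below is the documented WebHDFS REST endpoint; the hostname, port, and result path are placeholders for whatever resolved FQDN HiveServer2 would return (the comment above notes the HA namespace must first be resolved to a concrete host):

```python
def webhdfs_open_url(namenode_host: str, path: str, port: int = 50070) -> str:
    # Build a WebHDFS OPEN URL for a result file. Port 50070 is the
    # classic Hadoop 2.x NameNode HTTP port (9870 in Hadoop 3.x); the
    # host must be a concrete FQDN, not an HA nameservice ID, since
    # plain HTTP clients cannot resolve the latter.
    return f"http://{namenode_host}:{port}/webhdfs/v1{path}?op=OPEN"

# Hypothetical result file written by HS2 for a finished query:
url = webhdfs_open_url("nn1.example.com", "/tmp/hive-results/query123/000000_0")
print(url)
```

A JDBC client given such a URL could stream the result file over HTTP directly, bypassing the thrift fetch path entirely.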
[jira] [Updated] (HIVE-13179) Allow custom HiveConf to be passed to Authentication Providers
[ https://issues.apache.org/jira/browse/HIVE-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajat Khandelwal updated HIVE-13179: Attachment: HIVE-13179.1.patch Missed one line, attaching patch again. > Allow custom HiveConf to be passed to Authentication Providers > -- > > Key: HIVE-13179 > URL: https://issues.apache.org/jira/browse/HIVE-13179 > Project: Hive > Issue Type: Improvement >Reporter: Rajat Khandelwal >Assignee: Rajat Khandelwal > Attachments: HIVE-13179.1.patch, HIVE-13179.patch, HIVE-13179.patch > > > Right now if I want to create an ldap auth provider, I have to create a > hive-site.xml, set endpoints and other relevant properties there, then > instantiate `LdapAuthenticationProviderImpl`, since inside the constructor, a > new HiveConf is constructed. > A better and more reusable design would be to ask for the conf in the > constructor itself. That will allow an external user to create a HiveConf, > set all relevant properties and instantiate `LdapAuthenticationProviderImpl` > with that conf. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
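HIVE-13179's complaint is a standard dependency-injection problem: the provider builds its own HiveConf in the constructor, so callers cannot supply one. A language-agnostic Python sketch of the proposed shape, with hypothetical class and key names (the real change is to `LdapAuthenticationProviderImpl`'s Java constructor):

```python
class LdapAuthProvider:
    # Sketch of the proposed design: the constructor accepts a conf
    # instead of constructing a fresh one internally, so an external
    # caller can inject fully customized settings.
    def __init__(self, conf=None):
        # Fall back to defaults only when nothing is injected; this keeps
        # the old zero-argument behavior working for existing callers.
        self.conf = conf if conf is not None else {"ldap.url": "ldap://localhost"}

    def endpoint(self):
        return self.conf["ldap.url"]

# Caller-supplied conf, no hive-site.xml needed:
custom = LdapAuthProvider({"ldap.url": "ldaps://auth.example.com:636"})
print(custom.endpoint())
```

The default-argument fallback is what makes the change backward compatible: old call sites keep their behavior, while tests and embedding applications can now pass a purpose-built conf.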
[jira] [Commented] (HIVE-13179) Allow custom HiveConf to be passed to Authentication Providers
[ https://issues.apache.org/jira/browse/HIVE-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15171654#comment-15171654 ] Hive QA commented on HIVE-13179: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12790372/HIVE-13179.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 9768 tests executed *Failed tests:* {noformat} TestSparkCliDriver-groupby3_map.q-sample2.q-auto_join14.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-groupby_map_ppr_multi_distinct.q-table_access_keys_stats.q-groupby4_noskew.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-join_rc.q-insert1.q-vectorized_rcfile_columnar.q-and-12-more - did not produce a TEST-*.xml file TestSparkCliDriver-ppd_join4.q-join9.q-ppd_join3.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_uri_import org.apache.hive.jdbc.TestSSL.testSSLVersion org.apache.hive.service.auth.TestLdapAuthenticationProviderImpl.testLdapEmptyPassword {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7126/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7126/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7126/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 7 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12790372 - PreCommit-HIVE-TRUNK-Build > Allow custom HiveConf to be passed to Authentication Providers > -- > > Key: HIVE-13179 > URL: https://issues.apache.org/jira/browse/HIVE-13179 > Project: Hive > Issue Type: Improvement >Reporter: Rajat Khandelwal >Assignee: Rajat Khandelwal > Attachments: HIVE-13179.patch, HIVE-13179.patch > > > Right now if I want to create an ldap auth provider, I have to create a > hive-site.xml, set endpoints and other relevant properties there, then > instantiate `LdapAuthenticationProviderImpl`, since inside the constructor, a > new HiveConf is constructed. > A better and more reusable design would be to ask for the conf in the > constructor itself. That will allow an external user to create a HiveConf, > set all relevant properties and instantiate `LdapAuthenticationProviderImpl` > with that conf. -- This message was sent by Atlassian JIRA (v6.3.4#6332)