[jira] [Updated] (HIVE-7409) Add workaround for a deadlock issue of Class.getAnnotation()
[ https://issues.apache.org/jira/browse/HIVE-7409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi OZAWA updated HIVE-7409: - Attachment: stacktrace.txt Attached stack trace. Add workaround for a deadlock issue of Class.getAnnotation() - Key: HIVE-7409 URL: https://issues.apache.org/jira/browse/HIVE-7409 Project: Hive Issue Type: Bug Reporter: Tsuyoshi OZAWA Attachments: HIVE-7409.1.patch, stacktrace.txt [JDK-7122142|https://bugs.openjdk.java.net/browse/JDK-7122142] mentions that there is a race condition in getAnnotations. This problem can lead to deadlock. The fix will be merged into JDK 8, but Hive currently supports JDK 6/7. Therefore, we should add a workaround to avoid the issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
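A minimal sketch of one possible workaround (the class and method names below are hypothetical, not taken from the HIVE-7409 patch): route every getAnnotation() call through a single shared lock, so that two threads never enter the JDK's unsynchronized annotation-parsing code concurrently.

```java
import java.lang.annotation.Annotation;

// Hypothetical helper, not the actual HIVE-7409 patch: serializing all
// annotation lookups through one global lock sidesteps the JDK-7122142
// race in Class.getAnnotation() on JDK 6/7.
final class AnnotationUtils {
    private static final Object LOCK = new Object();

    static <A extends Annotation> A getAnnotation(Class<?> clazz, Class<A> annotationClass) {
        synchronized (LOCK) {
            return clazz.getAnnotation(annotationClass);
        }
    }
}
```

Callers would replace direct clazz.getAnnotation(...) calls with AnnotationUtils.getAnnotation(clazz, ...); the single lock trades a little contention for safety until the JDK fix is available.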
[jira] [Commented] (HIVE-7314) Wrong results of UDF when hive.cache.expr.evaluation is set
[ https://issues.apache.org/jira/browse/HIVE-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063221#comment-14063221 ] Lefty Leverenz commented on HIVE-7314: -- The bug is also documented in the UDF wiki, in an information box at the beginning: * [LanguageManual -- Operators and UDFs | https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF] Wrong results of UDF when hive.cache.expr.evaluation is set --- Key: HIVE-7314 URL: https://issues.apache.org/jira/browse/HIVE-7314 Project: Hive Issue Type: Bug Affects Versions: 0.12.0, 0.13.0, 0.13.1 Reporter: dima machlin Assignee: Navis Fix For: 0.14.0 Attachments: HIVE-7314.1.patch.txt It seems that the expression caching doesn't work when using a UDF inside another UDF or a Hive function. For example: tbl has one row: 'a','b'. The following query: {code:sql} select concat(custUDF(a),' ', custUDF(b)) from tbl; {code} returns 'a a'; it seems to cache custUDF(a) and reuse it for custUDF(b). The same query without the concat works fine. Replacing the concat with another custom UDF also returns 'a a'. -- This message was sent by Atlassian JIRA (v6.2#6252)
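As an illustration of the failure mode (a toy sketch, not Hive's actual GenericUDF evaluation code; both methods here are invented for the example): if a result cache keys only on the UDF being called, a second call with a different argument wrongly reuses the first result, whereas keying on the full expression (UDF plus arguments) behaves correctly.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy sketch, not Hive code: demonstrates why an expression cache must key
// on the whole expression, not just the UDF name.
final class ExprCache {
    private final Map<String, String> cache = new HashMap<>();

    // Buggy: cache key ignores the argument, so custUDF(b) reuses custUDF(a).
    String evalKeyedOnName(String udfName, String arg, Function<String, String> udf) {
        return cache.computeIfAbsent(udfName, k -> udf.apply(arg));
    }

    // Correct: cache key covers the full expression text.
    String evalKeyedOnExpr(String udfName, String arg, Function<String, String> udf) {
        return cache.computeIfAbsent(udfName + "(" + arg + ")", k -> udf.apply(arg));
    }
}
```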
[jira] [Commented] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063225#comment-14063225 ] Hive QA commented on HIVE-6988: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12655992/HIVE-6988.4.patch {color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 5734 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx org.apache.hadoop.hive.ql.exec.tez.TestTezTask.testBuildDag org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/808/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/808/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-808/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 4 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12655992 Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, HIVE-6988.4.patch Umbrella JIRA to track all Hive changes needed for tez-0.5.x compatibility. The tez-0.4.x to tez-0.5.x upgrade is going to break backward compatibility. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-4064) Handle db qualified names consistently across all HiveQL statements
[ https://issues.apache.org/jira/browse/HIVE-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063330#comment-14063330 ] Hive QA commented on HIVE-4064: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12655996/HIVE-4064.1.patch.txt {color:red}ERROR:{color} -1 due to 31 failed/errored test(s), 5719 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_partition_coltype org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_show_columns org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_show_create_table_db_table org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_show_tblproperties org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_temp_table_names org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_temp_table_precedence org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_stats_counter org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_alter_concatenate_indexed_table org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_alter_partition_coltype_invalidcolname org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_alter_view_failure6 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_merge_negative_1 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_merge_negative_2 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_show_columns3 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_show_tableproperties1 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_temp_table_index org.apache.hadoop.hive.ql.metadata.TestHive.testIndex 
org.apache.hadoop.hive.ql.metadata.TestHiveRemote.testIndex org.apache.hadoop.hive.ql.parse.authorization.TestHiveAuthorizationTaskFactory.testGrantGroupTable org.apache.hadoop.hive.ql.parse.authorization.TestHiveAuthorizationTaskFactory.testGrantRoleTable org.apache.hadoop.hive.ql.parse.authorization.TestHiveAuthorizationTaskFactory.testGrantUserTable org.apache.hadoop.hive.ql.parse.authorization.TestHiveAuthorizationTaskFactory.testRevokeGroupTable org.apache.hadoop.hive.ql.parse.authorization.TestHiveAuthorizationTaskFactory.testRevokeRoleTable org.apache.hadoop.hive.ql.parse.authorization.TestHiveAuthorizationTaskFactory.testRevokeUserTable org.apache.hadoop.hive.ql.parse.authorization.TestHiveAuthorizationTaskFactory.testShowGrantGroupOnTable org.apache.hadoop.hive.ql.parse.authorization.TestHiveAuthorizationTaskFactory.testShowGrantRoleOnTable org.apache.hadoop.hive.ql.parse.authorization.TestHiveAuthorizationTaskFactory.testShowGrantUserOnTable org.apache.hadoop.hive.ql.parse.authorization.TestPrivilegesV1.testPrivInGrant org.apache.hadoop.hive.ql.parse.authorization.TestPrivilegesV2.testPrivInGrant org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/809/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/809/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-809/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 31 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12655996 Handle db qualified names consistently across all HiveQL statements --- Key: HIVE-4064 URL: https://issues.apache.org/jira/browse/HIVE-4064 Project: Hive Issue Type: Bug Components: SQL Affects Versions: 0.10.0 Reporter: Shreepadma Venugopalan Assignee: Navis Attachments: HIVE-4064-1.patch, HIVE-4064.1.patch.txt Hive doesn't consistently handle db qualified names across all HiveQL statements. While some HiveQL statements such as SELECT support DB qualified names, others such as CREATE INDEX don't. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-5132) Can't access to hwi due to No Java compiler available
[ https://issues.apache.org/jira/browse/HIVE-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063336#comment-14063336 ] Shengjun Xin commented on HIVE-5132: We met the same issue; copying jasper-compiler-jdt.jar to $HIVE_HOME/lib resolves it. The error log is: {code} Unable to find a javac compiler; com.sun.tools.javac.Main is not on the classpath. Perhaps JAVA_HOME does not point to the JDK. It is currently set to /usr/java/jdk1.7.0_45/jre at org.apache.tools.ant.taskdefs.compilers.CompilerAdapterFactory.getCompiler(CompilerAdapterFactory.java:129) at org.apache.tools.ant.taskdefs.Javac.findSupportedFileExtensions(Javac.java:979) at org.apache.tools.ant.taskdefs.Javac.scanDir(Javac.java:956) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:927) at org.apache.jasper.compiler.AntCompiler.generateClass(AntCompiler.java:220) at org.apache.jasper.compiler.Compiler.compile(Compiler.java:298) at org.apache.jasper.compiler.Compiler.compile(Compiler.java:277) at org.apache.jasper.compiler.Compiler.compile(Compiler.java:265) at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:564) at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:299) at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:315) at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:265) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:327) at
org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126) at org.mortbay.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:503) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) {code} Can't access to hwi due to No Java compiler available --- Key: HIVE-5132 URL: https://issues.apache.org/jira/browse/HIVE-5132 Project: Hive Issue Type: Bug Affects Versions: 0.10.0, 0.11.0 Environment: JDK1.6, hadoop 2.0.4-alpha Reporter: Bing Li Assignee: Bing Li Priority: Critical Fix For: 0.13.0 Attachments: HIVE-5132-01.patch I want to use hwi to submit hive queries, but after starting hwi successfully, I can't open its web page.
I noticed that someone also met the same issue in hive-0.10. Reproduce steps: -- 1. start hwi: bin/hive --config $HIVE_CONF_DIR --service hwi 2. access http://hive_hwi_node:/hwi via a browser; got the following error message: HTTP ERROR 500 Problem accessing /hwi/. Reason: No Java compiler available Caused by: java.lang.IllegalStateException: No Java compiler available at
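The workaround reported in the comment above can be sketched as follows (the source path of the jar is illustrative, not from the thread; locate jasper-compiler-jdt.jar in your own distribution):

```shell
# Illustrative workaround for "No Java compiler available" in HWI:
# put an Eclipse JDT based JSP compiler on Hive's classpath, then restart HWI.
# /path/to/ is a placeholder; the jar location depends on your installation.
cp /path/to/jasper-compiler-jdt.jar "$HIVE_HOME/lib/"
bin/hive --config "$HIVE_CONF_DIR" --service hwi
```

With the JDT compiler on the classpath, Jasper no longer needs com.sun.tools.javac.Main from a full JDK, so running under a JRE works.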
[jira] [Commented] (HIVE-7409) Add workaround for a deadlock issue of Class.getAnnotation()
[ https://issues.apache.org/jira/browse/HIVE-7409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063375#comment-14063375 ] Tsuyoshi OZAWA commented on HIVE-7409: -- {quote} Just curious, have you seen such deadlock occurring in Hive? I can understand if your patch is for the sake of precaution {quote} Yes, we've seen the problem. Please see the attached stack trace. And your point is correct. Let me check the code again. Add workaround for a deadlock issue of Class.getAnnotation() - Key: HIVE-7409 URL: https://issues.apache.org/jira/browse/HIVE-7409 Project: Hive Issue Type: Bug Reporter: Tsuyoshi OZAWA Attachments: HIVE-7409.1.patch, stacktrace.txt [JDK-7122142|https://bugs.openjdk.java.net/browse/JDK-7122142] mentions that there is a race condition in getAnnotations. This problem can lead to deadlock. The fix will be merged into JDK 8, but Hive currently supports JDK 6/7. Therefore, we should add a workaround to avoid the issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7423) produce hive-exec-core.jar from ql module
[ https://issues.apache.org/jira/browse/HIVE-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063481#comment-14063481 ] Xuefu Zhang commented on HIVE-7423: --- By "other project", do you mean projects within Hive, or external projects? There shouldn't be any problem for external projects. For internal projects, you can add a Maven dependency as you do for external libraries, but using ${project.version}. I guess using a classifier is fine too. produce hive-exec-core.jar from ql module - Key: HIVE-7423 URL: https://issues.apache.org/jira/browse/HIVE-7423 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.13.1 Reporter: Eugene Koifman Assignee: Eugene Koifman Attachments: HIVE-7423.patch Currently the ql module produces hive-exec-$version.jar, which is an uber jar. It's also useful to have a thin jar, let's call it hive-exec-$version-core.jar, that only has classes from ql. -- This message was sent by Atlassian JIRA (v6.2#6252)
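For example, a downstream module could depend on the thin jar via a classifier. This snippet assumes the proposed artifact is published under the existing hive-exec coordinates with a `core` classifier; the exact coordinates depend on how HIVE-7423 lands.

```xml
<!-- Hypothetical coordinates: classifier name depends on the final patch. -->
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-exec</artifactId>
  <version>${project.version}</version>
  <classifier>core</classifier>
</dependency>
```

Internal modules would use ${project.version} as above; external projects would pin a released version instead.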
[jira] [Updated] (HIVE-6560) varchar and char types cannot be cast to binary
[ https://issues.apache.org/jira/browse/HIVE-6560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao updated HIVE-6560: --- Attachment: HIVE-6560.2.patch Attempting to address test failures due to an out-of-sync output file. varchar and char types cannot be cast to binary --- Key: HIVE-6560 URL: https://issues.apache.org/jira/browse/HIVE-6560 Project: Hive Issue Type: Bug Components: Types, UDF Affects Versions: 0.12.0, 0.13.0, 0.13.1 Reporter: Xuefu Zhang Assignee: Chao Attachments: HIVE-6560.1.patch, HIVE-6560.2.patch GenericUDFToBinary can convert string to binary. VARCHAR and CHAR are substitutable with string. Thus, GenericUDFToBinary should also be able to convert VARCHAR and CHAR to binary. However, {code} hive select binary(cast('abc' as varchar(5)) from decimal_udf limit 1; FAILED: ParseException line 1:40 missing ) at 'from' near 'EOF' hive select binary(cast('abc' as varchar(5))) from decimal_udf limit 1; FAILED: SemanticException Line 0:-1 Wrong arguments ''abc'': Only string or binary data can be cast into binary data types. {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 23387: HIVE-6806: Native avro support
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/#review47894 --- Overall looks good. I added a few comments regarding the Avro schema mapping. It would also be useful to update the documentation (https://cwiki.apache.org/confluence/display/Hive/AvroSerDe) to cover this feature once it is released, especially the type mappings table. serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java https://reviews.apache.org/r/23387/#comment84124 The doc string here could be the Hive column definition, which would be helpful for debugging. serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java https://reviews.apache.org/r/23387/#comment84121 VOID is missing - it maps to Schema.Type.NULL in Avro. Also, SHORT and BYTE could map to INT. And CHAR, VARCHAR could map to STRING. serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java https://reviews.apache.org/r/23387/#comment84123 Are optional fields supported properly? All schemas should probably be wrapped in an Avro null union to allow values to be null: private static Schema optional(Schema schema) { return Schema.createUnion(Arrays.asList(Schema.create(Schema.Type.NULL), schema)); } serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java https://reviews.apache.org/r/23387/#comment84119 This should be BINARY. BYTE can be mapped to Avro Schema.Type.INT serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java https://reviews.apache.org/r/23387/#comment84122 Throw an exception if the key type isn't string. serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java https://reviews.apache.org/r/23387/#comment84126 COLUMN_NAMES serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java https://reviews.apache.org/r/23387/#comment84125 Pass this as a parameter to getAvroSchemaString() - it's a bit odd to set it in tests then read it in getAvroSchemaString().
serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java https://reviews.apache.org/r/23387/#comment84127 Add a field for each Hive type to test they can all work in the context of a record. Also add a nested record. - Tom White On July 16, 2014, 3:35 a.m., Ashish Singh wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/ --- (Updated July 16, 2014, 3:35 a.m.) Review request for hive. Bugs: HIVE-6806 https://issues.apache.org/jira/browse/HIVE-6806 Repository: hive-git Description --- HIVE-6806: Native avro support Diffs - ql/src/java/org/apache/hadoop/hive/ql/io/AvroStorageFormatDescriptor.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 1bae0a8fee04049f90b16d813ff4c96707b349c8 ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor a23ff115512da5fe3167835a88d582c427585b8e ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java d53ebc65174d66bfeee25fd2891c69c78f9137ee ql/src/test/queries/clientpositive/avro_compression_enabled_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_schema_evolution_native.q PRE-CREATION ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_partitioned_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_schema_evolution_native.q.out PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 
1fe31e0034f8988d03a0c51a90904bb93e7cb157 serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java PRE-CREATION serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java PRE-CREATION Diff: https://reviews.apache.org/r/23387/diff/ Testing --- Added qTests and unit tests Thanks, Ashish Singh
[jira] [Commented] (HIVE-6584) Add HiveHBaseTableSnapshotInputFormat
[ https://issues.apache.org/jira/browse/HIVE-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063719#comment-14063719 ] Nick Dimiduk commented on HIVE-6584: Ouch. Most of these tests run/pass for me locally. Will investigate further. I'm also curious why the {{explain}} commands in {{hbase_handler_snapshot.q}} are not including the Input/OutputFormats. [~sushanth], [~ashutoshc] any ideas on this latter issue? Add HiveHBaseTableSnapshotInputFormat - Key: HIVE-6584 URL: https://issues.apache.org/jira/browse/HIVE-6584 Project: Hive Issue Type: Improvement Components: HBase Handler Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.14.0 Attachments: HIVE-6584.0.patch, HIVE-6584.1.patch, HIVE-6584.2.patch, HIVE-6584.3.patch, HIVE-6584.4.patch, HIVE-6584.5.patch, HIVE-6584.6.patch, HIVE-6584.7.patch, HIVE-6584.8.patch HBASE-8369 provided mapreduce support for reading from HBase table snapshots. This allows an MR job to consume a stable, read-only view of an HBase table directly off of HDFS. Bypassing the online region server API provides a nice performance boost for the full scan. HBASE-10642 is backporting that feature to 0.94/0.96 and also adding a {{mapred}} implementation. Once that's available, we should add an input format. A follow-on patch could work out how to integrate this functionality into the StorageHandler, similar to how HIVE-6473 integrates the HFileOutputFormat into existing table definitions. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6560) varchar and char types cannot be cast to binary
[ https://issues.apache.org/jira/browse/HIVE-6560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063735#comment-14063735 ] Hive QA commented on HIVE-6560: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12656059/HIVE-6560.2.patch {color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 5735 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_invalid_cast_to_binary_1 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_invalid_cast_to_binary_2 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_invalid_cast_to_binary_3 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_invalid_cast_to_binary_4 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_invalid_cast_to_binary_5 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_invalid_cast_to_binary_6 org.apache.hive.hcatalog.pig.TestOrcHCatLoader.testReadDataPrimitiveTypes {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/811/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/811/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-811/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 10 tests failed {noformat} This message is automatically 
generated. ATTACHMENT ID: 12656059 varchar and char types cannot be cast to binary --- Key: HIVE-6560 URL: https://issues.apache.org/jira/browse/HIVE-6560 Project: Hive Issue Type: Bug Components: Types, UDF Affects Versions: 0.12.0, 0.13.0, 0.13.1 Reporter: Xuefu Zhang Assignee: Chao Attachments: HIVE-6560.1.patch, HIVE-6560.2.patch GenericUDFToBinary can convert string to binary. VARCHAR and CHAR are substitutable with string. Thus, GenericUDFToBinary should also be able to convert VARCHAR and CHAR to binary. However, {code} hive select binary(cast('abc' as varchar(5)) from decimal_udf limit 1; FAILED: ParseException line 1:40 missing ) at 'from' near 'EOF' hive select binary(cast('abc' as varchar(5))) from decimal_udf limit 1; FAILED: SemanticException Line 0:-1 Wrong arguments ''abc'': Only string or binary data can be cast into binary data types. {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-5132) Can't access to hwi due to No Java compiler available
[ https://issues.apache.org/jira/browse/HIVE-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063773#comment-14063773 ] Lefty Leverenz commented on HIVE-5132: -- Should this be documented in the HWI wiki? * [Hive Web Interface -- Configuration | https://cwiki.apache.org/confluence/display/Hive/HiveWebInterface#HiveWebInterface-Configuration] Can't access to hwi due to No Java compiler available --- Key: HIVE-5132 URL: https://issues.apache.org/jira/browse/HIVE-5132 Project: Hive Issue Type: Bug Affects Versions: 0.10.0, 0.11.0 Environment: JDK1.6, hadoop 2.0.4-alpha Reporter: Bing Li Assignee: Bing Li Priority: Critical Fix For: 0.13.0 Attachments: HIVE-5132-01.patch I want to use hwi to submit hive queries, but after starting hwi successfully, I can't open its web page. I noticed that someone also met the same issue in hive-0.10. Reproduce steps: -- 1. start hwi: bin/hive --config $HIVE_CONF_DIR --service hwi 2. access http://hive_hwi_node:/hwi via a browser; got the following error message: HTTP ERROR 500 Problem accessing /hwi/.
Reason: No Java compiler available Caused by: java.lang.IllegalStateException: No Java compiler available at org.apache.jasper.JspCompilationContext.createCompiler(JspCompilationContext.java:225) at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:560) at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:299) at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:315) at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:265) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:327) at org.mortbay.jetty.servlet.Dispatcher.forward(Dispatcher.java:126) at org.mortbay.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:503) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49) at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7428) OrcSplit fails to account for columnar projections in its size estimates
Gopal V created HIVE-7428: - Summary: OrcSplit fails to account for columnar projections in its size estimates Key: HIVE-7428 URL: https://issues.apache.org/jira/browse/HIVE-7428 Project: Hive Issue Type: Bug Reporter: Gopal V Currently, ORC generates splits based on stripe offset + stripe length. This means that the splits for all columnar projections are exactly the same size, despite reading the footer, which gives the estimated sizes for each column. This is a holdover from FileSplit, which uses getLen() as the I/O cost of reading a file in a map task. RCFile didn't have a footer with column statistics information, but for ORC this would be extremely useful for reducing task overheads when processing extremely wide tables with highly selective column projections. -- This message was sent by Atlassian JIRA (v6.2#6252)
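The idea in the report above can be sketched as follows (a toy illustration, not ORC's actual split-generation code; the class and the column-size map are invented for the example): estimate a split's I/O cost by summing the footer's per-column sizes for only the projected columns, instead of charging the full stripe length.

```java
import java.util.List;
import java.util.Map;

// Toy sketch, not ORC code: with per-column sizes from the file footer,
// a projection of a few columns out of a wide table yields a proportionally
// smaller estimate than the raw stripe length would.
final class SplitSizeEstimator {
    static long projectedSize(Map<String, Long> columnSizes, List<String> projectedColumns) {
        long total = 0;
        for (String col : projectedColumns) {
            // Columns absent from the stats contribute nothing.
            total += columnSizes.getOrDefault(col, 0L);
        }
        return total;
    }
}
```

For a stripe where column "a" holds 100 bytes and "b" holds 900, projecting only "a" would be costed at 100 rather than the 1000-byte stripe length.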
Re: Review Request 23387: HIVE-6806: Native avro support
On July 16, 2014, 4:45 p.m., Tom White wrote: serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java, line 97 https://reviews.apache.org/r/23387/diff/7/?file=633382#file633382line97 Are optional fields supported properly? All schemas should probably be wrapped in an Avro null union to allow values to be null: private static Schema optional(Schema schema) { return Schema.createUnion(Arrays.asList(Schema.create(Schema.Type.NULL), schema)); } Tom, could you elaborate on "Are optional fields supported properly?" I am in the process of addressing your review comments, which are good. Thanks for the review. - Ashish --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/#review47894 --- On July 16, 2014, 3:35 a.m., Ashish Singh wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23387/ --- (Updated July 16, 2014, 3:35 a.m.) Review request for hive. Bugs: HIVE-6806 https://issues.apache.org/jira/browse/HIVE-6806 Repository: hive-git Description --- HIVE-6806: Native avro support Diffs - ql/src/java/org/apache/hadoop/hive/ql/io/AvroStorageFormatDescriptor.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/io/IOConstants.java 1bae0a8fee04049f90b16d813ff4c96707b349c8 ql/src/main/resources/META-INF/services/org.apache.hadoop.hive.ql.io.StorageFormatDescriptor a23ff115512da5fe3167835a88d582c427585b8e ql/src/test/org/apache/hadoop/hive/ql/io/TestStorageFormatDescriptor.java d53ebc65174d66bfeee25fd2891c69c78f9137ee ql/src/test/queries/clientpositive/avro_compression_enabled_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_decimal_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_joins_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_partitioned_native.q PRE-CREATION ql/src/test/queries/clientpositive/avro_schema_evolution_native.q PRE-CREATION
ql/src/test/results/clientpositive/avro_compression_enabled_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_decimal_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_joins_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_partitioned_native.q.out PRE-CREATION ql/src/test/results/clientpositive/avro_schema_evolution_native.q.out PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java 1fe31e0034f8988d03a0c51a90904bb93e7cb157 serde/src/java/org/apache/hadoop/hive/serde2/avro/TypeInfoToSchema.java PRE-CREATION serde/src/test/org/apache/hadoop/hive/serde2/avro/TestTypeInfoToSchema.java PRE-CREATION Diff: https://reviews.apache.org/r/23387/diff/ Testing --- Added qTests and unit tests Thanks, Ashish Singh
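Tom White's point about nullable unions can be sketched without the Avro jars: the snippet below models schemas as their JSON text purely to show the ["null", ...] union shape Avro uses for optional fields. The real TypeInfoToSchema change would build org.apache.avro.Schema objects via Schema.createUnion, as in the quoted snippet; the class and method names here are illustrative only.

```java
// Sketch only: models Avro schemas by their JSON text to show the
// ["null", ...] union that makes a field optional. The actual patch
// would build org.apache.avro.Schema objects via Schema.createUnion.
public class OptionalSchemaSketch {

    // Wrap a schema's JSON in a union with "null" so values may be null.
    static String optional(String schemaJson) {
        return "[\"null\"," + schemaJson + "]";
    }

    public static void main(String[] args) {
        // A Hive STRING column mapped to a nullable Avro field:
        System.out.println(optional("\"string\""));  // ["null","string"]
    }
}
```

With this wrapping, a null value is legal for the field under Avro's union rules instead of failing schema validation.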
[jira] [Updated] (HIVE-6560) varchar and char types cannot be cast to binary
[ https://issues.apache.org/jira/browse/HIVE-6560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao updated HIVE-6560: --- Attachment: HIVE-6560.3.patch Forgot to address invalid_cast_to_binary_N.q. Now fixed. varchar and char types cannot be cast to binary --- Key: HIVE-6560 URL: https://issues.apache.org/jira/browse/HIVE-6560 Project: Hive Issue Type: Bug Components: Types, UDF Affects Versions: 0.12.0, 0.13.0, 0.13.1 Reporter: Xuefu Zhang Assignee: Chao Attachments: HIVE-6560.1.patch, HIVE-6560.2.patch, HIVE-6560.3.patch GenericUDFToBinary can convert string to binary. VARCHAR and CHAR are substitutable with string. Thus, GenericUDFToBinary should also be able to convert VARCHAR and CHAR to binary. However, {code} hive> select binary(cast('abc' as varchar(5)) from decimal_udf limit 1; FAILED: ParseException line 1:40 missing ) at 'from' near 'EOF' hive> select binary(cast('abc' as varchar(5))) from decimal_udf limit 1; FAILED: SemanticException Line 0:-1 Wrong arguments ''abc'': Only string or binary data can be cast into binary data types. {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7416) provide context information to authorization checkPrivileges api call
[ https://issues.apache.org/jira/browse/HIVE-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-7416: Status: Patch Available (was: Open) provide context information to authorization checkPrivileges api call - Key: HIVE-7416 URL: https://issues.apache.org/jira/browse/HIVE-7416 Project: Hive Issue Type: New Feature Components: Authorization, SQLStandardAuthorization Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7416.1.patch Context information such as request ip address, unique string for session, and original sql command string are useful for audit logging from the authorization implementations. Authorization implementations can also choose to log authorization success along with information about what policies matched and the context information. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7423) produce hive-exec-core.jar from ql module
[ https://issues.apache.org/jira/browse/HIVE-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063893#comment-14063893 ] Daniel Dai commented on HIVE-7423: -- "Other projects" means Pig, Cascading, etc. I don't mind using either a classifier or a new assembly target, but the classifier seems less intrusive to the Hive codebase. +1 for the patch. produce hive-exec-core.jar from ql module - Key: HIVE-7423 URL: https://issues.apache.org/jira/browse/HIVE-7423 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.13.1 Reporter: Eugene Koifman Assignee: Eugene Koifman Attachments: HIVE-7423.patch currently ql module produces hive-exec-$version.jar which is an uber jar. It's also useful to have a thin jar, let's call it hive-exec-$version-core.jar, that only has classes from ql. -- This message was sent by Atlassian JIRA (v6.2#6252)
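For reference, the classifier approach Daniel prefers is typically wired up as a second execution of the maven-jar-plugin. The fragment below is a hypothetical sketch of what such a ql/pom.xml change could look like, not the actual HIVE-7423 patch; the execution id and classifier name are illustrative.

```xml
<!-- Hypothetical sketch: emit a thin hive-exec-${version}-core.jar containing
     only ql classes, alongside the existing uber jar. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <id>core-jar</id>
      <phase>package</phase>
      <goals>
        <goal>jar</goal>
      </goals>
      <configuration>
        <classifier>core</classifier>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Downstream projects such as Pig could then depend on the thin jar by adding <classifier>core</classifier> to their hive-exec dependency declaration.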
[jira] [Updated] (HIVE-6305) test use of quoted identifiers in user/role names
[ https://issues.apache.org/jira/browse/HIVE-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-6305: Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Patch committed to trunk. Thanks for the contribution Jason! test use of quoted identifiers in user/role names - Key: HIVE-6305 URL: https://issues.apache.org/jira/browse/HIVE-6305 Project: Hive Issue Type: Bug Components: Authorization Reporter: Thejas M Nair Assignee: Jason Dere Fix For: 0.14.0 Attachments: HIVE-6305.1.patch Tests need to be added to verify that quoted identifiers can be used with user and role names. For example - {code} grant all on x to user `user-qa`; show grant user `user-qa` on table x; {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7342) support hiveserver2,metastore specific config files
[ https://issues.apache.org/jira/browse/HIVE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063969#comment-14063969 ] Prasad Mujumdar commented on HIVE-7342: --- [~thejas] The patch looks fine to me. Just wondering if it would make sense to further split the metastore config into client (or base) and server. There are common configs like setugi, enableSasl, etc. that need to be in sync on both client and server. If those are available in a common file, it will be less prone to incompatible configs. The server will load both base and server specific configs, the client will only load the base config. support hiveserver2,metastore specific config files --- Key: HIVE-7342 URL: https://issues.apache.org/jira/browse/HIVE-7342 Project: Hive Issue Type: Bug Components: Configuration, HiveServer2, Metastore Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7342.1.patch, HIVE-7342.2.patch There is currently a single configuration file for all components in hive. ie, components such as hive cli, hiveserver2 and metastore all read from the same hive-site.xml. It will be useful to have a server specific hive-site.xml, so that you can have some different configuration value set for a server. For example, you might want to enabled authorization checks for hiveserver2, while disabling the checks for hive cli. The workaround today is to add any component specific configuration as a commandline (-hiveconf) argument. Using server specific config files (eg hiveserver2-site.xml, hivemetastore-site.xml) that override the entries in hive-site.xml will make the configuration much more easy to manage. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7342) support hiveserver2,metastore specific config files
[ https://issues.apache.org/jira/browse/HIVE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063978#comment-14063978 ] Thejas M Nair commented on HIVE-7342: - bq. The server will load both base and server specific configs, the client will only load the base config. hive-site.xml is the base config file. It gets loaded by clients and servers. So it would be the right place for such config parameters. support hiveserver2,metastore specific config files --- Key: HIVE-7342 URL: https://issues.apache.org/jira/browse/HIVE-7342 Project: Hive Issue Type: Bug Components: Configuration, HiveServer2, Metastore Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7342.1.patch, HIVE-7342.2.patch There is currently a single configuration file for all components in hive. ie, components such as hive cli, hiveserver2 and metastore all read from the same hive-site.xml. It will be useful to have a server specific hive-site.xml, so that you can have some different configuration value set for a server. For example, you might want to enabled authorization checks for hiveserver2, while disabling the checks for hive cli. The workaround today is to add any component specific configuration as a commandline (-hiveconf) argument. Using server specific config files (eg hiveserver2-site.xml, hivemetastore-site.xml) that override the entries in hive-site.xml will make the configuration much more easy to manage. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7342) support hiveserver2,metastore specific config files
[ https://issues.apache.org/jira/browse/HIVE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-7342: Release Note: Adds support for server specific config files. HiveMetastore server reads hive-site.xml as well as hivemetastore-site.xml configuration files that are available in the $HIVE_CONF_DIR or in the classpath. If metastore is being used in embedded mode (ie hive.metastore.uris is not set or empty) in hive commandline or hiveserver2, the hivemetastore-site.xml gets loaded by the parent process as well. The value of hive.metastore.uris is examined to determine this, and the value should be set appropriately in hive-site.xml . Certain metastore configuration parameters like hive.metastore.sasl.enabled, hive.metastore.kerberos.principal, hive.metastore.execute.setugi, hive.metastore.thrift.framed.transport.enabled are used by the metastore client as well as server. For such common parameters it is better to set the values in hive-site.xml, that will help in keeping them consistent. HiveServer2 reads hive-site.xml as well as hiveserver2-site.xml that are available in the $HIVE_CONF_DIR or in the classpath. If hiveserver2 is using metastore in embedded mode, hivemetastore-site.xml also is loaded. The order of precedence of the config files is as follows (later one has higher precedence) - hive-site.xml - hivemetastore-site.xml - hiveserver2-site.xml - '-hiveconf' commandline parameters was: Adds support for server specific config files. HiveMetastore server reads hive-site.xml as well as hivemetastore-site.xml configuration files that are available in the $HIVE_CONF_DIR or in the classpath. If metastore is being used in embedded mode (ie hive.metastore.uris is not set or empty) in hive commandline or hiveserver2, the hivemetastore-site.xml gets loaded by the parent process as well. The value of hive.metastore.uris is examined to determine this, and the value should be set appropriately in hive-site.xml . 
HiveServer2 reads hive-site.xml as well as hiveserver2-site.xml that are available in the $HIVE_CONF_DIR or in the classpath. If hiveserver2 is using metastore in embedded mode, hivemetastore-site.xml also is loaded. The order of precedence of the config files is as follows (later one has higher precedence) - hive-site.xml - hivemetastore-site.xml - hiveserver2-site.xml - '-hiveconf' commandline parameters support hiveserver2,metastore specific config files --- Key: HIVE-7342 URL: https://issues.apache.org/jira/browse/HIVE-7342 Project: Hive Issue Type: Bug Components: Configuration, HiveServer2, Metastore Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7342.1.patch, HIVE-7342.2.patch There is currently a single configuration file for all components in hive. ie, components such as hive cli, hiveserver2 and metastore all read from the same hive-site.xml. It will be useful to have a server specific hive-site.xml, so that you can have some different configuration value set for a server. For example, you might want to enabled authorization checks for hiveserver2, while disabling the checks for hive cli. The workaround today is to add any component specific configuration as a commandline (-hiveconf) argument. Using server specific config files (eg hiveserver2-site.xml, hivemetastore-site.xml) that override the entries in hive-site.xml will make the configuration much more easy to manage. -- This message was sent by Atlassian JIRA (v6.2#6252)
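The precedence list in the release note can be illustrated with a toy merge in which later-loaded sources override earlier ones. Hive does the real loading through Hadoop's Configuration; the plain maps and the key name below are illustrative only.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the precedence rule in the release note above: config sources
// loaded later override earlier ones. Hive relies on Hadoop's Configuration
// for the real loading; plain maps are used here purely as an illustration.
public class ConfigPrecedenceSketch {

    @SafeVarargs
    static Map<String, String> load(Map<String, String>... sources) {
        Map<String, String> merged = new LinkedHashMap<>();
        for (Map<String, String> source : sources) {
            merged.putAll(source);  // a later source wins on conflicting keys
        }
        return merged;
    }

    public static void main(String[] args) {
        // hive-site.xml < hivemetastore-site.xml < hiveserver2-site.xml
        Map<String, String> merged = load(
                Map.of("hive.example.key", "from-hive-site"),
                Map.of("hive.example.key", "from-hivemetastore-site"),
                Map.of("hive.example.key", "from-hiveserver2-site"));
        System.out.println(merged.get("hive.example.key"));  // from-hiveserver2-site
    }
}
```

A '-hiveconf' command-line parameter would simply be one more source merged in last.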
[jira] [Commented] (HIVE-7342) support hiveserver2,metastore specific config files
[ https://issues.apache.org/jira/browse/HIVE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14063986#comment-14063986 ] Thejas M Nair commented on HIVE-7342: - [~prasadm] Added a note about that in release note section, so that it can be included in documentation as well. support hiveserver2,metastore specific config files --- Key: HIVE-7342 URL: https://issues.apache.org/jira/browse/HIVE-7342 Project: Hive Issue Type: Bug Components: Configuration, HiveServer2, Metastore Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7342.1.patch, HIVE-7342.2.patch There is currently a single configuration file for all components in hive. ie, components such as hive cli, hiveserver2 and metastore all read from the same hive-site.xml. It will be useful to have a server specific hive-site.xml, so that you can have some different configuration value set for a server. For example, you might want to enabled authorization checks for hiveserver2, while disabling the checks for hive cli. The workaround today is to add any component specific configuration as a commandline (-hiveconf) argument. Using server specific config files (eg hiveserver2-site.xml, hivemetastore-site.xml) that override the entries in hive-site.xml will make the configuration much more easy to manage. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6037) Synchronize HiveConf with hive-default.xml.template and support show conf
[ https://issues.apache.org/jira/browse/HIVE-6037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063993#comment-14063993 ] Thejas M Nair commented on HIVE-6037: - bq. I wonder if we shouldn't put generation of the template file in a profile and make it part of the release process to ensure it's up to date. I think having a hive-default.xml in trunk that is in sync with HiveConf.java is useful. We can resort to the profile approach as a workaround if we keep running into issues of spurious updates to it. Synchronize HiveConf with hive-default.xml.template and support show conf - Key: HIVE-6037 URL: https://issues.apache.org/jira/browse/HIVE-6037 Project: Hive Issue Type: Improvement Components: Configuration Reporter: Navis Assignee: Navis Priority: Minor Labels: TODOC14 Fix For: 0.14.0 Attachments: CHIVE-6037.3.patch.txt, HIVE-6037-0.13.0, HIVE-6037.1.patch.txt, HIVE-6037.10.patch.txt, HIVE-6037.11.patch.txt, HIVE-6037.12.patch.txt, HIVE-6037.14.patch.txt, HIVE-6037.15.patch.txt, HIVE-6037.16.patch.txt, HIVE-6037.17.patch, HIVE-6037.18.patch.txt, HIVE-6037.19.patch.txt, HIVE-6037.19.patch.txt, HIVE-6037.2.patch.txt, HIVE-6037.20.patch.txt, HIVE-6037.4.patch.txt, HIVE-6037.5.patch.txt, HIVE-6037.6.patch.txt, HIVE-6037.7.patch.txt, HIVE-6037.8.patch.txt, HIVE-6037.9.patch.txt, HIVE-6037.patch see HIVE-5879 -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-6988: - Status: Open (was: Patch Available) Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, HIVE-6988.4.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x -> tez-0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-6988: - Attachment: HIVE-6988.4.patch Updated to fix the failing tez dag test. Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, HIVE-6988.4.patch, HIVE-6988.5.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x -> tez-0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-6988: - Attachment: HIVE-6988.5.patch Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, HIVE-6988.4.patch, HIVE-6988.5.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x -> tez-0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-6988: - Attachment: (was: HIVE-6988.5.patch) Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, HIVE-6988.4.patch, HIVE-6988.5.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x -> tez-0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-6988: - Status: Patch Available (was: Open) Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, HIVE-6988.4.patch, HIVE-6988.5.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x -> tez-0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-6988: - Attachment: (was: HIVE-6988.4.patch) Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, HIVE-6988.4.patch, HIVE-6988.5.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x -> tez-0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-6988: - Attachment: HIVE-6988.5.patch Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, HIVE-6988.4.patch, HIVE-6988.5.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x -> tez-0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-6988: - Status: Patch Available (was: Open) Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, HIVE-6988.4.patch, HIVE-6988.5.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x -> tez-0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-6988: - Status: Open (was: Patch Available) Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, HIVE-6988.4.patch, HIVE-6988.5.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x -> tez-0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7429) Set replication for archive called before file exists
Daniel Weeks created HIVE-7429: -- Summary: Set replication for archive called before file exists Key: HIVE-7429 URL: https://issues.apache.org/jira/browse/HIVE-7429 Project: Hive Issue Type: Bug Affects Versions: 0.13.1, 0.13.0, 0.12.0, 0.11.0, 0.14.0 Reporter: Daniel Weeks Priority: Critical The call to set replication is called prior to uploading the archive file to hdfs, which does not throw an error, but the replication never gets set. This has a significant impact on large jobs (especially hash joins) due to too many tasks hitting the data nodes. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Weeks updated HIVE-7429: --- Assignee: Daniel Weeks Affects Version/s: (was: 0.14.0) Status: Patch Available (was: Open) Attached patch fixes the problem. Set replication for archive called before file exists - Key: HIVE-7429 URL: https://issues.apache.org/jira/browse/HIVE-7429 Project: Hive Issue Type: Bug Affects Versions: 0.13.1, 0.13.0, 0.12.0, 0.11.0 Reporter: Daniel Weeks Assignee: Daniel Weeks Priority: Critical Attachments: HIVE-7429.1.patch The call to set replication is called prior to uploading the archive file to hdfs, which does not throw an error, but the replication never gets set. This has a significant impact on large jobs (especially hash joins) due to too many tasks hitting the data nodes. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Weeks updated HIVE-7429: --- Attachment: HIVE-7429.1.patch Fixes replication of archive file on hdfs. Set replication for archive called before file exists - Key: HIVE-7429 URL: https://issues.apache.org/jira/browse/HIVE-7429 Project: Hive Issue Type: Bug Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.13.1 Reporter: Daniel Weeks Assignee: Daniel Weeks Priority: Critical Attachments: HIVE-7429.1.patch The call to set replication is called prior to uploading the archive file to hdfs, which does not throw an error, but the replication never gets set. This has a significant impact on large jobs (especially hash joins) due to too many tasks hitting the data nodes. -- This message was sent by Atlassian JIRA (v6.2#6252)
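The ordering problem described in HIVE-7429 can be sketched with a toy stand-in for the filesystem, in which a replication request for a path that does not yet exist is silently dropped (matching the reported "does not throw an error" behavior). The class below is hypothetical and is not the HDFS FileSystem API; it only illustrates why the set-replication call must follow the upload, as the patch does.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for the filesystem described in the bug report: requesting a
// replication factor on a path that does not exist yet is silently dropped
// (no exception), so the request must come after the upload. Hypothetical
// illustration only -- this is not the HDFS FileSystem API.
public class ReplicationOrderSketch {
    private final Map<String, Integer> replication = new HashMap<>();

    boolean setReplication(String path, int factor) {
        if (!replication.containsKey(path)) {
            return false;  // silently ignored: no error, the setting is lost
        }
        replication.put(path, factor);
        return true;
    }

    void upload(String path) {
        replication.put(path, 3);  // file created with a default factor of 3
    }

    // Buggy order from the report: set replication, then upload.
    static int buggyOrder() {
        ReplicationOrderSketch fs = new ReplicationOrderSketch();
        fs.setReplication("/tmp/archive.zip", 10);
        fs.upload("/tmp/archive.zip");
        return fs.replication.get("/tmp/archive.zip");  // still 3
    }

    // Fixed order: upload first, then raise replication.
    static int fixedOrder() {
        ReplicationOrderSketch fs = new ReplicationOrderSketch();
        fs.upload("/tmp/archive.zip");
        fs.setReplication("/tmp/archive.zip", 10);
        return fs.replication.get("/tmp/archive.zip");  // 10
    }

    public static void main(String[] args) {
        System.out.println(buggyOrder() + " vs " + fixedOrder());  // 3 vs 10
    }
}
```

With the low default factor left in place, many map tasks read the archive from only a few datanodes, which is the load problem the report describes for large hash joins.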
[jira] [Created] (HIVE-7430) Implement SMB join in tez
Vikram Dixit K created HIVE-7430: Summary: Implement SMB join in tez Key: HIVE-7430 URL: https://issues.apache.org/jira/browse/HIVE-7430 Project: Hive Issue Type: Bug Components: Tez Affects Versions: 0.14.0 Reporter: Vikram Dixit K Assignee: Vikram Dixit K We need to enable SMB joins in hive-tez. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7430) Implement SMB join in tez
[ https://issues.apache.org/jira/browse/HIVE-7430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-7430: - Issue Type: New Feature (was: Bug) Implement SMB join in tez - Key: HIVE-7430 URL: https://issues.apache.org/jira/browse/HIVE-7430 Project: Hive Issue Type: New Feature Components: Tez Affects Versions: 0.14.0 Reporter: Vikram Dixit K Assignee: Vikram Dixit K We need to enable SMB joins in hive-tez. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7412) column stats collection throws exception if all values for a column is null
[ https://issues.apache.org/jira/browse/HIVE-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7412: --- Status: Open (was: Patch Available) column stats collection throws exception if all values for a column is null --- Key: HIVE-7412 URL: https://issues.apache.org/jira/browse/HIVE-7412 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.13.1, 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-7412.1.patch, HIVE-7412.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7412) column stats collection throws exception if all values for a column is null
[ https://issues.apache.org/jira/browse/HIVE-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7412: --- Attachment: HIVE-7412.1.patch Updated per feedback. column stats collection throws exception if all values for a column is null --- Key: HIVE-7412 URL: https://issues.apache.org/jira/browse/HIVE-7412 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.13.0, 0.13.1 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-7412.1.patch, HIVE-7412.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7412) column stats collection throws exception if all values for a column is null
[ https://issues.apache.org/jira/browse/HIVE-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7412: --- Status: Patch Available (was: Open) column stats collection throws exception if all values for a column is null --- Key: HIVE-7412 URL: https://issues.apache.org/jira/browse/HIVE-7412 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.13.1, 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-7412.1.patch, HIVE-7412.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7342) support hiveserver2,metastore specific config files
[ https://issues.apache.org/jira/browse/HIVE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14064062#comment-14064062 ] Prasad Mujumdar commented on HIVE-7342: --- I guess treating hive-site as the base file should be sufficient. It's unlikely that you will have a variety of metastore setups (embedded and remote, or secure and unsecure) in a single deployment. Thanks for updating the release notes! +1 support hiveserver2,metastore specific config files --- Key: HIVE-7342 URL: https://issues.apache.org/jira/browse/HIVE-7342 Project: Hive Issue Type: Bug Components: Configuration, HiveServer2, Metastore Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7342.1.patch, HIVE-7342.2.patch There is currently a single configuration file for all components in hive. ie, components such as hive cli, hiveserver2 and metastore all read from the same hive-site.xml. It will be useful to have a server specific hive-site.xml, so that you can have some different configuration value set for a server. For example, you might want to enabled authorization checks for hiveserver2, while disabling the checks for hive cli. The workaround today is to add any component specific configuration as a commandline (-hiveconf) argument. Using server specific config files (eg hiveserver2-site.xml, hivemetastore-site.xml) that override the entries in hive-site.xml will make the configuration much more easy to manage. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7342) support hiveserver2,metastore specific config files
[ https://issues.apache.org/jira/browse/HIVE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-7342: Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Patch committed to trunk. Thanks for the reviews Jason, Sushanth and Prasad! support hiveserver2,metastore specific config files --- Key: HIVE-7342 URL: https://issues.apache.org/jira/browse/HIVE-7342 Project: Hive Issue Type: Bug Components: Configuration, HiveServer2, Metastore Reporter: Thejas M Nair Assignee: Thejas M Nair Fix For: 0.14.0 Attachments: HIVE-7342.1.patch, HIVE-7342.2.patch There is currently a single configuration file for all components in hive. ie, components such as hive cli, hiveserver2 and metastore all read from the same hive-site.xml. It will be useful to have a server specific hive-site.xml, so that you can have some different configuration value set for a server. For example, you might want to enabled authorization checks for hiveserver2, while disabling the checks for hive cli. The workaround today is to add any component specific configuration as a commandline (-hiveconf) argument. Using server specific config files (eg hiveserver2-site.xml, hivemetastore-site.xml) that override the entries in hive-site.xml will make the configuration much more easy to manage. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6560) varchar and char types cannot be cast to binary
[ https://issues.apache.org/jira/browse/HIVE-6560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064116#comment-14064116 ] Hive QA commented on HIVE-6560: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12656086/HIVE-6560.3.patch {color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 5720 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx org.apache.hadoop.hive.metastore.txn.TestCompactionTxnHandler.testRevokeTimedOutWorkers org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/812/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/812/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-812/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 4 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12656086 varchar and char types cannot be cast to binary --- Key: HIVE-6560 URL: https://issues.apache.org/jira/browse/HIVE-6560 Project: Hive Issue Type: Bug Components: Types, UDF Affects Versions: 0.12.0, 0.13.0, 0.13.1 Reporter: Xuefu Zhang Assignee: Chao Attachments: HIVE-6560.1.patch, HIVE-6560.2.patch, HIVE-6560.3.patch GenericUDFToBinary can convert string to binary. VARCHAR and CHAR are substitutable with string. Thus, GenericUDFToBinary should also be able to convert VARCHAR and CHAR to binary. 
However,
{code}
hive> select binary(cast('abc' as varchar(5)) from decimal_udf limit 1;
FAILED: ParseException line 1:40 missing ) at 'from' near '<EOF>'
hive> select binary(cast('abc' as varchar(5))) from decimal_udf limit 1;
FAILED: SemanticException Line 0:-1 Wrong arguments ''abc'': Only string or binary data can be cast into binary data types.
{code}
-- This message was sent by Atlassian JIRA (v6.2#6252)
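The type-compatibility argument above can be boiled down to a small sketch (hypothetical, not Hive's actual GenericUDFToBinary code): the fix amounts to widening the accepted argument categories from STRING alone to the whole string group.

```python
# Hypothetical sketch of the argument check behind HIVE-6560 (not Hive's
# actual code). VARCHAR and CHAR are substitutable with STRING, so any
# member of the "string group" should be castable to BINARY.
STRING_GROUP = {"STRING", "VARCHAR", "CHAR"}

def castable_to_binary(category: str) -> bool:
    """Return True if a value of this type category may be cast to BINARY."""
    return category in STRING_GROUP or category == "BINARY"

assert castable_to_binary("VARCHAR")   # accepted after the fix
assert castable_to_binary("CHAR")
assert not castable_to_binary("INT")   # still rejected
```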
[jira] [Commented] (HIVE-7416) provide context information to authorization checkPrivileges api call
[ https://issues.apache.org/jira/browse/HIVE-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064120#comment-14064120 ] Hive QA commented on HIVE-7416: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12655909/HIVE-7416.1.patch Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/813/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/813/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-813/ Messages: {noformat} This message was trimmed, see log for full details [INFO] Copying 3 resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-shims-0.23 --- [INFO] Executing tasks main: [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-shims-0.23 --- [INFO] Compiling 4 source files to /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/target/classes [WARNING] /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java: /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java uses or overrides a deprecated API. [WARNING] /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ hive-shims-0.23 --- [INFO] Using 'UTF-8' encoding to copy filtered resources. 
[INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/src/test/resources [INFO] Copying 3 resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-shims-0.23 --- [INFO] Executing tasks main: [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/target/tmp [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/target/warehouse [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/target/tmp/conf [copy] Copying 5 files to /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/target/tmp/conf [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ hive-shims-0.23 --- [INFO] No sources to compile [INFO] [INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-shims-0.23 --- [INFO] Tests are skipped. [INFO] [INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-shims-0.23 --- [INFO] Building jar: /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/target/hive-shims-0.23-0.14.0-SNAPSHOT.jar [INFO] [INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hive-shims-0.23 --- [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ hive-shims-0.23 --- [INFO] Installing /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/target/hive-shims-0.23-0.14.0-SNAPSHOT.jar to /data/hive-ptest/working/maven/org/apache/hive/shims/hive-shims-0.23/0.14.0-SNAPSHOT/hive-shims-0.23-0.14.0-SNAPSHOT.jar [INFO] Installing /data/hive-ptest/working/apache-svn-trunk-source/shims/0.23/pom.xml to /data/hive-ptest/working/maven/org/apache/hive/shims/hive-shims-0.23/0.14.0-SNAPSHOT/hive-shims-0.23-0.14.0-SNAPSHOT.pom [INFO] [INFO] [INFO] Building Hive Shims 0.14.0-SNAPSHOT [INFO] [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-shims --- [INFO] Deleting /data/hive-ptest/working/apache-svn-trunk-source/shims/aggregator 
(includes = [datanucleus.log, derby.log], excludes = []) [INFO] [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-shims --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-shims --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-svn-trunk-source/shims/aggregator/src/main/resources [INFO] Copying 3 resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-shims --- [INFO] Executing tasks main: [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-shims --- [INFO] No sources to compile [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ hive-shims --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory
[jira] [Commented] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064122#comment-14064122 ] Hive QA commented on HIVE-6988: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12656120/HIVE-6988.5.patch Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/814/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/814/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-814/ Messages: {noformat} This message was trimmed, see log for full details {noformat}
[jira] [Commented] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064124#comment-14064124 ] Hive QA commented on HIVE-7429: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12656121/HIVE-7429.1.patch Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/815/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/815/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-815/ Messages: {noformat} This message was trimmed, see log for full details {noformat}
[jira] [Commented] (HIVE-7412) column stats collection throws exception if all values for a column is null
[ https://issues.apache.org/jira/browse/HIVE-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064126#comment-14064126 ] Hive QA commented on HIVE-7412: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12656132/HIVE-7412.1.patch Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/816/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/816/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-816/ Messages: {noformat} This message was trimmed, see log for full details {noformat}
[jira] [Commented] (HIVE-7104) Unit tests are disabled
[ https://issues.apache.org/jira/browse/HIVE-7104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064125#comment-14064125 ] Gautam Kowshik commented on HIVE-7104: -- can we commit this back to 0.13.1 branch as well? hive 13.1 checkout doesn't run tests either. Unit tests are disabled --- Key: HIVE-7104 URL: https://issues.apache.org/jira/browse/HIVE-7104 Project: Hive Issue Type: Bug Reporter: David Chen Assignee: David Chen Fix For: 0.14.0 Attachments: HIVE-7104.1.patch When I run {{mvn clean test -Phadoop-1|2}}, none of the unit tests are run. I did a binary search through the commit logs and found that the change that caused the unit tests to be disabled was the change to the root pom.xml in the patch for HIVE-7067 (e77f38dc44de5a9b10bce8e0a2f1f5452f6921ed). Removing that change allowed the unit tests to be run again. -- This message was sent by Atlassian JIRA (v6.2#6252)
Archive Replication Issue HIVE-7429
I was wondering if someone could quickly review the patch below. It's trivial and just fixes an order-of-execution problem when uploading the archive file to hdfs and setting the replication factor (replication is set before the archive is uploaded). https://issues.apache.org/jira/browse/HIVE-7429 Thanks, -Dan Weeks
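The ordering bug described above can be reproduced with a minimal sketch (hypothetical names and paths; this only mimics the HDFS behaviour described in the issue, where setting replication on a not-yet-existing file is silently ineffective):

```python
# Hypothetical sketch of the HIVE-7429 ordering problem (not Hive's code).
class FakeFs:
    """Tiny stand-in for a filesystem where set_replication on a missing
    path is a silent no-op, as described in the issue."""
    def __init__(self):
        self.replication = {}

    def set_replication(self, path, factor):
        if path in self.replication:          # only takes effect if file exists
            self.replication[path] = factor

    def upload(self, path):
        self.replication.setdefault(path, 3)  # created with default factor 3

# Buggy order: replication requested before the archive is uploaded.
fs = FakeFs()
fs.set_replication("/tmp/archive.jar", 10)    # no error, but no effect
fs.upload("/tmp/archive.jar")
assert fs.replication["/tmp/archive.jar"] == 3

# Fixed order: upload first, then set replication.
fs2 = FakeFs()
fs2.upload("/tmp/archive.jar")
fs2.set_replication("/tmp/archive.jar", 10)
assert fs2.replication["/tmp/archive.jar"] == 10
```

The silent no-op is what makes the bug easy to miss: nothing fails at submit time, but the archive stays at the default replication and large jobs hammer a handful of datanodes.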
[jira] [Commented] (HIVE-7342) support hiveserver2,metastore specific config files
[ https://issues.apache.org/jira/browse/HIVE-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064131#comment-14064131 ] Thejas M Nair commented on HIVE-7342: - I missed adding some files to svn as part of my first commit, added them now. support hiveserver2,metastore specific config files --- Key: HIVE-7342 URL: https://issues.apache.org/jira/browse/HIVE-7342 Project: Hive Issue Type: Bug Components: Configuration, HiveServer2, Metastore Reporter: Thejas M Nair Assignee: Thejas M Nair Fix For: 0.14.0 Attachments: HIVE-7342.1.patch, HIVE-7342.2.patch There is currently a single configuration file for all components in hive, i.e., components such as hive cli, hiveserver2 and metastore all read from the same hive-site.xml. It will be useful to have a server-specific hive-site.xml, so that a different configuration value can be set for a server. For example, you might want to enable authorization checks for hiveserver2, while disabling the checks for hive cli. The workaround today is to add any component-specific configuration as a command-line (-hiveconf) argument. Using server-specific config files (e.g. hiveserver2-site.xml, hivemetastore-site.xml) that override the entries in hive-site.xml will make the configuration much easier to manage. -- This message was sent by Atlassian JIRA (v6.2#6252)
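The override semantics proposed above can be sketched as a simple two-layer merge (a hypothetical sketch, not Hive's implementation; the property values shown are illustrative):

```python
# Hypothetical sketch of the HIVE-7342 proposal: a server-specific file
# overrides the shared hive-site.xml entry by entry.
hive_site = {
    "hive.security.authorization.enabled": "false",       # shared default
    "hive.metastore.uris": "thrift://ms-host:9083",
}
hiveserver2_site = {
    "hive.security.authorization.enabled": "true",        # HS2-only override
}

def effective_conf(base, overrides):
    """Overlay a server-specific config on the shared hive-site.xml values."""
    merged = dict(base)       # start from shared values
    merged.update(overrides)  # server-specific file wins on conflicts
    return merged

hs2_conf = effective_conf(hive_site, hiveserver2_site)
assert hs2_conf["hive.security.authorization.enabled"] == "true"
assert hs2_conf["hive.metastore.uris"] == "thrift://ms-host:9083"
```

This mirrors the behaviour described in the issue: entries absent from the server-specific file fall through to hive-site.xml, so only the deltas need to be maintained per server.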
[jira] [Commented] (HIVE-7412) column stats collection throws exception if all values for a column is null
[ https://issues.apache.org/jira/browse/HIVE-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064134#comment-14064134 ] Thejas M Nair commented on HIVE-7412: - Regarding the build error - I missed adding some files to svn as part of my first commit for HIVE-7342, added them now. column stats collection throws exception if all values for a column is null --- Key: HIVE-7412 URL: https://issues.apache.org/jira/browse/HIVE-7412 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.13.0, 0.13.1 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-7412.1.patch, HIVE-7412.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
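The failure mode named in the issue title can be illustrated with a minimal sketch (hypothetical, not Hive's stats-collection code): aggregates like min/max must be guarded against a column whose values are all NULL, rather than computed over an empty set.

```python
# Hypothetical sketch of the HIVE-7412 failure mode (not Hive's code):
# stats over an all-NULL column must not raise.
def column_stats(values):
    non_null = [v for v in values if v is not None]
    return {
        "num_nulls": len(values) - len(non_null),
        # Guard the all-NULL case instead of calling min()/max() on an
        # empty sequence, which raises ValueError.
        "min": min(non_null) if non_null else None,
        "max": max(non_null) if non_null else None,
    }

stats = column_stats([None, None, None])
assert stats == {"num_nulls": 3, "min": None, "max": None}
```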
[jira] [Commented] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064132#comment-14064132 ] Daniel Weeks commented on HIVE-7429: Failure isn't related to my patch. Set replication for archive called before file exists - Key: HIVE-7429 URL: https://issues.apache.org/jira/browse/HIVE-7429 Project: Hive Issue Type: Bug Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.13.1 Reporter: Daniel Weeks Assignee: Daniel Weeks Priority: Critical Attachments: HIVE-7429.1.patch The call to set replication is called prior to uploading the archive file to hdfs, which does not throw an error, but the replication never gets set. This has a significant impact on large jobs (especially hash joins) due to too many tasks hitting the data nodes. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064135#comment-14064135 ] Thejas M Nair commented on HIVE-7429: - Regarding the build error - I missed adding some files to svn as part of my first commit for HIVE-7342, added them now. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064138#comment-14064138 ] Xuefu Zhang commented on HIVE-7429: --- Patch looks good to me, but could you figure out why there is a compilation error? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064137#comment-14064137 ] Thejas M Nair commented on HIVE-7429: - I have started another pre-commit build for this one. Sorry for the trouble! -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Issue Comment Deleted] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-7429: -- Comment: was deleted (was: Patch looks good to me, but could you figure out why there is a compilation error?) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064145#comment-14064145 ] Xuefu Zhang commented on HIVE-7429: --- +1 pending on tests. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6988) Hive changes for tez-0.5.x compatibility
[ https://issues.apache.org/jira/browse/HIVE-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064146#comment-14064146 ] Thejas M Nair commented on HIVE-6988: - Regarding the build error - I missed adding some files to svn as part of my first commit for HIVE-7342, added them now. Hive changes for tez-0.5.x compatibility Key: HIVE-6988 URL: https://issues.apache.org/jira/browse/HIVE-6988 Project: Hive Issue Type: Task Reporter: Gopal V Attachments: HIVE-6988.1.patch, HIVE-6988.2.patch, HIVE-6988.3.patch, HIVE-6988.4.patch, HIVE-6988.5.patch Umbrella JIRA to track all hive changes needed for tez-0.5.x compatibility. tez-0.4.x -> tez-0.5.x is going to break backwards compat. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-7429: Attachment: HIVE-7429.1.patch Attaching file again for precommit tests. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7284) CBO: create Partition Pruning rules in Optiq
[ https://issues.apache.org/jira/browse/HIVE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7284: --- Attachment: HIVE-7284.1.patch Rebased patch. Also, since the partition list is cached within a query, we need not worry about passing the partition list on the way back. CBO: create Partition Pruning rules in Optiq Key: HIVE-7284 URL: https://issues.apache.org/jira/browse/HIVE-7284 Project: Hive Issue Type: Sub-task Reporter: Harish Butani Assignee: Harish Butani Attachments: HIVE-7284.1.patch, HIVE-7284.1.patch Create rules in Optiq that do the job of the PartitionPruner. For now we will reuse the logic that evaluates the Partition list from prunedExpr. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7284) CBO: create Partition Pruning rules in Optiq
[ https://issues.apache.org/jira/browse/HIVE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7284: --- Description: NO PRECOMMIT TESTS Create rules in Optiq that do the job of the PartitionPruner. For now we will reuse the logic that evaluates the Partition list from prunedExpr. was: Create rules in Optiq that do the job of the PartitionPruner. For now we will reuse the logic that evaluates the Partition list from prunedExpr. CBO: create Partition Pruning rules in Optiq Key: HIVE-7284 URL: https://issues.apache.org/jira/browse/HIVE-7284 Project: Hive Issue Type: Sub-task Reporter: Harish Butani Assignee: Harish Butani Attachments: HIVE-7284.1.patch, HIVE-7284.1.patch NO PRECOMMIT TESTS Create rules in Optiq that do the job of the PartitionPruner. For now we will reuse the logic that evaluates the Partition list from prunedExpr. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7284) CBO: create Partition Pruning rules in Optiq
[ https://issues.apache.org/jira/browse/HIVE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7284: --- Status: Patch Available (was: Open) CBO: create Partition Pruning rules in Optiq Key: HIVE-7284 URL: https://issues.apache.org/jira/browse/HIVE-7284 Project: Hive Issue Type: Sub-task Reporter: Harish Butani Assignee: Harish Butani Attachments: HIVE-7284.1.patch, HIVE-7284.1.patch NO PRECOMMIT TESTS Create rules in Optiq that do the job of the PartitionPruner. For now we will reuse the logic that evaluates the Partition list from prunedExpr. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6560) varchar and char types cannot be cast to binary
[ https://issues.apache.org/jira/browse/HIVE-6560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064177#comment-14064177 ] Chao commented on HIVE-6560: The rest of the test failures do not appear to be related to this patch. varchar and char types cannot be cast to binary --- Key: HIVE-6560 URL: https://issues.apache.org/jira/browse/HIVE-6560 Project: Hive Issue Type: Bug Components: Types, UDF Affects Versions: 0.12.0, 0.13.0, 0.13.1 Reporter: Xuefu Zhang Assignee: Chao Attachments: HIVE-6560.1.patch, HIVE-6560.2.patch, HIVE-6560.3.patch GenericUDFToBinary can convert string to binary. VARCHAR and CHAR are substitutable with string. Thus, GenericUDFToBinary should also be able to convert VARCHAR and CHAR to binary. However, {code} hive> select binary(cast('abc' as varchar(5)) from decimal_udf limit 1; FAILED: ParseException line 1:40 missing ) at 'from' near 'EOF' hive> select binary(cast('abc' as varchar(5))) from decimal_udf limit 1; FAILED: SemanticException Line 0:-1 Wrong arguments ''abc'': Only string or binary data can be cast into binary data types. {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
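Until the fix is in, a possible workaround on the affected versions is to cast through string explicitly, so that GenericUDFToBinary only ever sees a string argument. This is an untested sketch that relies on the VARCHAR-to-STRING substitutability described above:

```sql
-- Hypothetical workaround for HIVE-6560: convert the varchar value to string
-- first; binary() already accepts string arguments.
select binary(cast(cast('abc' as varchar(5)) as string)) from decimal_udf limit 1;
```

With the patch applied, the inner cast to string should no longer be necessary.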
[jira] [Updated] (HIVE-7412) column stats collection throws exception if all values for a column is null
[ https://issues.apache.org/jira/browse/HIVE-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7412: --- Attachment: HIVE-7412.2.patch column stats collection throws exception if all values for a column is null --- Key: HIVE-7412 URL: https://issues.apache.org/jira/browse/HIVE-7412 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.13.0, 0.13.1 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-7412.1.patch, HIVE-7412.2.patch, HIVE-7412.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7412) column stats collection throws exception if all values for a column is null
[ https://issues.apache.org/jira/browse/HIVE-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7412: --- Status: Open (was: Patch Available) column stats collection throws exception if all values for a column is null --- Key: HIVE-7412 URL: https://issues.apache.org/jira/browse/HIVE-7412 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.13.1, 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-7412.1.patch, HIVE-7412.2.patch, HIVE-7412.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7387) Guava version conflict between hadoop and spark [Spark-Branch]
[ https://issues.apache.org/jira/browse/HIVE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064187#comment-14064187 ] Sean Owen commented on HIVE-7387: - Hi Xuefu, I was wrong about Spark not using Guava 12+. It does now. I posted an update on the Spark JIRA. That makes it somewhat harder to downgrade, although not much. I would not characterize it as not being taken seriously. There are legitimate questions here, like why Hadoop can't get off of Guava 11, which is about 2.5 years old now. It would be very helpful to link the Spark JIRA to this one, which has the details. Guava version conflict between hadoop and spark [Spark-Branch] -- Key: HIVE-7387 URL: https://issues.apache.org/jira/browse/HIVE-7387 Project: Hive Issue Type: Bug Components: Spark Reporter: Chengxiang Li Assignee: Chengxiang Li hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while Spark depends on guava-14.0.1.jar. guava-11.0.2 has API conflicts with guava-14.0.1, and because the Hive CLI currently loads both dependencies into the classpath, queries fail on both the Spark and MR engines.
{code} java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode; at org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261) at org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165) at org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102) at org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214) at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141) at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210) at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169) at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161) at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155) at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75) at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92) at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661) at org.apache.spark.storage.BlockManager.put(BlockManager.scala:546) at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:812) at org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52) at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:35) at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:29) at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62) at org.apache.spark.SparkContext.broadcast(SparkContext.scala:776) at org.apache.spark.rdd.HadoopRDD.<init>(HadoopRDD.scala:112) at org.apache.spark.SparkContext.hadoopRDD(SparkContext.scala:527) at org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:307) at
org.apache.hadoop.hive.ql.exec.spark.SparkClient.createRDD(SparkClient.java:204) at org.apache.hadoop.hive.ql.exec.spark.SparkClient.execute(SparkClient.java:167) at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:32) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:159) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85) at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72) {code} NO PRECOMMIT TESTS. This is for spark branch only. -- This message was sent by Atlassian JIRA (v6.2#6252)
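For anyone hitting a conflict like this, the competing Guava versions can be traced before they collide at runtime. The commands below are an illustrative sketch; they assume a Maven checkout of Hive, and the lib path is a placeholder for wherever the build stages its jars:

```
# Show every Guava artifact each module pulls in, and via which dependency path.
mvn dependency:tree -Dincludes=com.google.guava:guava

# After packaging, check which Guava jars actually land on the classpath.
ls <hive-dist>/lib | grep -i guava
```

Shading or relocating one of the Guava versions is the usual way out when neither Hadoop nor Spark can change its dependency.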
[jira] [Updated] (HIVE-7412) column stats collection throws exception if all values for a column is null
[ https://issues.apache.org/jira/browse/HIVE-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7412: --- Status: Patch Available (was: Open) column stats collection throws exception if all values for a column is null --- Key: HIVE-7412 URL: https://issues.apache.org/jira/browse/HIVE-7412 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.13.1, 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-7412.1.patch, HIVE-7412.2.patch, HIVE-7412.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7387) Guava version conflict between hadoop and spark [Spark-Branch]
[ https://issues.apache.org/jira/browse/HIVE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064204#comment-14064204 ] Xuefu Zhang commented on HIVE-7387: --- Thanks, Sean. Linked. Guava version conflict between hadoop and spark [Spark-Branch] -- Key: HIVE-7387 URL: https://issues.apache.org/jira/browse/HIVE-7387 Project: Hive Issue Type: Bug Components: Spark Reporter: Chengxiang Li Assignee: Chengxiang Li hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while Spark depends on guava-14.0.1.jar. guava-11.0.2 has API conflicts with guava-14.0.1, and because the Hive CLI currently loads both dependencies into the classpath, queries fail on both the Spark and MR engines. {code} java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode; at org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261) at org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165) at org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102) at org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214) at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141) at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210) at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169) at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161) at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155) at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75) at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92) at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661) at org.apache.spark.storage.BlockManager.put(BlockManager.scala:546) at
org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:812) at org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52) at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:35) at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:29) at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62) at org.apache.spark.SparkContext.broadcast(SparkContext.scala:776) at org.apache.spark.rdd.HadoopRDD.<init>(HadoopRDD.scala:112) at org.apache.spark.SparkContext.hadoopRDD(SparkContext.scala:527) at org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:307) at org.apache.hadoop.hive.ql.exec.spark.SparkClient.createRDD(SparkClient.java:204) at org.apache.hadoop.hive.ql.exec.spark.SparkClient.execute(SparkClient.java:167) at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:32) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:159) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85) at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72) {code} NO PRECOMMIT TESTS. This is for spark branch only. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 23425: HIVE-7361: using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands
On July 16, 2014, 1:13 a.m., Jason Dere wrote: ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java, line 315 https://reviews.apache.org/r/23425/diff/2/?file=629671#file629671line315 What does RESET do, just reset any config settings set via the SET command? If SET is not currently being restricted, then maybe RESET should not either. It also resets config settings set using the -hiveconf command-line parameter. But it looks like that happens only with the Hive CLI and not HS2, so it should be safe to allow RESET. On July 16, 2014, 1:13 a.m., Jason Dere wrote: ql/src/java/org/apache/hadoop/hive/ql/processors/AddResourceProcessor.java, line 35 https://reviews.apache.org/r/23425/diff/2/?file=629662#file629662line35 Should DeleteResourceProcessor also be updated to use the auth check? Good catch! - Thejas --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23425/#review47832 --- On July 14, 2014, 5:13 p.m., Thejas Nair wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23425/ --- (Updated July 14, 2014, 5:13 p.m.) Review request for hive. Bugs: HIVE-7361 https://issues.apache.org/jira/browse/HIVE-7361 Repository: hive-git Description --- See jira HIVE-7361.
Diffs - itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java abe5ffa itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessControllerForTest.java 4474ce5 itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidatorForTest.java PRE-CREATION itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizerFactoryForTest.java 89e18b3 ql/src/java/org/apache/hadoop/hive/ql/processors/AddResourceProcessor.java 0532666 ql/src/java/org/apache/hadoop/hive/ql/processors/CommandProcessorResponse.java f29a409 ql/src/java/org/apache/hadoop/hive/ql/processors/CommandUtil.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/processors/CompileProcessor.java 8b8475b ql/src/java/org/apache/hadoop/hive/ql/processors/DfsProcessor.java d343a3c ql/src/java/org/apache/hadoop/hive/ql/processors/ResetProcessor.java b8ecfad ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveOperationType.java 0537b92 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java db57cb6 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/GrantPrivAuthUtils.java f99109b ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java 151df6a ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLAuthorizationUtils.java beb45f5 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java f2a4004 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java 8937cfa ql/src/test/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveOperationType.java b990cb2 ql/src/test/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/TestSQLStdHiveAccessController.java 06f9258 
ql/src/test/queries/clientnegative/authorization_compile.q PRE-CREATION ql/src/test/queries/clientnegative/authorization_reset.q PRE-CREATION ql/src/test/results/clientnegative/authorization_addjar.q.out d206dca ql/src/test/results/clientnegative/authorization_addpartition.q.out 6331ae2 ql/src/test/results/clientnegative/authorization_alter_db_owner.q.out 550cbcc ql/src/test/results/clientnegative/authorization_alter_db_owner_default.q.out 4df868e ql/src/test/results/clientnegative/authorization_compile.q.out PRE-CREATION ql/src/test/results/clientnegative/authorization_create_func1.q.out 7c72092 ql/src/test/results/clientnegative/authorization_create_func2.q.out 7c72092 ql/src/test/results/clientnegative/authorization_create_macro1.q.out 7c72092 ql/src/test/results/clientnegative/authorization_createview.q.out c86bdfa ql/src/test/results/clientnegative/authorization_ctas.q.out f8395b7 ql/src/test/results/clientnegative/authorization_desc_table_nosel.q.out be56d34 ql/src/test/results/clientnegative/authorization_dfs.q.out d685e78 ql/src/test/results/clientnegative/authorization_drop_db_cascade.q.out 74ab4c8
Re: Review Request 23425: HIVE-7361: using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23425/ --- (Updated July 16, 2014, 10:10 p.m.) Review request for hive. Changes --- Addressing review comments. Fixed issue with user in admin role not being able to run dfs,add,delete,compile commands. Bugs: HIVE-7361 https://issues.apache.org/jira/browse/HIVE-7361 Repository: hive-git Description --- See jira HIVE-7361. Diffs (updated) - conf/hive-default.xml.template ba5b8a9 itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java abe5ffa itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessControllerForTest.java 4474ce5 itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidatorForTest.java PRE-CREATION itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizerFactoryForTest.java 89e18b3 ql/src/java/org/apache/hadoop/hive/ql/processors/AddResourceProcessor.java 0532666 ql/src/java/org/apache/hadoop/hive/ql/processors/CommandProcessorResponse.java f29a409 ql/src/java/org/apache/hadoop/hive/ql/processors/CommandUtil.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/processors/CompileProcessor.java 8b8475b ql/src/java/org/apache/hadoop/hive/ql/processors/DeleteResourceProcessor.java bfac5f8 ql/src/java/org/apache/hadoop/hive/ql/processors/DfsProcessor.java d343a3c ql/src/java/org/apache/hadoop/hive/ql/processors/ResetProcessor.java b8ecfad ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveOperationType.java 0537b92 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java db57cb6 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/GrantPrivAuthUtils.java f99109b ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java 151df6a 
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLAuthorizationUtils.java beb45f5 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java f2a4004 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java 8937cfa ql/src/test/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveOperationType.java b990cb2 ql/src/test/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/TestSQLStdHiveAccessController.java 06f9258 ql/src/test/queries/clientnegative/authorization_addjar.q a1709da ql/src/test/queries/clientnegative/authorization_compile.q PRE-CREATION ql/src/test/queries/clientnegative/authorization_deletejar.q PRE-CREATION ql/src/test/queries/clientnegative/authorization_dfs.q 7d47a7b ql/src/test/queries/clientpositive/authorization_admin_almighty2.q PRE-CREATION ql/src/test/queries/clientpositive/authorization_reset.q PRE-CREATION ql/src/test/results/clientnegative/authorization_addjar.q.out d206dca ql/src/test/results/clientnegative/authorization_addpartition.q.out 6331ae2 ql/src/test/results/clientnegative/authorization_alter_db_owner.q.out 550cbcc ql/src/test/results/clientnegative/authorization_alter_db_owner_default.q.out 4df868e ql/src/test/results/clientnegative/authorization_compile.q.out PRE-CREATION ql/src/test/results/clientnegative/authorization_create_func1.q.out 7c72092 ql/src/test/results/clientnegative/authorization_create_func2.q.out 7c72092 ql/src/test/results/clientnegative/authorization_create_macro1.q.out 7c72092 ql/src/test/results/clientnegative/authorization_createview.q.out c86bdfa ql/src/test/results/clientnegative/authorization_ctas.q.out f8395b7 ql/src/test/results/clientnegative/authorization_deletejar.q.out PRE-CREATION ql/src/test/results/clientnegative/authorization_desc_table_nosel.q.out be56d34 ql/src/test/results/clientnegative/authorization_dfs.q.out d685e78 
ql/src/test/results/clientnegative/authorization_drop_db_cascade.q.out 74ab4c8 ql/src/test/results/clientnegative/authorization_drop_db_empty.q.out bd7447f ql/src/test/results/clientnegative/authorization_droppartition.q.out 1da250a ql/src/test/results/clientnegative/authorization_grant_table_allpriv.q.out 4aa7058 ql/src/test/results/clientnegative/authorization_grant_table_fail1.q.out f042c1e ql/src/test/results/clientnegative/authorization_grant_table_fail_nogrant.q.out a906a70 ql/src/test/results/clientnegative/authorization_insert_noinspriv.q.out 8de1104 ql/src/test/results/clientnegative/authorization_insert_noselectpriv.q.out 46ada3b
[jira] [Updated] (HIVE-7361) using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands
[ https://issues.apache.org/jira/browse/HIVE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-7361: Attachment: HIVE-7361.3.patch HIVE-7361.3.patch - Addressing review comments. Fixed an issue where a user in the admin role was not able to run the dfs, add, delete, and compile commands. using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands - Key: HIVE-7361 URL: https://issues.apache.org/jira/browse/HIVE-7361 Project: Hive Issue Type: Improvement Components: Authorization Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7361.1.patch, HIVE-7361.2.patch, HIVE-7361.3.patch Currently, the only way to disable the SET, RESET, DFS, ADD, DELETE, and COMPILE commands is the hive.security.command.whitelist parameter. Some of these commands are disabled via this configuration parameter for security reasons when SQL standard authorization is enabled; however, that disables them in all cases. If the authorization API is used to authorize these commands, authorization implementations gain the flexibility to allow or disallow them based on user privileges. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7361) using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands
[ https://issues.apache.org/jira/browse/HIVE-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-7361: Labels: TODOC14 (was: ) Release Note: This also changes behavior in SQL standard authorization: the reset command is now allowed, and the dfs, add, delete, and compile commands are now allowed for the admin user. using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands - Key: HIVE-7361 URL: https://issues.apache.org/jira/browse/HIVE-7361 Project: Hive Issue Type: Improvement Components: Authorization Reporter: Thejas M Nair Assignee: Thejas M Nair Labels: TODOC14 Attachments: HIVE-7361.1.patch, HIVE-7361.2.patch, HIVE-7361.3.patch Currently, the only way to disable the SET, RESET, DFS, ADD, DELETE, and COMPILE commands is the hive.security.command.whitelist parameter. Some of these commands are disabled via this configuration parameter for security reasons when SQL standard authorization is enabled; however, that disables them in all cases. If the authorization API is used to authorize these commands, authorization implementations gain the flexibility to allow or disallow them based on user privileges. -- This message was sent by Atlassian JIRA (v6.2#6252)
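For context, the whitelist mechanism this issue improves on is driven by a single configuration property. Below is an illustrative hive-site.xml fragment; the property name and comma-separated format are real, but the value shown is just an example:

```xml
<property>
  <name>hive.security.command.whitelist</name>
  <!-- Example: permit set and dfs but block add, delete, compile, and reset
       for every user. This is the all-or-nothing behavior the patch improves on. -->
  <value>set,dfs</value>
</property>
```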
[jira] [Updated] (HIVE-7416) provide context information to authorization checkPrivileges api call
[ https://issues.apache.org/jira/browse/HIVE-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-7416: Attachment: HIVE-7416.1.patch provide context information to authorization checkPrivileges api call - Key: HIVE-7416 URL: https://issues.apache.org/jira/browse/HIVE-7416 Project: Hive Issue Type: New Feature Components: Authorization, SQLStandardAuthorization Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7416.1.patch, HIVE-7416.1.patch Context information such as the request IP address, a unique session string, and the original SQL command string is useful for audit logging from authorization implementations. Authorization implementations can also choose to log successful authorizations along with which policies matched and the context information. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6560) varchar and char types cannot be cast to binary
[ https://issues.apache.org/jira/browse/HIVE-6560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064259#comment-14064259 ] Xuefu Zhang commented on HIVE-6560: --- Patch committed to trunk. Thanks Chao for the contribution. varchar and char types cannot be cast to binary --- Key: HIVE-6560 URL: https://issues.apache.org/jira/browse/HIVE-6560 Project: Hive Issue Type: Bug Components: Types, UDF Affects Versions: 0.12.0, 0.13.0, 0.13.1 Reporter: Xuefu Zhang Assignee: Chao Attachments: HIVE-6560.1.patch, HIVE-6560.2.patch, HIVE-6560.3.patch GenericUDFToBinary can convert string to binary. VARCHAR and CHAR are substitutable with string. Thus, GenericUDFToBinary should also be able to convert VARCHAR and CHAR to binary. However, {code} hive> select binary(cast('abc' as varchar(5)) from decimal_udf limit 1; FAILED: ParseException line 1:40 missing ) at 'from' near 'EOF' hive> select binary(cast('abc' as varchar(5))) from decimal_udf limit 1; FAILED: SemanticException Line 0:-1 Wrong arguments ''abc'': Only string or binary data can be cast into binary data types. {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7365) Explain authorize for auth2 throws exception
[ https://issues.apache.org/jira/browse/HIVE-7365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-7365: Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Patch committed to trunk. Thanks for the patch Navis! Explain authorize for auth2 throws exception Key: HIVE-7365 URL: https://issues.apache.org/jira/browse/HIVE-7365 Project: Hive Issue Type: Task Components: Authorization Reporter: Navis Assignee: Navis Priority: Minor Fix For: 0.14.0 Attachments: HIVE-7365.1.patch.txt, HIVE-7365.2.patch.txt throws NPE in auth v2. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7416) provide context information to authorization checkPrivileges api call
[ https://issues.apache.org/jira/browse/HIVE-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064231#comment-14064231 ] Thejas M Nair commented on HIVE-7416: - Attaching the same patch file again to kick off the tests. provide context information to authorization checkPrivileges api call - Key: HIVE-7416 URL: https://issues.apache.org/jira/browse/HIVE-7416 Project: Hive Issue Type: New Feature Components: Authorization, SQLStandardAuthorization Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7416.1.patch, HIVE-7416.1.patch Context information such as the request IP address, a unique session string, and the original SQL command string is useful for audit logging from authorization implementations. Authorization implementations can also choose to log successful authorizations along with which policies matched and the context information. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064272#comment-14064272 ] Hive QA commented on HIVE-7429: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12656142/HIVE-7429.1.patch {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 5737 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/818/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/818/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-818/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12656142 Set replication for archive called before file exists - Key: HIVE-7429 URL: https://issues.apache.org/jira/browse/HIVE-7429 Project: Hive Issue Type: Bug Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.13.1 Reporter: Daniel Weeks Assignee: Daniel Weeks Priority: Critical Attachments: HIVE-7429.1.patch, HIVE-7429.1.patch The call to set replication happens before the archive file is uploaded to HDFS; it does not throw an error, but the replication factor never gets set. This has a significant impact on large jobs (especially hash joins), because too many tasks end up hitting the same data nodes. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064276#comment-14064276 ] Daniel Weeks commented on HIVE-7429: This failure also doesn't look like it was caused by this patch. Set replication for archive called before file exists - Key: HIVE-7429 URL: https://issues.apache.org/jira/browse/HIVE-7429 Project: Hive Issue Type: Bug Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.13.1 Reporter: Daniel Weeks Assignee: Daniel Weeks Priority: Critical Attachments: HIVE-7429.1.patch, HIVE-7429.1.patch The call to set replication happens before the archive file is uploaded to HDFS; it does not throw an error, but the replication factor never gets set. This has a significant impact on large jobs (especially hash joins), because too many tasks end up hitting the same data nodes. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6560) varchar and char types cannot be cast to binary
[ https://issues.apache.org/jira/browse/HIVE-6560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-6560: -- Tags: TODOC14 Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) varchar and char types cannot be cast to binary --- Key: HIVE-6560 URL: https://issues.apache.org/jira/browse/HIVE-6560 Project: Hive Issue Type: Bug Components: Types, UDF Affects Versions: 0.12.0, 0.13.0, 0.13.1 Reporter: Xuefu Zhang Assignee: Chao Fix For: 0.14.0 Attachments: HIVE-6560.1.patch, HIVE-6560.2.patch, HIVE-6560.3.patch GenericUDFToBinary can convert string to binary. VARCHAR and CHAR are substitutable with string. Thus, GenericUDFToBinary should also be able to convert VARCHAR and CHAR to binary. However, {code} hive> select binary(cast('abc' as varchar(5)) from decimal_udf limit 1; FAILED: ParseException line 1:40 missing ) at 'from' near 'EOF' hive> select binary(cast('abc' as varchar(5))) from decimal_udf limit 1; FAILED: SemanticException Line 0:-1 Wrong arguments ''abc'': Only string or binary data can be cast into binary data types. {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6928) Beeline should not chop off describe extended results by default
[ https://issues.apache.org/jira/browse/HIVE-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-6928: -- Attachment: HIVE-6928.3 .patch Beeline should not chop off describe extended results by default -- Key: HIVE-6928 URL: https://issues.apache.org/jira/browse/HIVE-6928 Project: Hive Issue Type: Bug Components: CLI Reporter: Szehon Ho Assignee: Chinna Rao Lalam Attachments: HIVE-6928.1.patch, HIVE-6928.2.patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.patch By default, beeline truncates long results based on the console width like:
{code}
+-+--+
| col_name | |
+-+--+
| pat_id | string |
| score | float |
| acutes | float |
| | |
| Detailed Table Information | Table(tableName:refills, dbName:default, owner:hdadmin, createTime:1393882396, lastAccessTime:0, retention:0, sd:Sto |
+-+--+
5 rows selected (0.4 seconds)
{code}
This can be changed with !outputformat, but the default should give first-time Beeline users a better out-of-the-box experience. -- This message was sent by Atlassian JIRA (v6.2#6252)
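As a stopgap, the truncation can be avoided per session. This is an illustrative sketch; !outputformat is a standard Beeline command, and refills is the table from the example above:

```
beeline> !outputformat vertical
beeline> describe extended refills;
```

The same formatter can also be selected at startup with Beeline's --outputformat=vertical option, and other formats such as csv and tsv likewise avoid console-width truncation.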
[jira] [Commented] (HIVE-6928) Beeline should not chop off describe extended results by default
[ https://issues.apache.org/jira/browse/HIVE-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064283#comment-14064283 ] Xuefu Zhang commented on HIVE-6928: --- I have started wondering why the test wasn't triggered. Reattaching to try again. Beeline should not chop off describe extended results by default -- Key: HIVE-6928 URL: https://issues.apache.org/jira/browse/HIVE-6928 Project: Hive Issue Type: Bug Components: CLI Reporter: Szehon Ho Assignee: Chinna Rao Lalam Attachments: HIVE-6928.1.patch, HIVE-6928.2.patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.patch By default, beeline truncates long results based on the console width like:
{code}
+-----------------------------+--------------------------------------------------------+
| col_name                    |                                                        |
+-----------------------------+--------------------------------------------------------+
| pat_id                      | string                                                 |
| score                       | float                                                  |
| acutes                      | float                                                  |
|                             |                                                        |
| Detailed Table Information  | Table(tableName:refills, dbName:default, owner:hdadmin, createTime:1393882396, lastAccessTime:0, retention:0, sd:Sto |
+-----------------------------+--------------------------------------------------------+
5 rows selected (0.4 seconds)
{code}
This can be changed by !outputformat, but the default should behave better to give a better experience to the first-time beeline user. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 23527: HIVE-7416 - provide context information to authorization checkPrivileges api call
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23527/#review47953 --- ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java https://reviews.apache.org/r/23527/#comment84196 Is this the only current usage of the context info? Should it be logged for failed auth checks? - Jason Dere On July 15, 2014, 10:48 p.m., Thejas Nair wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23527/ --- (Updated July 15, 2014, 10:48 p.m.) Review request for hive. Bugs: HIVE-7416 https://issues.apache.org/jira/browse/HIVE-7416 Repository: hive-git Description --- See jira Diffs - itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestHS2AuthzContext.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/Driver.java ac76214 ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java 92545d8 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizationValidator.java 7ffbc44 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizer.java dbef61a ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizerImpl.java 558d4ff ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthzContext.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java 8937cfa ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java 6686bc6 service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 6a7ee7a Diff: https://reviews.apache.org/r/23527/diff/ Testing --- New tests included. Thanks, Thejas Nair
[jira] [Commented] (HIVE-4765) Improve HBase bulk loading facility
[ https://issues.apache.org/jira/browse/HIVE-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064347#comment-14064347 ] Nick Dimiduk commented on HIVE-4765: This looks like a nice improvement [~navis]! Improve HBase bulk loading facility --- Key: HIVE-4765 URL: https://issues.apache.org/jira/browse/HIVE-4765 Project: Hive Issue Type: Improvement Components: HBase Handler Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-4765.2.patch.txt, HIVE-4765.3.patch.txt, HIVE-4765.D11463.1.patch With some patches, the bulk loading process for HBase could be simplified a lot:
{noformat}
CREATE EXTERNAL TABLE hbase_export(rowkey STRING, col1 STRING, col2 STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.hbase.HBaseExportSerDe'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:key,cf2:value")
STORED AS
  INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.hbase.HiveHFileExporter'
LOCATION '/tmp/export';

SET mapred.reduce.tasks=4;
SET hive.optimize.sampling.orderby=true;

INSERT OVERWRITE TABLE hbase_export
SELECT * FROM (
  SELECT union_kv(key, key, value, ':key,cf1:key,cf2:value') AS (rowkey, union) FROM src
) A ORDER BY rowkey, union;

hive> !hadoop fs -lsr /tmp/export;
drwxr-xr-x   - navis supergroup    0 2013-06-20 11:05 /tmp/export/cf1
-rw-r--r--   1 navis supergroup 4317 2013-06-20 11:05 /tmp/export/cf1/384abe795e1a471cac6d3770ee38e835
-rw-r--r--   1 navis supergroup 5868 2013-06-20 11:05 /tmp/export/cf1/b8b6d746c48f4d12a4cf1a2077a28a2d
-rw-r--r--   1 navis supergroup 5214 2013-06-20 11:05 /tmp/export/cf1/c8be8117a1734bd68a74338dfc4180f8
-rw-r--r--   1 navis supergroup 4290 2013-06-20 11:05 /tmp/export/cf1/ce41f5b1cfdc4722be25207fc59a9f10
drwxr-xr-x   - navis supergroup    0 2013-06-20 11:05 /tmp/export/cf2
-rw-r--r--   1 navis supergroup 6744 2013-06-20 11:05 /tmp/export/cf2/409673b517d94e16920e445d07710f52
-rw-r--r--   1 navis supergroup 4975 2013-06-20 11:05 /tmp/export/cf2/96af002a6b9f4ebd976ecd83c99c8d7e
-rw-r--r--   1 navis supergroup 6096 2013-06-20 11:05 /tmp/export/cf2/c4f696587c5e42ee9341d476876a3db4
-rw-r--r--   1 navis supergroup 4890 2013-06-20 11:05 /tmp/export/cf2/fd9adc9e982f4fe38c8d62f9a44854ba

hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/export test
{noformat}
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6928) Beeline should not chop off describe extended results by default
[ https://issues.apache.org/jira/browse/HIVE-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ferdinand updated HIVE-6928: Attachment: HIVE-6928.3.patch Reattached the file, since there is a blank in the original file name, in order to trigger the Hive QA. Beeline should not chop off describe extended results by default -- Key: HIVE-6928 URL: https://issues.apache.org/jira/browse/HIVE-6928 Project: Hive Issue Type: Bug Components: CLI Reporter: Szehon Ho Assignee: Chinna Rao Lalam Attachments: HIVE-6928.1.patch, HIVE-6928.2.patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.3 .patch, HIVE-6928.3.patch, HIVE-6928.patch By default, beeline truncates long results based on the console width like:
{code}
+-----------------------------+--------------------------------------------------------+
| col_name                    |                                                        |
+-----------------------------+--------------------------------------------------------+
| pat_id                      | string                                                 |
| score                       | float                                                  |
| acutes                      | float                                                  |
|                             |                                                        |
| Detailed Table Information  | Table(tableName:refills, dbName:default, owner:hdadmin, createTime:1393882396, lastAccessTime:0, retention:0, sd:Sto |
+-----------------------------+--------------------------------------------------------+
5 rows selected (0.4 seconds)
{code}
This can be changed by !outputformat, but the default should behave better to give a better experience to the first-time beeline user. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 23470: HIVE-7404 Revoke privilege should support revoking of grant option
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23470/ --- (Updated July 17, 2014, 12:29 a.m.) Review request for hive and Thejas Nair. Changes --- changes the new grantOption arg for the Thrift grant_revoke_privileges() to only apply for revoking privileges. This way there is only one way to set the grant option for grant privilege requests - it should be set in the privilegeBag. Bugs: HIVE-7404 https://issues.apache.org/jira/browse/HIVE-7404 Repository: hive-git Description --- Generated Thrift files removed from diff. New grant_revoke_privilege() method in Thrift Hive metastore interface Existing grant/revoke privilege methods (non-thrift) have additional grantOption arg. Diffs (updated) - itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestAuthorizationApiAuthorizer.java d2b6355 metastore/if/hive_metastore.thrift 2df4876 metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java bace609 metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 32da869 metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 9ce717a metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 5e2cad7 metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java c9c3037 metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java 5f9ab4d metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java b7997c0 ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java ee074ea ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java a891838 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g f5d0602 ql/src/java/org/apache/hadoop/hive/ql/parse/authorization/HiveAuthorizationTaskFactoryImpl.java c32d81e ql/src/java/org/apache/hadoop/hive/ql/plan/RevokeDesc.java eaef34c ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java f2a4004 
ql/src/test/queries/clientnegative/authorization_fail_8.q PRE-CREATION ql/src/test/queries/clientpositive/authorization_revoke_table_priv.q c8f4bc8 ql/src/test/results/clientnegative/authorization_fail_8.q.out PRE-CREATION ql/src/test/results/clientpositive/authorization_revoke_table_priv.q.out 907c889 Diff: https://reviews.apache.org/r/23470/diff/ Testing --- Thanks, Jason Dere
[jira] [Updated] (HIVE-7404) Revoke privilege should support revoking of grant option
[ https://issues.apache.org/jira/browse/HIVE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-7404: - Attachment: HIVE-7404.2.patch Patch v2 changes the new grantOption arg for the Thrift grant_revoke_privileges() to only apply for revoking privileges. This way there is only one way to set the grant option for grant privilege requests - it should be set in the privilegeBag. Revoke privilege should support revoking of grant option Key: HIVE-7404 URL: https://issues.apache.org/jira/browse/HIVE-7404 Project: Hive Issue Type: Sub-task Components: Authorization Reporter: Jason Dere Assignee: Jason Dere Attachments: HIVE-7404.1.patch, HIVE-7404.2.patch Similar to HIVE-6252, but for grant option on privileges: {noformat} REVOKE GRANT OPTION FOR privilege ON object FROM USER user {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
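Under the patch-v2 design described above, the top-level grantOption argument matters only for revoke requests, while for grants the option travels inside the privilege bag. A hypothetical model of that request shape (the names below are illustrative, not Hive's actual Thrift API):

```java
// Hypothetical model of the grant/revoke request shape described in the
// HIVE-7404 patch notes; names are illustrative, not Hive's Thrift types.
public class GrantRevokeModel {
    enum RequestType { GRANT, REVOKE }

    static class PrivilegeEntry {
        final String privilege;
        final boolean withGrantOption; // meaningful only for GRANT entries
        PrivilegeEntry(String privilege, boolean withGrantOption) {
            this.privilege = privilege;
            this.withGrantOption = withGrantOption;
        }
    }

    // True when the request strips just the grant option, as in
    // "REVOKE GRANT OPTION FOR privilege ON object FROM USER user".
    static boolean revokesGrantOptionOnly(RequestType type, boolean grantOptionArg) {
        // By design, the top-level flag is ignored for GRANT requests;
        // grant-side options live on the individual privilege entries.
        return type == RequestType.REVOKE && grantOptionArg;
    }

    public static void main(String[] args) {
        System.out.println(revokesGrantOptionOnly(RequestType.REVOKE, true));
    }
}
```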
[jira] [Updated] (HIVE-7429) Set replication for archive called before file exists
[ https://issues.apache.org/jira/browse/HIVE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-7429: Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks Daniel. Set replication for archive called before file exists - Key: HIVE-7429 URL: https://issues.apache.org/jira/browse/HIVE-7429 Project: Hive Issue Type: Bug Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.13.1 Reporter: Daniel Weeks Assignee: Daniel Weeks Priority: Critical Fix For: 0.14.0 Attachments: HIVE-7429.1.patch, HIVE-7429.1.patch Set replication is called prior to uploading the archive file to HDFS; this does not throw an error, but the replication never gets set. This has a significant impact on large jobs (especially hash joins) due to too many tasks hitting the data nodes. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Archive Replication Issue HIVE-7429
Committed to trunk. Thanks, Daniel, for the contribution. Navis 2014-07-17 6:20 GMT+09:00 Daniel Weeks dwe...@netflix.com.invalid: I was wondering if someone could quickly review the patch below. It's trivial: it just fixes an order-of-execution problem when uploading the archive file to HDFS and setting the replication factor (replication is set before the archive is uploaded). https://issues.apache.org/jira/browse/HIVE-7429 Thanks, -Dan Weeks
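The bug is pure call ordering: per the report, setting replication on a not-yet-uploaded path raises no error, so the setting is silently lost. A sketch with a toy stand-in for the file system (not Hadoop's FileSystem API) makes the fix concrete:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the HIVE-7429 ordering bug using a toy file system; the real
// code path goes through Hadoop's FileSystem, where (per the report)
// setting replication on a missing path does not raise an error.
public class ReplicationOrder {
    static class ToyFs {
        final Map<String, Short> replication = new HashMap<>();
        void upload(String path) { replication.put(path, (short) 3); } // default factor
        // Silently a no-op if the file does not exist yet.
        boolean setReplication(String path, short r) {
            if (!replication.containsKey(path)) return false;
            replication.put(path, r);
            return true;
        }
    }

    // Buggy order: set replication first, then upload -> setting is lost.
    static short buggy(ToyFs fs, String path) {
        fs.setReplication(path, (short) 10);
        fs.upload(path);
        return fs.replication.get(path);
    }

    // Fixed order: upload first, then set replication.
    static short fixed(ToyFs fs, String path) {
        fs.upload(path);
        fs.setReplication(path, (short) 10);
        return fs.replication.get(path);
    }

    public static void main(String[] args) {
        System.out.println(buggy(new ToyFs(), "/a.har")); // 3: replication never applied
        System.out.println(fixed(new ToyFs(), "/a.har")); // 10
    }
}
```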
Re: Review Request 23425: HIVE-7361: using authorization api for RESET, DFS, ADD, DELETE, COMPILE commands
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23425/#review47978 --- ql/src/test/queries/clientnegative/authorization_dfs.q https://reviews.apache.org/r/23425/#comment84223 Looks like authorization_dfs.q no longer requires an initial query to initialize auth, whereas authorization_reset.q, authorization_admin_almighty2.q still have one. Should it be removed from those q files? - Jason Dere On July 16, 2014, 10:10 p.m., Thejas Nair wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23425/ --- (Updated July 16, 2014, 10:10 p.m.) Review request for hive. Bugs: HIVE-7361 https://issues.apache.org/jira/browse/HIVE-7361 Repository: hive-git Description --- See jira HIVE-7361. Diffs - conf/hive-default.xml.template ba5b8a9 itests/hive-unit/src/test/java/org/apache/hive/jdbc/authorization/TestJdbcWithSQLAuthorization.java abe5ffa itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessControllerForTest.java 4474ce5 itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidatorForTest.java PRE-CREATION itests/util/src/main/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizerFactoryForTest.java 89e18b3 ql/src/java/org/apache/hadoop/hive/ql/processors/AddResourceProcessor.java 0532666 ql/src/java/org/apache/hadoop/hive/ql/processors/CommandProcessorResponse.java f29a409 ql/src/java/org/apache/hadoop/hive/ql/processors/CommandUtil.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/processors/CompileProcessor.java 8b8475b ql/src/java/org/apache/hadoop/hive/ql/processors/DeleteResourceProcessor.java bfac5f8 ql/src/java/org/apache/hadoop/hive/ql/processors/DfsProcessor.java d343a3c ql/src/java/org/apache/hadoop/hive/ql/processors/ResetProcessor.java b8ecfad 
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveOperationType.java 0537b92 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java db57cb6 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/GrantPrivAuthUtils.java f99109b ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/Operation2Privilege.java 151df6a ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLAuthorizationUtils.java beb45f5 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java f2a4004 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java 8937cfa ql/src/test/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveOperationType.java b990cb2 ql/src/test/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/TestSQLStdHiveAccessController.java 06f9258 ql/src/test/queries/clientnegative/authorization_addjar.q a1709da ql/src/test/queries/clientnegative/authorization_compile.q PRE-CREATION ql/src/test/queries/clientnegative/authorization_deletejar.q PRE-CREATION ql/src/test/queries/clientnegative/authorization_dfs.q 7d47a7b ql/src/test/queries/clientpositive/authorization_admin_almighty2.q PRE-CREATION ql/src/test/queries/clientpositive/authorization_reset.q PRE-CREATION ql/src/test/results/clientnegative/authorization_addjar.q.out d206dca ql/src/test/results/clientnegative/authorization_addpartition.q.out 6331ae2 ql/src/test/results/clientnegative/authorization_alter_db_owner.q.out 550cbcc ql/src/test/results/clientnegative/authorization_alter_db_owner_default.q.out 4df868e ql/src/test/results/clientnegative/authorization_compile.q.out PRE-CREATION ql/src/test/results/clientnegative/authorization_create_func1.q.out 7c72092 ql/src/test/results/clientnegative/authorization_create_func2.q.out 7c72092 
ql/src/test/results/clientnegative/authorization_create_macro1.q.out 7c72092 ql/src/test/results/clientnegative/authorization_createview.q.out c86bdfa ql/src/test/results/clientnegative/authorization_ctas.q.out f8395b7 ql/src/test/results/clientnegative/authorization_deletejar.q.out PRE-CREATION ql/src/test/results/clientnegative/authorization_desc_table_nosel.q.out be56d34 ql/src/test/results/clientnegative/authorization_dfs.q.out d685e78 ql/src/test/results/clientnegative/authorization_drop_db_cascade.q.out 74ab4c8 ql/src/test/results/clientnegative/authorization_drop_db_empty.q.out
[jira] [Commented] (HIVE-7416) provide context information to authorization checkPrivileges api call
[ https://issues.apache.org/jira/browse/HIVE-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064462#comment-14064462 ] Hive QA commented on HIVE-7416: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12656161/HIVE-7416.1.patch Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/822/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/822/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-822/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]] + export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera + JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera + export PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin + PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.6.0_34/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + cd /data/hive-ptest/working/ + tee /data/hive-ptest/logs/PreCommit-HIVE-Build-822/source-prep.txt + [[ false == 
\t\r\u\e ]] + mkdir -p maven ivy + [[ svn = \s\v\n ]] + [[ -n '' ]] + [[ -d apache-svn-trunk-source ]] + [[ ! -d apache-svn-trunk-source/.svn ]] + [[ ! -d apache-svn-trunk-source ]] + cd apache-svn-trunk-source + svn revert -R . Reverted 'ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFComputeStats.java' ++ egrep -v '^X|^Performing status on external' ++ awk '{print $2}' ++ svn status --no-ignore + rm -rf target datanucleus.log ant/target shims/target shims/0.20/target shims/0.20S/target shims/0.23/target shims/aggregator/target shims/common/target shims/common-secure/target packaging/target hbase-handler/target testutils/target jdbc/target metastore/target itests/target itests/hcatalog-unit/target itests/test-serde/target itests/qtest/target itests/hive-unit-hadoop2/target itests/hive-minikdc/target itests/hive-unit/target itests/custom-serde/target itests/util/target hcatalog/target hcatalog/core/target hcatalog/streaming/target hcatalog/server-extensions/target hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target hcatalog/hcatalog-pig-adapter/target hwi/target common/target common/src/gen contrib/target service/target serde/target beeline/target odbc/target cli/target ql/dependency-reduced-pom.xml ql/target ql/src/test/results/clientpositive/colstats_all_nulls.q.out ql/src/test/queries/clientpositive/colstats_all_nulls.q + svn update Uql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java Uql/pom.xml Fetching external item into 'hcatalog/src/test/e2e/harness' Updated external to revision 1611233. Updated to revision 1611233. 
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hive-ptest/working/scratch/build.patch + [[ -f /data/hive-ptest/working/scratch/build.patch ]] + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12656161 provide context information to authorization checkPrivileges api call - Key: HIVE-7416 URL: https://issues.apache.org/jira/browse/HIVE-7416 Project: Hive Issue Type: New Feature Components: Authorization, SQLStandardAuthorization Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7416.1.patch, HIVE-7416.1.patch Context information such as request ip address, unique string for session, and original sql command string are useful for audit logging from the authorization implementations. Authorization implementations can also choose to log authorization success along with information about what policies matched and the context information.
[jira] [Commented] (HIVE-7412) column stats collection throws exception if all values for a column is null
[ https://issues.apache.org/jira/browse/HIVE-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064460#comment-14064460 ] Hive QA commented on HIVE-7412: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12656151/HIVE-7412.2.patch {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5723 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_temp_table org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_ql_rewrite_gbtoidx {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/821/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-Build/821/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-Build-821/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12656151 column stats collection throws exception if all values for a column is null --- Key: HIVE-7412 URL: https://issues.apache.org/jira/browse/HIVE-7412 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.13.0, 0.13.1 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-7412.1.patch, HIVE-7412.2.patch, HIVE-7412.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7412) column stats collection throws exception if all values for a column is null
[ https://issues.apache.org/jira/browse/HIVE-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064465#comment-14064465 ] Navis commented on HIVE-7412: - +1 column stats collection throws exception if all values for a column is null --- Key: HIVE-7412 URL: https://issues.apache.org/jira/browse/HIVE-7412 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.13.0, 0.13.1 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-7412.1.patch, HIVE-7412.2.patch, HIVE-7412.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7423) produce hive-exec-core.jar from ql module
[ https://issues.apache.org/jira/browse/HIVE-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-7423: - Resolution: Fixed Fix Version/s: 0.14.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) produce hive-exec-core.jar from ql module - Key: HIVE-7423 URL: https://issues.apache.org/jira/browse/HIVE-7423 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.13.1 Reporter: Eugene Koifman Assignee: Eugene Koifman Fix For: 0.14.0 Attachments: HIVE-7423.patch Currently, the ql module produces hive-exec-$version.jar, which is an uber jar. It's also useful to have a thin jar, let's call it hive-exec-$version-core.jar, that only has classes from ql. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7423) produce hive-exec-core.jar from ql module
[ https://issues.apache.org/jira/browse/HIVE-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064466#comment-14064466 ] Daniel Dai commented on HIVE-7423: -- Patch committed to trunk. Thanks Eugene! produce hive-exec-core.jar from ql module - Key: HIVE-7423 URL: https://issues.apache.org/jira/browse/HIVE-7423 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.13.1 Reporter: Eugene Koifman Assignee: Eugene Koifman Fix For: 0.14.0 Attachments: HIVE-7423.patch Currently, the ql module produces hive-exec-$version.jar, which is an uber jar. It's also useful to have a thin jar, let's call it hive-exec-$version-core.jar, that only has classes from ql. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7416) provide context information to authorization checkPrivileges api call
[ https://issues.apache.org/jira/browse/HIVE-7416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-7416: Attachment: HIVE-7416.2.patch HIVE-7416.2.patch - patch rebased to latest trunk. provide context information to authorization checkPrivileges api call - Key: HIVE-7416 URL: https://issues.apache.org/jira/browse/HIVE-7416 Project: Hive Issue Type: New Feature Components: Authorization, SQLStandardAuthorization Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-7416.1.patch, HIVE-7416.1.patch, HIVE-7416.2.patch Context information such as request ip address, unique string for session, and original sql command string are useful for audit logging from the authorization implementations. Authorization implementations can also choose to log authorization success along with information about what policies matched and the context information. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7412) column stats collection throws exception if all values for a column is null
[ https://issues.apache.org/jira/browse/HIVE-7412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-7412: --- Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Committed to trunk. column stats collection throws exception if all values for a column is null --- Key: HIVE-7412 URL: https://issues.apache.org/jira/browse/HIVE-7412 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.13.0, 0.13.1 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Fix For: 0.14.0 Attachments: HIVE-7412.1.patch, HIVE-7412.2.patch, HIVE-7412.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7409) Add workaround for a deadlock issue of Class.getAnnotation()
[ https://issues.apache.org/jira/browse/HIVE-7409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-7409: Attachment: HIVE-7409.2.patch.txt Add workaround for a deadlock issue of Class.getAnnotation() - Key: HIVE-7409 URL: https://issues.apache.org/jira/browse/HIVE-7409 Project: Hive Issue Type: Bug Reporter: Tsuyoshi OZAWA Attachments: HIVE-7409.1.patch, HIVE-7409.2.patch.txt, stacktrace.txt [JDK-7122142|https://bugs.openjdk.java.net/browse/JDK-7122142] mentions that there is a race condition in getAnnotations. This problem can lead to deadlock. The fix in the JDK will be merged in jdk8, but Hive currently supports jdk6/jdk7. Therefore, we should add a workaround to avoid the issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7409) Add workaround for a deadlock issue of Class.getAnnotation()
[ https://issues.apache.org/jira/browse/HIVE-7409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064476#comment-14064476 ] Navis commented on HIVE-7409: - Fair enough. Let's get this in. Refactored a little by using a utility method. Add workaround for a deadlock issue of Class.getAnnotation() - Key: HIVE-7409 URL: https://issues.apache.org/jira/browse/HIVE-7409 Project: Hive Issue Type: Bug Reporter: Tsuyoshi OZAWA Attachments: HIVE-7409.1.patch, HIVE-7409.2.patch.txt, stacktrace.txt [JDK-7122142|https://bugs.openjdk.java.net/browse/JDK-7122142] mentions that there is a race condition in getAnnotations. This problem can lead to deadlock. The fix in the JDK will be merged in jdk8, but Hive currently supports jdk6/jdk7. Therefore, we should add a workaround to avoid the issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
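The comment above mentions routing the call through a utility method. A common workaround for JDK-7122142 along those lines is to funnel every Class.getAnnotation() call through one synchronized method, so two threads never race inside the JDK's annotation parsing. A sketch (the class name is illustrative, not necessarily Hive's):

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Workaround sketch for the JDK-7122142 race: funnel all annotation lookups
// through one global lock so concurrent first-time annotation parsing
// cannot deadlock. The utility class name here is illustrative.
public class AnnotationUtil {
    private static final Object LOCK = new Object();

    public static <A extends Annotation> A getAnnotation(Class<?> clazz, Class<A> annotation) {
        synchronized (LOCK) {
            return clazz.getAnnotation(annotation);
        }
    }

    @Retention(RetentionPolicy.RUNTIME)
    @interface Marker { }

    @Marker
    static class Annotated { }

    public static void main(String[] args) {
        // Annotation is present on Annotated, so the lookup succeeds.
        System.out.println(getAnnotation(Annotated.class, Marker.class) != null);
    }
}
```

The cost is serializing annotation lookups, which is acceptable as a stop-gap until the JDK fix is available.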
[jira] [Updated] (HIVE-7409) Add workaround for a deadlock issue of Class.getAnnotation()
[ https://issues.apache.org/jira/browse/HIVE-7409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-7409: Assignee: Tsuyoshi OZAWA Add workaround for a deadlock issue of Class.getAnnotation() - Key: HIVE-7409 URL: https://issues.apache.org/jira/browse/HIVE-7409 Project: Hive Issue Type: Bug Reporter: Tsuyoshi OZAWA Assignee: Tsuyoshi OZAWA Attachments: HIVE-7409.1.patch, HIVE-7409.2.patch.txt, stacktrace.txt [JDK-7122142|https://bugs.openjdk.java.net/browse/JDK-7122142] mentions that there is a race condition in getAnnotations. This problem can lead to deadlock. The fix in the JDK will be merged in jdk8, but Hive currently supports jdk6/jdk7. Therefore, we should add a workaround to avoid the issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7411) Exclude hadoop 1 from spark dep
[ https://issues.apache.org/jira/browse/HIVE-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14064482#comment-14064482 ] Navis commented on HIVE-7411: - [~brocknoland] I've also seen that and fixed it in HIVE-7398. Exclude hadoop 1 from spark dep --- Key: HIVE-7411 URL: https://issues.apache.org/jira/browse/HIVE-7411 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-7411.patch The branch does not compile on my machine. Attached patch fixes this. NO PRECOMMIT TESTS (I am working on this) -- This message was sent by Atlassian JIRA (v6.2#6252)