[jira] [Commented] (HIVE-15136) LLAP: allow slider placement policy configuration during install
[ https://issues.apache.org/jira/browse/HIVE-15136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658818#comment-15658818 ]

Gopal V commented on HIVE-15136:
--------------------------------

Addendum committed.

> LLAP: allow slider placement policy configuration during install
>
> Key: HIVE-15136
> URL: https://issues.apache.org/jira/browse/HIVE-15136
> Project: Hive
> Issue Type: Bug
> Reporter: Sergey Shelukhin
> Assignee: Sergey Shelukhin
> Fix For: 2.2.0
>
> Attachments: HIVE-15136.patch
>

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP
[ https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658340#comment-15658340 ]

Prasanth Jayachandran commented on HIVE-13931:
----------------------------------------------

This is the actual exception
{code}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:2.2:shade (default) on project hive-jdbc: Error creating shaded jar: 21501 -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:2.2:shade (default) on project hive-jdbc: Error creating shaded jar: 21501
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
	at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
	at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
	at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
	at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
	at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
	at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
	at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
	at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: Error creating shaded jar: 21501
	at org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:567)
	at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
	... 20 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 21501
	at org.objectweb.asm.ClassReader.<init>(Unknown Source)
	at org.objectweb.asm.ClassReader.<init>(Unknown Source)
	at org.objectweb.asm.ClassReader.<init>(Unknown Source)
	at org.apache.maven.plugins.shade.DefaultShader.addRemappedClass(DefaultShader.java:329)
	at org.apache.maven.plugins.shade.DefaultShader.shade(DefaultShader.java:164)
	at org.apache.maven.plugins.shade.mojo.ShadeMojo.execute(ShadeMojo.java:472)
	... 22 more
{code}

> Add support for HikariCP and replace BoneCP usage with HikariCP
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
> Issue Type: Bug
> Components: Metastore
> Reporter: Sushanth Sowmyan
> Assignee: Prasanth Jayachandran
>
> Attachments: HIVE-13931.2.patch, HIVE-13931.3.patch, HIVE-13931.patch
>
> Currently, we use BoneCP as our primary connection pooling mechanism
> (overridable by users). However, BoneCP is no longer being actively
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary
> usage of BoneCP with it.
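For anyone wanting to try the new pool once this lands, selection would presumably stay on the existing DataNucleus knob in hive-site.xml; a hypothetical fragment (the {{HikariCP}} value is an assumption until the final patch is committed):

```xml
<!-- hive-site.xml: choose the metastore's DataNucleus connection pool.
     The HikariCP value below is assumed, pending the committed patch. -->
<property>
  <name>datanucleus.connectionPoolingType</name>
  <value>HikariCP</value> <!-- previously BONECP -->
</property>
```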
[jira] [Commented] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP
[ https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15658070#comment-15658070 ]

Hive QA commented on HIVE-13931:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12838606/HIVE-13931.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2092/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2092/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2092/

Messages:
{noformat}
This message was trimmed, see log for full details
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/conf/Configuration.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/fs/Path.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/common/target/hive-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/conf/HiveConfUtil.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/StringUtils.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/VersionInfo.class)]]
[loading ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Iterable.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/io/Writable.class)]]
[loading ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/String.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/aggregate/jetty-all-server/7.6.0.v20120127/jetty-all-server-7.6.0.v20120127.jar(org/eclipse/jetty/http/HttpStatus.class)]]
[loading ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/HashMap.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/MediaType.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/Response.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar(org/codehaus/jackson/map/ObjectMapper.class)]]
[loading ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Exception.class)]]
[loading ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Throwable.class)]]
[loading ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Serializable.class)]]
[loading ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Enum.class)]]
[loading ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Comparable.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-server/1.14/jersey-server-1.14.jar(com/sun/jersey/api/core/PackagesResourceConfig.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-servlet/1.14/jersey-servlet-1.14.jar(com/sun/jersey/spi/container/servlet/ServletContainer.class)]]
[loading ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/FileInputStream.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/StringUtils.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/ArrayUtils.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/common/target/hive-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/common/classification/InterfaceStability.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-hdfs/2.7.2/hadoop-hdfs-2.7.2.jar(org/apache/hadoop/hdfs/web/AuthFilter.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/Utils.class)]]
[loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/security/UserGroupInformation.class)]]
[loading
[jira] [Updated] (HIVE-13931) Add support for HikariCP and replace BoneCP usage with HikariCP
[ https://issues.apache.org/jira/browse/HIVE-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Prasanth Jayachandran updated HIVE-13931:
-----------------------------------------

    Status: Patch Available  (was: Open)

> Add support for HikariCP and replace BoneCP usage with HikariCP
>
> Key: HIVE-13931
> URL: https://issues.apache.org/jira/browse/HIVE-13931
> Project: Hive
> Issue Type: Bug
> Components: Metastore
> Reporter: Sushanth Sowmyan
> Assignee: Prasanth Jayachandran
>
> Attachments: HIVE-13931.2.patch, HIVE-13931.3.patch, HIVE-13931.patch
>
> Currently, we use BoneCP as our primary connection pooling mechanism
> (overridable by users). However, BoneCP is no longer being actively
> developed, and is considered deprecated, replaced by HikariCP.
> Thus, we should add support for HikariCP, and try to replace our primary
> usage of BoneCP with it.
[jira] [Commented] (HIVE-15135) Add an llap mode which fails if queries cannot run in llap
[ https://issues.apache.org/jira/browse/HIVE-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657736#comment-15657736 ]

Siddharth Seth commented on HIVE-15135:
---------------------------------------

Don't think the test failures are related. [~gopalv] - could you please take a look?

Several options:
1) llap_only
2) only
3) Re-purpose all to be the current mode. Add a new mode - all_container_fallback - which behaves like all does today. I think these names make the most sense here, but this ends up being an incompatible change.

Between 1 and 2 - I think 1 is easier to understand.

> Add an llap mode which fails if queries cannot run in llap
>
> Key: HIVE-15135
> URL: https://issues.apache.org/jira/browse/HIVE-15135
> Project: Hive
> Issue Type: Task
> Reporter: Siddharth Seth
> Assignee: Siddharth Seth
> Attachments: HIVE-15135.01.patch, HIVE-15135.02.patch, HIVE-15135.03.patch
>
> ALL currently ends up launching new containers for queries which cannot run
> in llap.
> There should be a mode where these queries don't run.
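For context, the execution mode is already a one-line setting, so whichever name wins, usage would presumably look like the fragment below (the {{only}} value shown is option 2 from the comment above and is hypothetical until the patch settles):

```sql
-- Fail queries that cannot run in LLAP instead of falling back to containers.
-- The mode name "only" is an assumption pending the final patch.
SET hive.llap.execution.mode=only;
```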
[jira] [Updated] (HIVE-15057) Support other types of operators (other than SELECT)
[ https://issues.apache.org/jira/browse/HIVE-15057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Sun updated HIVE-15057:
----------------------------

    Attachment: HIVE-15057.wip.patch

> Support other types of operators (other than SELECT)
>
> Key: HIVE-15057
> URL: https://issues.apache.org/jira/browse/HIVE-15057
> Project: Hive
> Issue Type: Sub-task
> Components: Logical Optimizer, Physical Optimizer
> Reporter: Chao Sun
> Assignee: Chao Sun
> Attachments: HIVE-15057.wip.patch
>
> Currently only SELECT operators are supported for nested column pruning. We
> should add support for other types of operators so the optimization can work
> for complex queries.
[jira] [Commented] (HIVE-15082) Hive-1.2 cannot read data from complex data types with TIMESTAMP column, stored in Parquet
[ https://issues.apache.org/jira/browse/HIVE-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15657314#comment-15657314 ]

Hive QA commented on HIVE-15082:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12838537/HIVE-15082.1-branch-1.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 125 failed/errored test(s), 7910 tests executed

*Failed tests:*
{noformat}
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) (batchId=340)
TestAuthorizationPreEventListener - did not produce a TEST-*.xml file (likely timed out) (batchId=371)
TestAuthzApiEmbedAuthorizerInEmbed - did not produce a TEST-*.xml file (likely timed out) (batchId=350)
TestAuthzApiEmbedAuthorizerInRemote - did not produce a TEST-*.xml file (likely timed out) (batchId=356)
TestBeeLineWithArgs - did not produce a TEST-*.xml file (likely timed out) (batchId=378)
TestCLIAuthzSessionContext - did not produce a TEST-*.xml file (likely timed out) (batchId=394)
TestClientSideAuthorizationProvider - did not produce a TEST-*.xml file (likely timed out) (batchId=370)
TestCompactor - did not produce a TEST-*.xml file (likely timed out) (batchId=360)
TestCreateUdfEntities - did not produce a TEST-*.xml file (likely timed out) (batchId=359)
TestCustomAuthentication - did not produce a TEST-*.xml file (likely timed out) (batchId=379)
TestDBTokenStore - did not produce a TEST-*.xml file (likely timed out) (batchId=325)
TestDDLWithRemoteMetastoreSecondNamenode - did not produce a TEST-*.xml file (likely timed out) (batchId=358)
TestDynamicSerDe - did not produce a TEST-*.xml file (likely timed out) (batchId=328)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed out) (batchId=337)
TestEmbeddedThriftBinaryCLIService - did not produce a TEST-*.xml file (likely timed out) (batchId=382)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) (batchId=332)
TestFolderPermissions - did not produce a TEST-*.xml file (likely timed out) (batchId=365)
TestHS2AuthzContext - did not produce a TEST-*.xml file (likely timed out) (batchId=397)
TestHS2AuthzSessionContext - did not produce a TEST-*.xml file (likely timed out) (batchId=398)
TestHS2ImpersonationWithRemoteMS - did not produce a TEST-*.xml file (likely timed out) (batchId=386)
TestHiveAuthorizerCheckInvocation - did not produce a TEST-*.xml file (likely timed out) (batchId=374)
TestHiveAuthorizerShowFilters - did not produce a TEST-*.xml file (likely timed out) (batchId=373)
TestHiveHistory - did not produce a TEST-*.xml file (likely timed out) (batchId=376)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) (batchId=352)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file (likely timed out) (batchId=342)
TestHiveMetaTool - did not produce a TEST-*.xml file (likely timed out) (batchId=355)
TestHiveServer2 - did not produce a TEST-*.xml file (likely timed out) (batchId=400)
TestHiveServer2SessionTimeout - did not produce a TEST-*.xml file (likely timed out) (batchId=401)
TestHiveSessionImpl - did not produce a TEST-*.xml file (likely timed out) (batchId=383)
TestHs2Hooks - did not produce a TEST-*.xml file (likely timed out) (batchId=357)
TestJdbcDriver2 - did not produce a TEST-*.xml file (likely timed out) (batchId=388)
TestJdbcMetadataApiAuth - did not produce a TEST-*.xml file (likely timed out) (batchId=399)
TestJdbcWithLocalClusterSpark - did not produce a TEST-*.xml file (likely timed out) (batchId=393)
TestJdbcWithMiniHS2 - did not produce a TEST-*.xml file (likely timed out) (batchId=390)
TestJdbcWithMiniMr - did not produce a TEST-*.xml file (likely timed out) (batchId=389)
TestJdbcWithSQLAuthUDFBlacklist - did not produce a TEST-*.xml file (likely timed out) (batchId=395)
TestJdbcWithSQLAuthorization - did not produce a TEST-*.xml file (likely timed out) (batchId=396)
TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) (batchId=363)
TestMTQueries - did not produce a TEST-*.xml file (likely timed out) (batchId=361)
TestMarkPartition - did not produce a TEST-*.xml file (likely timed out) (batchId=349)
TestMarkPartitionRemote - did not produce a TEST-*.xml file (likely timed out) (batchId=353)
TestMetaStoreAuthorization - did not produce a TEST-*.xml file (likely timed out) (batchId=338)
TestMetaStoreConnectionUrlHook - did not produce a TEST-*.xml file (likely timed out) (batchId=336)
TestMetaStoreEndFunctionListener - did not produce a TEST-*.xml file (likely timed out) (batchId=335)
TestMetaStoreEventListener - did not produce a TEST-*.xml file (likely timed out) (batchId=345)
TestMetaStoreEventListenerOnlyOnCommit - did not produce a TEST-*.xml file (likely timed out) (batchId=348)
TestMetaStoreInitListener - did not produce a TEST-*.xml file
[jira] [Updated] (HIVE-15180) Extend DbNotificationListener to store additional information about Table metadata objects on different table events
[ https://issues.apache.org/jira/browse/HIVE-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vaibhav Gumashta updated HIVE-15180:
------------------------------------

    Summary: Extend DbNotificationListener to store additional information about Table metadata objects on different table events  (was: Update DbNotificationListener/MessageFactory to store additional information about Table metadata objects)

> Extend DbNotificationListener to store additional information about Table
> metadata objects on different table events
>
> Key: HIVE-15180
> URL: https://issues.apache.org/jira/browse/HIVE-15180
> Project: Hive
> Issue Type: Sub-task
> Components: repl
> Reporter: Vaibhav Gumashta
> Assignee: Vaibhav Gumashta
>
> We want the {{NOTIFICATION_LOG}} table to capture additional information
> about the metadata objects when {{DbNotificationListener}} captures different
> events for a table (create/drop/alter).
[jira] [Commented] (HIVE-15057) Support other types of operators (other than SELECT)
[ https://issues.apache.org/jira/browse/HIVE-15057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656952#comment-15656952 ]

Hive QA commented on HIVE-15057:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12838513/HIVE-15057.wip.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 56 failed/errored test(s), 10639 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_add_column2] (batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_add_column3] (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_date] (batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_joins_native] (batchId=79)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_partitioned_native] (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_schema_evolution_native] (batchId=50)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_timestamp] (batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avrocountemptytbl] (batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_map_null] (batchId=75)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[rcfile_createas1] (batchId=133)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[rcfile_merge2] (batchId=134)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[rcfile_merge3] (batchId=135)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[rcfile_merge4] (batchId=131)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[remote_script] (batchId=134)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_16] (batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[create_merge_compressed] (batchId=143)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] (batchId=142)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join_acid_non_acid] (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lateral_view] (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sample10] (batchId=144)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_part_update] (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[smb_mapjoin_4] (batchId=139)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[smb_mapjoin_5] (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[smb_mapjoin_6] (batchId=141)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_empty] (batchId=142)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=145)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_round] (batchId=142)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_bucketmapjoin1] (batchId=140)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_rcfile_columnar] (batchId=142)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[auto_sortmerge_join_16] (batchId=157)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[remote_script] (batchId=157)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_4] (batchId=91)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_16] (batchId=114)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[create_merge_compressed] (batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_rc] (batchId=96)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[list_bucket_dml_2] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ptf_rcfile] (batchId=107)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[rcfile_bigdata] (batchId=101)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[sample10] (batchId=112)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_1] (batchId=113)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_2] (batchId=116)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_3] (batchId=102)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_4] (batchId=102)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_5] (batchId=130)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_6] (batchId=106)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_8] (batchId=108)
[jira] [Commented] (HIVE-15082) Hive-1.2 cannot read data from complex data types with TIMESTAMP column, stored in Parquet
[ https://issues.apache.org/jira/browse/HIVE-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656755#comment-15656755 ]

Oleksiy Sayankin commented on HIVE-15082:
-----------------------------------------

I added to file name and yes, I canceled the patch and resubmitted it again each time I attached a new patch to the issue.

> Hive-1.2 cannot read data from complex data types with TIMESTAMP column,
> stored in Parquet
>
> Key: HIVE-15082
> URL: https://issues.apache.org/jira/browse/HIVE-15082
> Project: Hive
> Issue Type: Bug
> Affects Versions: 1.2.0
> Reporter: Oleksiy Sayankin
> Assignee: Oleksiy Sayankin
> Priority: Blocker
> Attachments: HIVE-15082-branch-1.2.patch, HIVE-15082-branch-1.patch, HIVE-15082.1-branch-1.2.patch
>
> *STEP 1. Create test data*
> {code:sql}
> select * from dual;
> {code}
> *EXPECTED RESULT:*
> {noformat}
> Pretty_UnIQUe_StrinG
> {noformat}
> {code:sql}
> create table test_parquet1(login timestamp) stored as parquet;
> insert overwrite table test_parquet1 select from_unixtime(unix_timestamp()) from dual;
> select * from test_parquet1 limit 1;
> {code}
> *EXPECTED RESULT:*
> No exceptions. Current timestamp as result.
> {noformat}
> 2016-10-27 10:58:19
> {noformat}
> *STEP 2. Store timestamp in array in parquet file*
> {code:sql}
> create table test_parquet2(x array<timestamp>) stored as parquet;
> insert overwrite table test_parquet2 select array(login) from test_parquet1;
> select * from test_parquet2;
> {code}
> *EXPECTED RESULT:*
> No exceptions. Current timestamp in brackets as result.
> {noformat}
> ["2016-10-27 10:58:19"]
> {noformat}
> *ACTUAL RESULT:*
> {noformat}
> ERROR [main]: CliDriver (SessionState.java:printError(963)) - Failed with exception java.io.IOException:parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs:///user/hive/warehouse/test_parquet2/00_0
> java.io.IOException: parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs:///user/hive/warehouse/test_parquet2/00_0
> {noformat}
> *ROOT-CAUSE:*
> Incorrect initialization of the {{metadata}} {{HashMap}} causes it to be
> {{null}} in the enumeration
> {{org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter}} when
> executing the following line:
> {code:java}
> boolean skipConversion = Boolean.valueOf(metadata.get(HiveConf.ConfVars.HIVE_PARQUET_TIMESTAMP_SKIP_CONVERSION.varname));
> {code}
> in element {{ETIMESTAMP_CONVERTER}}.
> The JVM throws an NPE, so the parquet library in turn cannot read data from the file and throws
> {noformat}
> java.io.IOException:parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs:///user/hive/warehouse/test_parquet2/00_0
> {noformat}
> *SOLUTION:*
> Perform the initialization in a separate method so it is not overridden with a {{null}} value in this block of code:
> {code:java}
> if (parent != null) {
>   setMetadata(parent.getMetadata());
> }
> {code}
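The buggy and fixed initialization patterns described in the root-cause and solution above can be sketched in isolation. This is a simplified illustration, not the actual ETypeConverter code; the class and method names below are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the converter metadata initialization described
// above; the real code lives in
// org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.
public class MetadataInit {

    // Buggy pattern: the freshly created map is unconditionally replaced by
    // the parent's map, which may be null -- any later metadata.get(...)
    // call then throws NullPointerException.
    static Map<String, String> inheritBuggy(Map<String, String> parentMeta) {
        Map<String, String> metadata = new HashMap<>();
        metadata = parentMeta;
        return metadata;
    }

    // Fixed pattern (the "initialize in a separate method" idea): keep the
    // map that was created once, and only copy entries that actually exist.
    static Map<String, String> inheritFixed(Map<String, String> parentMeta) {
        Map<String, String> metadata = new HashMap<>();
        if (parentMeta != null) {
            metadata.putAll(parentMeta);
        }
        return metadata;
    }

    public static void main(String[] args) {
        System.out.println(inheritBuggy(null) == null);   // true: a get() on this would NPE
        System.out.println(inheritFixed(null).isEmpty()); // true: safe empty map instead
    }
}
```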
[jira] [Commented] (HIVE-15148) disallow loading data into bucketed tables (by default)
[ https://issues.apache.org/jira/browse/HIVE-15148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15656460#comment-15656460 ]

Hive QA commented on HIVE-15148:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12838482/HIVE-15148.01.patch

{color:green}SUCCESS:{color} +1 due to 95 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 13 failed/errored test(s), 10637 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_1] (batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_2] (batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_orig_table] (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_orig_table] (batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=56)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[explainuser_2] (batchId=134)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] (batchId=131)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_orig_table] (batchId=147)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join_acid_non_acid] (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=145)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=91)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket_map_join_1] (batchId=120)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket_map_join_2] (batchId=116)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/2079/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/2079/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-2079/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 13 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12838482 - PreCommit-HIVE-Build

> disallow loading data into bucketed tables (by default)
>
> Key: HIVE-15148
> URL: https://issues.apache.org/jira/browse/HIVE-15148
> Project: Hive
> Issue Type: Bug
> Reporter: Sergey Shelukhin
> Assignee: Sergey Shelukhin
> Attachments: HIVE-15148.01.patch, HIVE-15148.patch
>
> A few q file tests still use the following, allowed, pattern:
> {noformat}
> CREATE TABLE bucket_small (key string, value string) partitioned by (ds string) CLUSTERED BY (key) INTO 2 BUCKETS STORED AS TEXTFILE;
> load data local inpath '../../data/files/smallsrcsortbucket1outof4.txt' INTO TABLE bucket_small partition(ds='2008-04-08');
> load data local inpath '../../data/files/smallsrcsortbucket2outof4.txt' INTO TABLE bucket_small partition(ds='2008-04-08');
> {noformat}
> This relies on the user to load the correct number of files with correctly
> hashed data and the correct order of file names; if there's some discrepancy
> in any of the above, the queries will fail or may produce incorrect results
> if some bucket-based optimizations kick in.
> Additionally, even if the user does everything correctly, as far as I know
> some code derives the bucket number from the file name, which won't work in
> this case (as opposed to getting buckets based on the order of files, which
> will work here but won't work as per HIVE-14970... sigh).
> Hive enforces bucketing in other cases (the check cannot even be disabled
> these days), so I suggest that we either prohibit the above outright, or at
> least add a safety config setting that would disallow it by default.
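If the proposed safety setting lands, the quoted q-file pattern would presumably migrate from LOAD DATA to an insert that goes through Hive's own bucketing enforcement. A hypothetical rewrite of the pattern above (the staging table name is invented):

```sql
-- Stage the raw text files in a plain, unbucketed table first.
CREATE TABLE bucket_small_stage (key string, value string);
LOAD DATA LOCAL INPATH '../../data/files/smallsrcsortbucket1outof4.txt' INTO TABLE bucket_small_stage;
LOAD DATA LOCAL INPATH '../../data/files/smallsrcsortbucket2outof4.txt' INTO TABLE bucket_small_stage;

-- Then let Hive hash and write the buckets itself, so file count, file
-- naming, and hashing can no longer disagree with the table definition.
INSERT OVERWRITE TABLE bucket_small PARTITION (ds='2008-04-08')
SELECT key, value FROM bucket_small_stage;
```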