[jira] [Updated] (HIVE-27114) Provide a configurable filter for removing useless properties in Partition objects from getPartitions HMS Calls
[ https://issues.apache.org/jira/browse/HIVE-27114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27114: -- Summary: Provide a configurable filter for removing useless properties in Partition objects from getPartitions HMS Calls (was: Provide a configurable filter for removing useless properties from PartitionDesc objects from getPartitions HMS Calls) > Provide a configurable filter for removing useless properties in Partition > objects from getPartitions HMS Calls > --- > > Key: HIVE-27114 > URL: https://issues.apache.org/jira/browse/HIVE-27114 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Priority: Major > > HMS API calls are throwing following exception because of thrift upgrade > > {code:java} > org.apache.thrift.transport.TTransportException: MaxMessageSize reached > at > org.apache.thrift.transport.TEndpointTransport.countConsumedMessageBytes(TEndpointTransport.java:96) > > at > org.apache.thrift.transport.TMemoryInputTransport.read(TMemoryInputTransport.java:97) > > at > org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:390) > at > org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:39) > > at > org.apache.thrift.transport.TTransport.readAll(TTransport.java:109) > at > org.apache.hadoop.hive.metastore.security.TFilterTransport.readAll(TFilterTransport.java:63) > > at > org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:417) > > at > org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:411) > > at > org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.read(Partition.java:1286) > > at > org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.read(Partition.java:1205) > > at > org.apache.hadoop.hive.metastore.api.Partition.read(Partition.java:1062) > at > 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result.read(ThriftHiveMetastore.java) > at > org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:88) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions(ThriftHiveMetastore.java:3290) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions(ThriftHiveMetastore.java:3275) > {code} > Large partition metadata is causing this issue. > e.g., Impala stores a huge stats chunk in the partition metadata with *param_keys = (impala_intermediate_stats_chunk)*; these PARTITION_PARAM_KEYS entries are not required by Hive. These params should be skipped while preparing the partition object sent from HMS to HS2. > Similarly, any user-defined regex of param_keys should be skipped in the getPartitions HMS API call, similar to HIVE-25501. -- This message was sent by Atlassian Jira (v8.20.10#820010)
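A server-side filter along these lines would drop the oversized parameters before Partition objects are serialized back to the client. The sketch below is illustrative only — the class name and the config-driven constructor are hypothetical, not actual Hive API — but it shows the core idea of applying a configurable exclude regex to a partition's parameter map:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class PartitionParamFilter {

    // Hypothetical: in HMS this regex would come from a metastore config
    // property rather than a constructor argument.
    private final Pattern excludePattern;

    public PartitionParamFilter(String excludeRegex) {
        this.excludePattern = Pattern.compile(excludeRegex);
    }

    /** Returns a copy of the parameter map without keys matching the exclude regex. */
    public Map<String, String> filter(Map<String, String> params) {
        Map<String, String> kept = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (!excludePattern.matcher(e.getKey()).matches()) {
                kept.put(e.getKey(), e.getValue());
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("impala_intermediate_stats_chunk1", "<large binary stats>");
        params.put("impala_intermediate_stats_num_chunks", "2");
        params.put("transient_lastDdlTime", "1679000000");

        PartitionParamFilter f = new PartitionParamFilter("impala_intermediate_stats.*");
        // Only the non-Impala parameter survives the filter.
        System.out.println(f.filter(params).keySet()); // prints [transient_lastDdlTime]
    }
}
```

Running the filter once per Partition before writing the Thrift response would keep each message well under the transport's MaxMessageSize for cases like the Impala stats chunks above.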
[jira] [Updated] (HIVE-27114) Provide a configurable filter for removing useless properties in Partition objects from getPartitions HMS Calls
[ https://issues.apache.org/jira/browse/HIVE-27114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27114: -- Description: HMS API calls are throwing following exception because of thrift upgrade {code:java} org.apache.thrift.transport.TTransportException: MaxMessageSize reached at org.apache.thrift.transport.TEndpointTransport.countConsumedMessageBytes(TEndpointTransport.java:96) at org.apache.thrift.transport.TMemoryInputTransport.read(TMemoryInputTransport.java:97) at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:390) at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:39) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:109) at org.apache.hadoop.hive.metastore.security.TFilterTransport.readAll(TFilterTransport.java:63) at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:417) at org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:411) at org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.read(Partition.java:1286) at org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.read(Partition.java:1205) at org.apache.hadoop.hive.metastore.api.Partition.read(Partition.java:1062) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result.read(ThriftHiveMetastore.java) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:88) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions(ThriftHiveMetastore.java:3290) at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions(ThriftHiveMetastore.java:3275) {code} Large partition metadata is causing this issue. e.g., Impala stores a huge stats chunk in the partition metadata with *param_keys = (impala_intermediate_stats_chunk)*; these PARTITION_PARAM_KEYS entries are not required by Hive. These params should be skipped while preparing the partition object sent from HMS to HS2. Similar to HIVE-25501, any user-defined regex of param_keys should be skipped in the getPartitions HMS API call response.
[jira] [Updated] (HIVE-27114) Provide a configurable filter for removing useless properties in Partition objects from getPartitions HMS Calls
[ https://issues.apache.org/jira/browse/HIVE-27114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27114: -- Description: HMS API calls are throwing following exception because of thrift upgrade {code:java} org.apache.thrift.transport.TTransportException: MaxMessageSize reached at org.apache.thrift.transport.TEndpointTransport.countConsumedMessageBytes(TEndpointTransport.java:96) at org.apache.thrift.transport.TMemoryInputTransport.read(TMemoryInputTransport.java:97) at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:390) at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:39) at org.apache.thrift.transport.TTransport.readAll(TTransport.java:109) at org.apache.hadoop.hive.metastore.security.TFilterTransport.readAll(TFilterTransport.java:63) at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:417) at org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:411) at org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.read(Partition.java:1286) at org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.read(Partition.java:1205) at org.apache.hadoop.hive.metastore.api.Partition.read(Partition.java:1062) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result.read(ThriftHiveMetastore.java) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:88) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions(ThriftHiveMetastore.java:3290) at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions(ThriftHiveMetastore.java:3275) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitions(HiveMetaStoreClient.java:1782) at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.listPartitions(SessionHiveMetaStoreClient.java:1134) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitions(HiveMetaStoreClient.java:1775) at sun.reflect.GeneratedMethodAccessor169.invoke(Unknown Source) ~[?:?] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_311] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_311] at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:213) at com.sun.proxy.$Proxy52.listPartitions(Unknown Source) ~[?:?] at sun.reflect.GeneratedMethodAccessor169.invoke(Unknown Source) ~[?:?] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_311] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_311] at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:3550) at com.sun.proxy.$Proxy52.listPartitions(Unknown Source) ~[?:?] at org.apache.hadoop.hive.ql.metadata.Hive.getAllPartitionsOf(Hive.java:3793) at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.getAllPartitions(PartitionPruner.java:485) {code} Large partition metadata is causing this issue. e.g., Impala stores a huge stats chunk in the partition metadata with *param_keys = (impala_intermediate_stats_chunk)*; these PARTITION_PARAM_KEYS entries are not required by Hive. These params should be skipped while preparing the partition object sent from HMS to HS2. Similar to HIVE-25501, any user-defined regex of param_keys should be skipped in the listPartitions HMS API call response.
[jira] [Updated] (HIVE-27114) Provide a configurable filter for removing useless properties in Partition objects from listPartitions HMS Calls
[ https://issues.apache.org/jira/browse/HIVE-27114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27114: -- Summary: Provide a configurable filter for removing useless properties in Partition objects from listPartitions HMS Calls (was: Provide a configurable filter for removing useless properties in Partition objects from getPartitions HMS Calls) > Provide a configurable filter for removing useless properties in Partition > objects from listPartitions HMS Calls > > > Key: HIVE-27114 > URL: https://issues.apache.org/jira/browse/HIVE-27114 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Priority: Major > > HMS API calls are throwing following exception because of thrift upgrade > {code:java} > org.apache.thrift.transport.TTransportException: MaxMessageSize reached > at > org.apache.thrift.transport.TEndpointTransport.countConsumedMessageBytes(TEndpointTransport.java:96) > > at > org.apache.thrift.transport.TMemoryInputTransport.read(TMemoryInputTransport.java:97) > > at > org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:390) > at > org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:39) > > at > org.apache.thrift.transport.TTransport.readAll(TTransport.java:109) > at > org.apache.hadoop.hive.metastore.security.TFilterTransport.readAll(TFilterTransport.java:63) > > at > org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:417) > > at > org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:411) > > at > org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.read(Partition.java:1286) > > at > org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.read(Partition.java:1205) > > at > org.apache.hadoop.hive.metastore.api.Partition.read(Partition.java:1062) > at > 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java) > > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.read(ThriftHiveMetastore.java) > > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result.read(ThriftHiveMetastore.java) > > at > org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:88) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions(ThriftHiveMetastore.java:3290) > > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions(ThriftHiveMetastore.java:3275) > > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitions(HiveMetaStoreClient.java:1782) > > at > org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.listPartitions(SessionHiveMetaStoreClient.java:1134) > > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitions(HiveMetaStoreClient.java:1775) > > at sun.reflect.GeneratedMethodAccessor169.invoke(Unknown Source) > ~[?:?] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_311] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_311] > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:213) > > at com.sun.proxy.$Proxy52.listPartitions(Unknown Source) ~[?:?] > at sun.reflect.GeneratedMethodAccessor169.invoke(Unknown Source) > ~[?:?] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_311] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_311] > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:3550) > > at com.sun.proxy.$Proxy52.listPartitions(Unknown Source) ~[?:?] 
> at > org.apache.hadoop.hive.ql.metadata.Hive.getAllPartitionsOf(Hive.java:3793) > at > org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.getAllPartitions(PartitionPruner.java:485) > {code} > Large partition metadata is causing this issue. > e.g., Impala stores a huge stats chunk in the partition metadata with *param_keys = (impala_intermediate_stats_chunk)*; these PARTITION_PARAM_KEYS entries are not required by Hive. These params should be skipped while preparing the partition object sent from HMS to HS2. > Similar to HIVE-25501, any user-defined regex of param_keys should be skipped in the > listPartitions HMS
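The "user defined regex" part of the request could be handled by accepting a comma-separated list of patterns and folding them into a single alternation, in the spirit of how HIVE-25501 handled excluded table properties. A minimal, hypothetical sketch (the class name and config shape are assumptions, not actual Hive code):

```java
import java.util.Arrays;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class ExcludePatternConfig {

    /**
     * Folds a comma-separated list of user-supplied regexes into one
     * alternation, so a single matcher can test every partition parameter key.
     * Returns null when the config is empty, meaning "filter nothing".
     */
    public static Pattern compile(String commaSeparatedRegexes) {
        if (commaSeparatedRegexes == null || commaSeparatedRegexes.trim().isEmpty()) {
            return null;
        }
        String alternation = Arrays.stream(commaSeparatedRegexes.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .map(s -> "(?:" + s + ")")   // group each pattern before joining with |
                .collect(Collectors.joining("|"));
        return Pattern.compile(alternation);
    }

    public static void main(String[] args) {
        Pattern p = compile("impala_intermediate_stats.*, some_vendor_prop_.*");
        System.out.println(p.matcher("impala_intermediate_stats_chunk1").matches()); // true
        System.out.println(p.matcher("transient_lastDdlTime").matches());            // false
    }
}
```

Compiling the combined pattern once at config-load time keeps the per-key check in the listPartitions path cheap.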
[jira] [Updated] (HIVE-27164) Create Temp Txn Table As Select is failing at tablePath validation
[ https://issues.apache.org/jira/browse/HIVE-27164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27164: -- Description: After HIVE-25303, every CTAS issues a HiveMetaStore$HMSHandler#translate_table_dryrun() call to fetch the table location, which fails with the following exception for temp tables if MetastoreDefaultTransformer is set. {code:java} 2023-03-17 16:41:23,390 INFO org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer: [pool-6-thread-196]: Starting translation for CreateTable for processor HMSClient-@localhost with [EXTWRITE, EXTREAD, HIVEBUCKET2, HIVEFULLACIDREAD, HIVEFULLACIDWRITE, HIVECACHEINVALIDATE, HIVEMANAGESTATS, HIVEMANAGEDINSERTWRITE, HIVEMANAGEDINSERTREAD, HIVESQL, HIVEMQT, HIVEONLYMQTWRITE] on table test_temp 2023-03-17 16:41:23,392 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-6-thread-196]: MetaException(message:Illegal location for managed table, it has to be within database's managed location) at org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.validateTablePaths(MetastoreDefaultTransformer.java:886) at org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.transformCreateTable(MetastoreDefaultTransformer.java:666) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.translate_table_dryrun(HiveMetaStore.java:2164) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) {code} I am able to repro this issue at apache upstream using the attached testcase. [^mm_cttas.q] There are multiple ways to fix this issue: * Have the temp txn table path under the db's managed location path. This will help with encryption zone tables as well. 
* Skip the location check for temp tables at MetastoreDefaultTransformer#validateTablePaths()
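The second fix option above amounts to a guard clause: bail out of path validation when the table is temporary. The class and field names below are simplified stand-ins, not the actual MetastoreDefaultTransformer code:

```java
public class TablePathValidator {

    /** Simplified stand-in for a metastore Table object. */
    static class Table {
        final String location;
        final boolean temporary;
        Table(String location, boolean temporary) {
            this.location = location;
            this.temporary = temporary;
        }
    }

    /**
     * Returns true when the managed-table location is acceptable.
     * Temp tables live under scratch dirs, so they are exempted from the
     * "must be inside the database's managed location" rule.
     */
    public static boolean isValidManagedPath(Table t, String dbManagedLocation) {
        if (t.temporary) {
            return true; // fix option 2: skip the check entirely for temp tables
        }
        return t.location != null && t.location.startsWith(dbManagedLocation);
    }

    public static void main(String[] args) {
        Table temp = new Table("/tmp/hive/scratch/test_temp", true);
        Table managed = new Table("/warehouse/managed/db1/t1", false);
        System.out.println(isValidManagedPath(temp, "/warehouse/managed/db1"));    // true
        System.out.println(isValidManagedPath(managed, "/warehouse/managed/db1")); // true
    }
}
```

The first option (placing temp txn table paths under the db's managed location) would instead make the existing check pass without changing the validator.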
[jira] [Updated] (HIVE-27164) Create Temp Txn Table As Select is failing at tablePath validation
[ https://issues.apache.org/jira/browse/HIVE-27164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27164: -- Description: After HIVE-25303, every CTAS issues a HiveMetaStore$HMSHandler#translate_table_dryrun() call to fetch the table location, which fails with the following exception for temp tables if MetastoreDefaultTransformer is set. {code:java} 2023-03-17 16:41:23,390 INFO org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer: [pool-6-thread-196]: Starting translation for CreateTable for processor HMSClient-@localhost with [EXTWRITE, EXTREAD, HIVEBUCKET2, HIVEFULLACIDREAD, HIVEFULLACIDWRITE, HIVECACHEINVALIDATE, HIVEMANAGESTATS, HIVEMANAGEDINSERTWRITE, HIVEMANAGEDINSERTREAD, HIVESQL, HIVEMQT, HIVEONLYMQTWRITE] on table test_temp 2023-03-17 16:41:23,392 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-6-thread-196]: MetaException(message:Illegal location for managed table, it has to be within database's managed location) at org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.validateTablePaths(MetastoreDefaultTransformer.java:886) at org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.transformCreateTable(MetastoreDefaultTransformer.java:666) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.translate_table_dryrun(HiveMetaStore.java:2164) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) {code} I am able to repro this issue at apache upstream using the attached testcase. [^mm_cttas.q] There are multiple ways to fix this issue: * Have the temp txn table path under the db's managed location path. This will help with encryption zone tables as well. 
* Skip the location check for temp tables at MetastoreDefaultTransformer#validateTablePaths()
[jira] [Updated] (HIVE-27164) Create Temp Txn Table As Select is failing at tablePath validation
[ https://issues.apache.org/jira/browse/HIVE-27164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27164: -- Attachment: mm_cttas.q > Create Temp Txn Table As Select is failing at tablePath validation > -- > > Key: HIVE-27164 > URL: https://issues.apache.org/jira/browse/HIVE-27164 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Metastore >Reporter: Naresh P R >Priority: Major > Attachments: mm_cttas.q > > > After HIVE-25303, every CTAS goes for > HiveMetaStore$HMSHandler#translate_table_dryrun() call to fetch table > location for CTAS queries which fails with following exception for temp > tables if MetastoreDefaultTransformer is set. > {code:java} > 2023-03-17 16:41:23,390 INFO > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer: > [pool-6-thread-196]: Starting translation for CreateTable for processor > HMSClient-@localhost with [EXTWRITE, EXTREAD, HIVEBUCKET2, HIVEFULLACIDREAD, > HIVEFULLACIDWRITE, HIVECACHEINVALIDATE, HIVEMANAGESTATS, > HIVEMANAGEDINSERTWRITE, HIVEMANAGEDINSERTREAD, HIVESQL, HIVEMQT, > HIVEONLYMQTWRITE] on table test_temp > 2023-03-17 16:41:23,392 ERROR > org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-6-thread-196]: > MetaException(message:Illegal location for managed table, it has to be within > database's managed location) > at > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.validateTablePaths(MetastoreDefaultTransformer.java:886) > at > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.transformCreateTable(MetastoreDefaultTransformer.java:666) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.translate_table_dryrun(HiveMetaStore.java:2164) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) {code} > I am able to repro this issue at apache upstream using attached testcase. > > There are multiple ways to fix this issue > * Have temp txn table path under db's managed location path. 
This will help > with encryption zone tables as well. > * Skip the location check for temp tables at > MetastoreDefaultTransformer#validateTablePaths()
[jira] [Updated] (HIVE-27164) Create Temp Txn Table As Select is failing at tablePath validation
[ https://issues.apache.org/jira/browse/HIVE-27164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27164: -- Description: After HIVE-25303, every CTAS goes for HiveMetaStore$HMSHandler#translate_table_dryrun() call to fetch table location for CTAS queries which fails with following exception for temp tables if MetastoreDefaultTransformer is set. {code:java} 2023-03-17 16:41:23,390 INFO org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer: [pool-6-thread-196]: Starting translation for CreateTable for processor HMSClient-@localhost with [EXTWRITE, EXTREAD, HIVEBUCKET2, HIVEFULLACIDREAD, HIVEFULLACIDWRITE, HIVECACHEINVALIDATE, HIVEMANAGESTATS, HIVEMANAGEDINSERTWRITE, HIVEMANAGEDINSERTREAD, HIVESQL, HIVEMQT, HIVEONLYMQTWRITE] on table test_temp 2023-03-17 16:41:23,392 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-6-thread-196]: MetaException(message:Illegal location for managed table, it has to be within database's managed location) at org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.validateTablePaths(MetastoreDefaultTransformer.java:886) at org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.transformCreateTable(MetastoreDefaultTransformer.java:666) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.translate_table_dryrun(HiveMetaStore.java:2164) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) {code} I am able to repro this issue at apache upstream using attached testcase. [^mm_cttas.q] There are multiple ways to fix this issue * Have temp txn table path under db's managed location path. This will help with encryption zone paths as well. * Skip location check for temp tables at MetastoreDefaultTransformer#validateTablePaths() was: After HIVE-25303, every CTAS goes for HiveMetaStore$HMSHandler#translate_table_dryrun() call to fetch table location for CTAS queries which fails with following exception for temp tables if MetastoreDefaultTransformer is set. 
{code:java} 2023-03-17 16:41:23,390 INFO org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer: [pool-6-thread-196]: Starting translation for CreateTable for processor HMSClient-@localhost with [EXTWRITE, EXTREAD, HIVEBUCKET2, HIVEFULLACIDREAD, HIVEFULLACIDWRITE, HIVECACHEINVALIDATE, HIVEMANAGESTATS, HIVEMANAGEDINSERTWRITE, HIVEMANAGEDINSERTREAD, HIVESQL, HIVEMQT, HIVEONLYMQTWRITE] on table test_temp 2023-03-17 16:41:23,392 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-6-thread-196]: MetaException(message:Illegal location for managed table, it has to be within database's managed location) at org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.validateTablePaths(MetastoreDefaultTransformer.java:886) at org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.transformCreateTable(MetastoreDefaultTransformer.java:666) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.translate_table_dryrun(HiveMetaStore.java:2164) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) {code} I am able to repro this issue at apache upstream using attached testcase. [^mm_cttas.q] There are multiple ways to fix this issue * Have temp txn table path under db's managed location path. This will help with encryption zone tables as well. * Skip location check for temp tables at MetastoreDefaultTransformer#validateTablePaths() > Create Temp Txn Table As Select is failing at tablePath validation > -- > > Key: HIVE-27164 > URL: https://issues.apache.org/jira/browse/HIVE-27164 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Metastore >Reporter: Naresh P R >Priority: Major > Attachments: mm_cttas.q > > > After HIVE-25303, every CTAS goes for > HiveMetaStore$HMSHandler#translate_table_dryrun() call to fetch table > location for CTAS queries which fails with following exception for temp > tables if MetastoreDefaultTransformer is set. 
> {code:java} > 2023-03-17 16:41:23,390 INFO > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer: > [pool-6-thread-196]: Starting translation for CreateTable for processor > HMSClient-@localhost with [EXTWRITE, EXTREAD, HIVEBUCKET2, HIVEFULLACIDREAD, > HIVEFULLACIDWRITE, HIVECACHEINVALIDATE, HIVEMANAGESTATS, > HIVEMANAGEDINSERTWRITE, HIVEMANAGEDINSERTREAD, HIVESQL, HIVEMQT, > HIVEONLYMQTWRITE] on table test_temp > 2023-03-17 16:41:23,392 ERROR > org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-6-thread-196]: > MetaException(message:Illegal location for managed table, it has to be within > database's managed location) > at > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.validateTablePaths(MetastoreDefaultTra
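The second proposed fix above (skip the location check for temp tables) can be sketched as follows. This is a minimal, standalone illustration, not Hive source: the class and method names only loosely mirror MetastoreDefaultTransformer#validateTablePaths, and the paths are made up.

```java
// Hedged sketch of the proposed fix: temporary tables bypass the
// managed-location validation; regular managed tables are still checked.
public class TablePathCheckSketch {
    static boolean isIllegalManagedLocation(String tablePath,
                                            String dbManagedPath,
                                            boolean isTemporary) {
        if (isTemporary) {
            return false; // proposed: temp CTAS targets skip the check
        }
        // A managed table must live under the database's managed location.
        return !tablePath.startsWith(dbManagedPath);
    }

    public static void main(String[] args) {
        // Temp CTAS target outside the managed root no longer fails.
        System.out.println(isIllegalManagedLocation(
                "/tmp/hive/_tmp_space/test_temp",
                "/warehouse/tablespace/managed/hive/db1.db", true));
        // Non-temp managed tables are still validated.
        System.out.println(isIllegalManagedLocation(
                "/some/other/path/t1",
                "/warehouse/tablespace/managed/hive/db1.db", false));
    }
}
```

The first proposed fix (placing the temp table path under the db's managed location) would instead make `startsWith` succeed without special-casing temp tables.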
[jira] [Assigned] (HIVE-22478) Import command fails from lower version to higher version when hive.strict.managed.tables enabled
[ https://issues.apache.org/jira/browse/HIVE-22478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R reassigned HIVE-22478: - > Import command fails from lower version to higher version when > hive.strict.managed.tables enabled > - > > Key: HIVE-22478 > URL: https://issues.apache.org/jira/browse/HIVE-22478 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > Created a non-ACID managed ORC table in the lower version and, after inserting > some records, exported the table. > In the higher version where hive.strict.managed.tables=true, > 1) on the first attempt, the ACID table is created, but LoadTable fails > with the exception below > {code:java} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MoveTask : Write > id is not set in the config by open txn task for migration > at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:400) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2712) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2383) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2055) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1753) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1747) > at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226){code} > 2) On the second attempt, as the table already exists as ACID, > ImportSemanticAnalyzer creates a writeId for the ACID table & the LoadTable > command succeeds. -- This message was sent by Atlassian Jira (v8.3.4#803005)
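The two-attempt behavior described above can be sketched with a small standalone program. This is not Hive source: the config key `hive.import.migration.writeid` and the class name are hypothetical stand-ins; only the exception message is quoted from the report.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the failure mode: LoadTable (via MoveTask) requires a
// write id in the config, but on the first import attempt no open-txn task
// allocated one for the freshly migrated ACID table.
public class WriteIdSketch {
    static long loadTable(Map<String, Long> conf) {
        Long writeId = conf.get("hive.import.migration.writeid"); // hypothetical key
        if (writeId == null) {
            throw new IllegalStateException(
                "MoveTask : Write id is not set in the config by open txn task for migration");
        }
        return writeId;
    }

    public static void main(String[] args) {
        Map<String, Long> conf = new HashMap<>();
        // First attempt: table was just converted to ACID, no write id allocated.
        try {
            loadTable(conf);
        } catch (IllegalStateException expected) {
            System.out.println("first attempt fails: " + expected.getMessage());
        }
        // Second attempt: the table already exists as ACID, so
        // ImportSemanticAnalyzer allocates a write id and the load succeeds.
        conf.put("hive.import.migration.writeid", 1L);
        System.out.println("second attempt write id = " + loadTable(conf));
    }
}
```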
[jira] [Work started] (HIVE-22478) Import command fails from lower version to higher version when hive.strict.managed.tables enabled
[ https://issues.apache.org/jira/browse/HIVE-22478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-22478 started by Naresh P R. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22478) Import command fails from lower version to higher version when hive.strict.managed.tables enabled
[ https://issues.apache.org/jira/browse/HIVE-22478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22478: -- Attachment: HIVE-22478.1.patch Affects Version/s: 3.1.0 Status: Patch Available (was: In Progress) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22478) Import command fails from lower version to higher version when hive.strict.managed.tables enabled
[ https://issues.apache.org/jira/browse/HIVE-22478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22478: -- Attachment: HIVE-22478.2.patch -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22478) Import command fails from lower version to higher version when hive.strict.managed.tables enabled
[ https://issues.apache.org/jira/browse/HIVE-22478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22478: -- Attachment: HIVE-22478.3.patch -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22478) Import command fails from lower version to higher version when hive.strict.managed.tables enabled
[ https://issues.apache.org/jira/browse/HIVE-22478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978668#comment-16978668 ] Naresh P R commented on HIVE-22478: --- [~szita] I am able to repro this issue with the testcase in the attached 3.patch. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22478) Import command fails from lower version to higher version when hive.strict.managed.tables enabled
[ https://issues.apache.org/jira/browse/HIVE-22478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22478: -- Attachment: HIVE-22478.4.patch -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22400) UDF minute with time returns NULL
[ https://issues.apache.org/jira/browse/HIVE-22400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22400: -- Attachment: HIVE-22400.1.patch > UDF minute with time returns NULL > - > > Key: HIVE-22400 > URL: https://issues.apache.org/jira/browse/HIVE-22400 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 3.1.0 >Reporter: Nikunj >Assignee: Naresh P R >Priority: Minor > Attachments: HIVE-22400.1.patch, HIVE-22400.patch > > > [impadmin@impetus-g031 ~]$ beeline > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/usr/hdp/3.1.0.0-78/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Connecting to > jdbc:hive2://impetus-dsrv11.impetus.co.in:2181,ct-n0066.impetus.co.in:2181,ct-n0092.impetus.co.in:2181/default;principal=hive/_h...@impetus.co.in;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2 > 19/10/24 19:03:42 [main]: INFO jdbc.HiveConnection: Connected to > ct-n0092.impetus.co.in:1 > Connected to: Apache Hive (version 3.1.0.3.1.0.0-78) > Driver: Hive JDBC (version 3.1.0.3.1.0.0-78) > Transaction isolation: TRANSACTION_REPEATABLE_READ > Beeline version 3.1.0.3.1.0.0-78 by Apache Hive > 0: jdbc:hive2://impetus-dsrv11.impetus.co.in:> select minute('12:58:59'); > INFO : Compiling > command(queryId=hive_20191024190401_bc517191-bd20-4f5a-b5f5-44f762c2d395): > select minute('12:58:59') > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, > type:int, comment:null)], properties:null) > INFO : Completed compiling > 
command(queryId=hive_20191024190401_bc517191-bd20-4f5a-b5f5-44f762c2d395); > Time taken: 0.427 seconds > INFO : Executing > command(queryId=hive_20191024190401_bc517191-bd20-4f5a-b5f5-44f762c2d395): > select minute('12:58:59') > INFO : Completed executing > command(queryId=hive_20191024190401_bc517191-bd20-4f5a-b5f5-44f762c2d395); > Time taken: 0.003 seconds > INFO : OK > +---+ > | _c0 | > +---+ > | NULL | > +---+ > 1 row selected (0.739 seconds) > 0: jdbc:hive2://impetus-dsrv11.impetus.co.in:> -- This message was sent by Atlassian Jira (v8.3.4#803005)
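The NULL above comes from `minute('12:58:59')`: a bare time string does not parse as a full timestamp. A hedged sketch of the fix direction, using `java.time` rather than Hive's actual UDFMinute implementation (the fallback-ordering shown is illustrative):

```java
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.format.DateTimeParseException;

// Sketch: try full-timestamp parsing first, then fall back to a time-only
// pattern, and only return null (Hive's NULL) when neither parses.
public class MinuteUdfSketch {
    static Integer minute(String s) {
        try {
            // '2019-10-24 12:58:59' -> '2019-10-24T12:58:59' for ISO parsing
            return LocalDateTime.parse(s.replace(' ', 'T')).getMinute();
        } catch (DateTimeParseException e) {
            try {
                return LocalTime.parse(s).getMinute(); // time-only fallback
            } catch (DateTimeParseException e2) {
                return null; // unparseable input still yields NULL
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(minute("12:58:59"));            // 58 with the fallback
        System.out.println(minute("2019-10-24 12:58:59")); // 58
        System.out.println(minute("garbage"));             // null
    }
}
```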
[jira] [Updated] (HIVE-22400) UDF minute with time returns NULL
[ https://issues.apache.org/jira/browse/HIVE-22400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22400: -- Attachment: HIVE-22400.2.patch -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22400) UDF minute with time returns NULL
[ https://issues.apache.org/jira/browse/HIVE-22400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22400: -- Attachment: HIVE-22400.3.patch -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-27280) Query on View fails in SemanticAnalyzer if view select has grouping sets
Naresh P R created HIVE-27280: - Summary: Query on View fails in SemanticAnalyzer if view select has grouping sets Key: HIVE-27280 URL: https://issues.apache.org/jira/browse/HIVE-27280 Project: Hive Issue Type: Bug Reporter: Naresh P R Attachments: test14.q The view definition is not rewritten with the proper table alias for grouping() UDF columns, causing compilation failures with the following trace. {code:java} java.lang.RuntimeException: Expression in GROUPING function not present in GROUP BY at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer$2.post(SemanticAnalyzer.java:3429) at org.antlr.runtime.tree.TreeVisitor.visit(TreeVisitor.java:66) at org.antlr.runtime.tree.TreeVisitor.visit(TreeVisitor.java:60) at org.antlr.runtime.tree.TreeVisitor.visit(TreeVisitor.java:60) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.rewriteGroupingFunctionAST(SemanticAnalyzer.java:3438) at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.internalGenSelectLogicalPlan(CalcitePlanner.java:4743) at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:4505) at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:5173) {code} Attached Repro file : [^test14.q] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27428) CTAS fails with SemanticException when join subquery has complex type column and false filter predicate
Naresh P R created HIVE-27428: - Summary: CTAS fails with SemanticException when join subquery has complex type column and false filter predicate Key: HIVE-27428 URL: https://issues.apache.org/jira/browse/HIVE-27428 Project: Hive Issue Type: Bug Reporter: Naresh P R Repro steps: {code:java} drop table if exists table1; drop table if exists table2; create table table1 (a string, b string); create table table2 (complex_column array<struct<keys:array<string>, values:array<string>>>); -- CTAS failing query create table table3 as with t1 as (select * from table1), t2 as (select * from table2 where 1=0) select t1.*, t2.* from t1 left join t2;{code} Exception: {code:java} Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: CREATE-TABLE-AS-SELECT creates a VOID type, please use CAST to specify the type, near field: t2.df0rrd_prod_wers_x at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.deriveFileSinkColTypes(SemanticAnalyzer.java:8171) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.deriveFileSinkColTypes(SemanticAnalyzer.java:8129) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFileSinkPlan(SemanticAnalyzer.java:7822) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:11248) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:11120) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:12050) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11916) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:12730) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:722) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12831) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:442) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:300) at 
org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:220) at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:105) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:194) {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
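The VOID-type failure above can be illustrated with a small standalone program. This is not Hive source: `deriveColType` is a hypothetical stand-in for the schema derivation done in SemanticAnalyzer#deriveFileSinkColTypes; only the exception message mirrors the report.

```java
// Sketch: when the t2 branch is folded away (where 1=0), the complex
// column's type is inferred as VOID, and deriving the CTAS target schema
// rejects it with the error seen in the stack trace above.
public class VoidTypeSketch {
    static String deriveColType(String inferredType, String field) {
        if ("void".equals(inferredType)) {
            throw new IllegalArgumentException(
                "CREATE-TABLE-AS-SELECT creates a VOID type, please use CAST to specify the type, near field: "
                + field);
        }
        return inferredType;
    }

    public static void main(String[] args) {
        System.out.println(deriveColType("string", "t1.a")); // ordinary column is fine
        try {
            deriveColType("void", "t2.complex_column");      // folded branch degrades to VOID
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

A CAST in the select list pins the column to a concrete type, which is why the error message suggests it as the workaround.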
[jira] [Updated] (HIVE-27428) CTAS fails with SemanticException when join subquery has complex type column and false filter predicate
[ https://issues.apache.org/jira/browse/HIVE-27428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27428: -- Description: Repro steps: {code:java} drop table if exists table1; drop table if exists table2; create table table1 (a string, b string); create table table2 (complex_column array<struct<keys:array<string>, values:array<string>>>); -- CTAS failing query create table table3 as with t1 as (select * from table1), t2 as (select * from table2 where 1=0) select t1.*, t2.* from t1 left join t2;{code} Exception: {code:java} Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: CREATE-TABLE-AS-SELECT creates a VOID type, please use CAST to specify the type, near field: t2.complex_column at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.deriveFileSinkColTypes(SemanticAnalyzer.java:8171) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.deriveFileSinkColTypes(SemanticAnalyzer.java:8129) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFileSinkPlan(SemanticAnalyzer.java:7822) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:11248) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:11120) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:12050) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11916) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:12730) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:722) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12831) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:442) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:300) at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:220) at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:105) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:194) {code} was: Repro steps: {code:java} drop table if exists table1; drop table if exists table2; create table table1 (a string, b string); create table table2 (complex_column array<struct<keys:array<string>, values:array<string>>>); -- CTAS failing query create table table3 as with t1 as (select * from table1), t2 as (select * from table2 where 1=0) select t1.*, t2.* from t1 left join t2;{code} Exception: {code:java} Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: CREATE-TABLE-AS-SELECT creates a VOID type, please use CAST to specify the type, near field: t2.df0rrd_prod_wers_x at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.deriveFileSinkColTypes(SemanticAnalyzer.java:8171) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.deriveFileSinkColTypes(SemanticAnalyzer.java:8129) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFileSinkPlan(SemanticAnalyzer.java:7822) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:11248) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:11120) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:12050) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11916) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:12730) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:722) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12831) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:442) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:300) at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:220) at 
org.apache.hadoop.hive.ql.Driver.compile(Driver.java:194) {code} > CTAS fails with SemanticException when join subquery has complex type column > and false filter predicate > --- > > Key: HIVE-27428 > URL: https://issues.apache.org/jira/browse/HIVE-27428 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Priority: Major > > Repro steps: > {code:java} > drop table if exists table1; > drop table if exists table2; > create table table1 (a string, b string); > create table table2 (compl
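The failure mode above can be modeled outside Hive: when one join branch is provably empty (the `1=0` filter), its columns constant-fold to typeless NULLs ("void" in Hive terms), and CTAS has no concrete type to materialize; an explicit CAST supplies one, which is exactly what the SemanticException asks for. A toy Python sketch of that derivation (names and logic are illustrative, not Hive's actual planner code):

```python
# Toy model of CTAS column-type derivation. A column coming from a
# known-empty join branch folds to a NULL with no declared type; wrapping
# it in a CAST attaches a concrete type and unblocks the CTAS.
def derive_col_type(value, declared_type=None):
    if declared_type is not None:
        return declared_type       # e.g. CAST(NULL AS array<string>) in the query
    if value is None:
        return "void"              # untyped NULL: Hive rejects this in CTAS
    return type(value).__name__    # a real value carries its own type
```

Under this model, rewriting the failing query so the empty branch's complex column goes through `CAST(... AS <its type>)` would avoid the VOID derivation.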
[jira] [Created] (HIVE-27876) Incorrect query results on tables with ClusterBy & SortBy
Naresh P R created HIVE-27876: - Summary: Incorrect query results on tables with ClusterBy & SortBy Key: HIVE-27876 URL: https://issues.apache.org/jira/browse/HIVE-27876 Project: Hive Issue Type: Bug Reporter: Naresh P R Repro:
{code:java}
create external table test_bucket(age int, name string, dept string) clustered by (age, name) sorted by (age asc, name asc) into 2 buckets stored as orc;
insert into test_bucket values (1, 'user1', 'dept1'), ( 2, 'user2' , 'dept2');
insert into test_bucket values (1, 'user1', 'dept1'), ( 2, 'user2' , 'dept2');
// empty wrong results with default CDP configs
select age, name, count(*) from test_bucket group by age, name having count(*) > 1;
+------+--------+------+
| age  | name   | _c2  |
+------+--------+------+
+------+--------+------+
// Workaround
set hive.map.aggr=false;
select age, name, count(*) from test_bucket group by age, name having count(*) > 1;
+------+--------+------+
| age  | name   | _c2  |
+------+--------+------+
| 1    | user1  | 2    |
| 2    | user2  | 2    |
+------+--------+------+
{code}
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27876) Incorrect query results on tables with ClusterBy & SortBy
[ https://issues.apache.org/jira/browse/HIVE-27876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27876: -- Description: Repro: {code:java} create external table test_bucket(age int, name string, dept string) clustered by (age, name) sorted by (age asc, name asc) into 2 buckets stored as orc; insert into test_bucket values (1, 'user1', 'dept1'), ( 2, 'user2' , 'dept2'); insert into test_bucket values (1, 'user1', 'dept1'), ( 2, 'user2' , 'dept2'); //empty wrong results select age, name, count(*) from test_bucket group by age, name having count(*) > 1; +--+---+--+ | age | name | _c2 | +--+---+--+ +--+---+--+ // Workaround set hive.map.aggr=false; select age, name, count(*) from test_bucket group by age, name having count(*) > 1; +--++--+ | age | name | _c2 | +--++--+ | 1 | user1 | 2 | | 2 | user2 | 2 | +--++--+ {code} was: Repro: {code:java} create external table test_bucket(age int, name string, dept string) clustered by (age, name) sorted by (age asc, name asc) into 2 buckets stored as orc; insert into test_bucket values (1, 'user1', 'dept1'), ( 2, 'user2' , 'dept2'); insert into test_bucket values (1, 'user1', 'dept1'), ( 2, 'user2' , 'dept2'); //empty wrong results with default CDP configs select age, name, count(*) from test_bucket group by age, name having count(*) > 1; +--+---+--+ | age | name | _c2 | +--+---+--+ +--+---+--+ // Workaround set hive.map.aggr=false; select age, name, count(*) from test_bucket group by age, name having count(*) > 1; +--++--+ | age | name | _c2 | +--++--+ | 1 | user1 | 2 | | 2 | user2 | 2 | +--++--+ {code} > Incorrect query results on tables with ClusterBy & SortBy > - > > Key: HIVE-27876 > URL: https://issues.apache.org/jira/browse/HIVE-27876 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Priority: Major > > Repro: > > {code:java} > create external table test_bucket(age int, name string, dept string) > clustered by (age, name) sorted by (age asc, name asc) into 2 buckets 
stored > as orc; > insert into test_bucket values (1, 'user1', 'dept1'), ( 2, 'user2' , 'dept2'); > insert into test_bucket values (1, 'user1', 'dept1'), ( 2, 'user2' , 'dept2'); > //empty wrong results > select age, name, count(*) from test_bucket group by age, name having > count(*) > 1; > +--+---+--+ > | age | name | _c2 | > +--+---+--+ > +--+---+--+ > // Workaround > set hive.map.aggr=false; > select age, name, count(*) from test_bucket group by age, name having > count(*) > 1; > +--++--+ > | age | name | _c2 | > +--++--+ > | 1 | user1 | 2 | > | 2 | user2 | 2 | > +--++--+ {code} > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
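One way to picture the wrong results: the table's two INSERTs leave two files per bucket with the same keys, and if map-side aggregation finalizes groups per sorted file without merging the partial counts, no group ever reaches count > 1; `set hive.map.aggr=false` forces a single global aggregation. This is a toy model of the symptom only (pure Python, not Hive's operator code, and the per-file mechanism is an assumption):

```python
from collections import Counter

# Two INSERTs -> two files carrying the same (age, name) keys.
files = [[(1, "user1"), (2, "user2")], [(1, "user1"), (2, "user2")]]

def per_file_counts(files):
    # Buggy shape: each file's groups are finalized independently and the
    # partial counts are never merged, so every count stays at 1.
    out = []
    for f in files:
        out.extend(Counter(f).items())
    return [kv for kv in out if kv[1] > 1]      # HAVING count(*) > 1

def global_counts(files):
    # Workaround shape: one aggregation over all rows (reduce side only).
    c = Counter(row for f in files for row in f)
    return [kv for kv in c.items() if kv[1] > 1]
```

With this model, `per_file_counts(files)` is empty (the reported wrong result) while `global_counts(files)` returns both keys with count 2, matching the workaround's output.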
[jira] [Created] (HIVE-27885) Cast decimal from string with space without digits before dot returns NULL
Naresh P R created HIVE-27885: - Summary: Cast decimal from string with space without digits before dot returns NULL Key: HIVE-27885 URL: https://issues.apache.org/jira/browse/HIVE-27885 Project: Hive Issue Type: Bug Environment: eg., select cast(". " as decimal(8,4)) -- Expected output 0. -- Actual output NULL select cast("0. " as decimal(8,4)) -- Actual output 0. Reporter: Naresh P R Assignee: Naresh P R -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27885) Cast decimal from string with space without digits before dot returns NULL
[ https://issues.apache.org/jira/browse/HIVE-27885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27885: -- Environment: (was: eg., select cast(". " as decimal(8,4)) -- Expected output 0. -- Actual output NULL select cast("0. " as decimal(8,4)) -- Actual output 0.) > Cast decimal from string with space without digits before dot returns NULL > -- > > Key: HIVE-27885 > URL: https://issues.apache.org/jira/browse/HIVE-27885 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27885) Cast decimal from string with space without digits before dot returns NULL
[ https://issues.apache.org/jira/browse/HIVE-27885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27885: -- Description: eg., select cast(". " as decimal(8,4)) {code:java} – Expected output 0. – Actual output NULL {code} select cast("0. " as decimal(8,4)) {code:java} – Actual output 0. {code} was: eg., select cast(". " as decimal(8,4)) {code:java} – Expected output 0. – Actual output NULL {code} select cast("0. " as decimal(8,4)) {code:java} – Actual output 0. {code} > Cast decimal from string with space without digits before dot returns NULL > -- > > Key: HIVE-27885 > URL: https://issues.apache.org/jira/browse/HIVE-27885 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > eg., > select cast(". " as decimal(8,4)) > {code:java} > – Expected output > 0. > – Actual output > NULL > {code} > select cast("0. " as decimal(8,4)) > {code:java} > – Actual output > 0. > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27885) Cast decimal from string with space without digits before dot returns NULL
[ https://issues.apache.org/jira/browse/HIVE-27885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27885: -- Description: eg., select cast(". " as decimal(8,4)) {code:java} – Expected output 0. – Actual output NULL {code} select cast("0. " as decimal(8,4)) {code:java} – Actual output 0. {code} > Cast decimal from string with space without digits before dot returns NULL > -- > > Key: HIVE-27885 > URL: https://issues.apache.org/jira/browse/HIVE-27885 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > eg., > select cast(". " as decimal(8,4)) > > {code:java} > – Expected output > 0. > – Actual output > NULL > {code} > select cast("0. " as decimal(8,4)) > > {code:java} > – Actual output > 0. > {code} > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-27885) Cast decimal from string with space without digits before dot returns NULL
[ https://issues.apache.org/jira/browse/HIVE-27885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17789609#comment-17789609 ] Naresh P R commented on HIVE-27885: --- Thank you [~rameshkumar] & [~ngangam] for the review and commit. > Cast decimal from string with space without digits before dot returns NULL > -- > > Key: HIVE-27885 > URL: https://issues.apache.org/jira/browse/HIVE-27885 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > > e.g., > select cast(". " as decimal(8,4)) > {code:java} > -- Expected output > 0. > -- Actual output > NULL > {code} > select cast("0. " as decimal(8,4)) > {code:java} > -- Actual output > 0. > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
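The expected semantics from the report can be sketched as a lenient parser: trim surrounding whitespace, and read a missing digit run on either side of the dot as zero, so `". "` parses the same way `"0. "` already does. This is a behavioral sketch of what the bug report expects, not HiveDecimal's implementation:

```python
import re
from decimal import Decimal

def lenient_decimal(s):
    """Parse a decimal string leniently: surrounding spaces are ignored and
    '.', '.5', '5.' are all valid, with the missing side treated as 0.
    Returns None (Hive's NULL) when no digits/dot form is recognizable."""
    t = s.strip()
    if not re.fullmatch(r"[+-]?(\d*\.\d*|\d+)", t):
        return None
    sign = ""
    if t and t[0] in "+-":
        sign, t = t[0], t[1:]
    if "." in t:
        i, f = t.split(".")
        t = (i or "0") + "." + (f or "0")   # fill the missing side with 0
    return Decimal(sign + t)
```

Under these rules `". "` yields 0 instead of NULL, matching the expected output in the description.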
[jira] [Assigned] (HIVE-23811) deleteReader SARG rowId is not getting validated properly
[ https://issues.apache.org/jira/browse/HIVE-23811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R reassigned HIVE-23811: - > deleteReader SARG rowId is not getting validated properly > - > > Key: HIVE-23811 > URL: https://issues.apache.org/jira/browse/HIVE-23811 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > Though we are iterating over min/max stripeIndex, we always seem to pick > ColumnStats from first stripe > [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/io/orc/VectorizedOrcAcidRowBatchReader.java#L596] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23811) deleteReader SARG rowId/buckedId are not getting validated properly
[ https://issues.apache.org/jira/browse/HIVE-23811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-23811: -- Summary: deleteReader SARG rowId/buckedId are not getting validated properly (was: deleteReader SARG rowId is not getting validated properly) > deleteReader SARG rowId/buckedId are not getting validated properly > --- > > Key: HIVE-23811 > URL: https://issues.apache.org/jira/browse/HIVE-23811 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Though we are iterating over min/max stripeIndex, we always seem to pick > ColumnStats from first stripe > [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/io/orc/VectorizedOrcAcidRowBatchReader.java#L596] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23811) deleteReader SARG rowId/bucketId are not getting validated properly
[ https://issues.apache.org/jira/browse/HIVE-23811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-23811: -- Summary: deleteReader SARG rowId/bucketId are not getting validated properly (was: deleteReader SARG rowId/buckedId are not getting validated properly) > deleteReader SARG rowId/bucketId are not getting validated properly > --- > > Key: HIVE-23811 > URL: https://issues.apache.org/jira/browse/HIVE-23811 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Though we are iterating over min/max stripeIndex, we always seem to pick > ColumnStats from first stripe > [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/io/orc/VectorizedOrcAcidRowBatchReader.java#L596] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23811) deleteReader SARG rowId/bucketId are not getting validated properly
[ https://issues.apache.org/jira/browse/HIVE-23811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-23811: -- Status: Patch Available (was: Open) > deleteReader SARG rowId/bucketId are not getting validated properly > --- > > Key: HIVE-23811 > URL: https://issues.apache.org/jira/browse/HIVE-23811 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Though we are iterating over min/max stripeIndex, we always seem to pick > ColumnStats from first stripe > [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/io/orc/VectorizedOrcAcidRowBatchReader.java#L596] -- This message was sent by Atlassian Jira (v8.3.4#803005)
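The linked code iterates over the min/max stripe index range but, per the report, reads ColumnStats from a fixed position. A toy model of that indexing bug (stripe layout and function names are illustrative, not the VectorizedOrcAcidRowBatchReader code):

```python
# Each stripe records (min_row_id, max_row_id) in its column stats.
def sarg_may_match(stripe_stats, lo, hi, target_row_id):
    """Check stripes lo..hi for a rowId that the delete SARG should keep."""
    for i in range(lo, hi + 1):
        mn, mx = stripe_stats[i]        # correct: stats of stripe i
        if mn <= target_row_id <= mx:
            return True
    return False

def sarg_may_match_buggy(stripe_stats, lo, hi, target_row_id):
    for _ in range(lo, hi + 1):
        mn, mx = stripe_stats[0]        # bug shape: first stripe's stats every time
        if mn <= target_row_id <= mx:
            return True
    return False

stats = [(0, 99), (100, 199)]           # two stripes, disjoint rowId ranges
```

A rowId such as 150 lives only in the second stripe, so the buggy variant (always consulting stripe 0) misjudges it while the indexed variant finds it.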
[jira] [Assigned] (HIVE-23894) SubmitDag should not be retried incase of query cancel
[ https://issues.apache.org/jira/browse/HIVE-23894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R reassigned HIVE-23894: - > SubmitDag should not be retried incase of query cancel > -- > > Key: HIVE-23894 > URL: https://issues.apache.org/jira/browse/HIVE-23894 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > Incase of query cancel, running tasks will be interrupted & TezTask shutdown > flag will be set. > Below code is not required to be retried incase of Task shutdown > [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java#L572-L586] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23894) SubmitDag should not be retried incase of query cancel
[ https://issues.apache.org/jira/browse/HIVE-23894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-23894: -- Description: Incase of query cancel, running tasks will be interrupted & TezTask shutdown flag will be set. Below code is not required to be retried incase of Task shutdown [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java#L572-L586] was: Incase of query cancel, running tasks will be interrupted & TezTask shutdown flag is will be set. Below code is not required to be retried incase of Task shutdown [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java#L572-L586] > SubmitDag should not be retried incase of query cancel > -- > > Key: HIVE-23894 > URL: https://issues.apache.org/jira/browse/HIVE-23894 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > Incase of query cancel, running tasks will be interrupted & TezTask shutdown > flag will be set. > Below code is not required to be retried incase of Task shutdown > [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java#L572-L586] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23894) SubmitDag should not be retried incase of query cancel
[ https://issues.apache.org/jira/browse/HIVE-23894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-23894: -- Status: Patch Available (was: Open) > SubmitDag should not be retried incase of query cancel > -- > > Key: HIVE-23894 > URL: https://issues.apache.org/jira/browse/HIVE-23894 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > Incase of query cancel, running tasks will be interrupted & TezTask shutdown > flag will be set. > Below code is not required to be retried incase of Task shutdown > [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java#L572-L586] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23894) SubmitDag should not be retried incase of query cancel
[ https://issues.apache.org/jira/browse/HIVE-23894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-23894: -- Fix Version/s: 4.0.0 > SubmitDag should not be retried incase of query cancel > -- > > Key: HIVE-23894 > URL: https://issues.apache.org/jira/browse/HIVE-23894 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Time Spent: 40m > Remaining Estimate: 0h > > Incase of query cancel, running tasks will be interrupted & TezTask shutdown > flag will be set. > Below code is not required to be retried incase of Task shutdown > [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java#L572-L586] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-23894) SubmitDag should not be retried incase of query cancel
[ https://issues.apache.org/jira/browse/HIVE-23894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-23894: -- Target Version/s: 4.0.0 Resolution: Fixed Status: Resolved (was: Patch Available) Thank you [~pgaref] & [~maheshk114] for the review and commit. > SubmitDag should not be retried incase of query cancel > -- > > Key: HIVE-23894 > URL: https://issues.apache.org/jira/browse/HIVE-23894 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > Incase of query cancel, running tasks will be interrupted & TezTask shutdown > flag will be set. > Below code is not required to be retried incase of Task shutdown > [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java#L572-L586] -- This message was sent by Atlassian Jira (v8.3.4#803005)
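The shape of the fix described in this issue is a retry loop that consults the task's shutdown flag before each attempt: once a cancel has set the flag, the DAG submit is not retried. A minimal sketch (function names and retry policy are illustrative, not TezTask's actual code):

```python
def submit_dag_with_retry(submit, is_shutdown, max_attempts=3):
    """Retry a DAG submit on transient errors, but bail out immediately
    once the query has been cancelled (shutdown flag set)."""
    last_err = None
    for _ in range(max_attempts):
        if is_shutdown():
            raise InterruptedError("query cancelled; aborting DAG submit")
        try:
            return submit()
        except RuntimeError as e:     # stand-in for a transient submit failure
            last_err = e
    raise last_err
```

A cancelled query therefore surfaces the interruption instead of burning retries on a submit whose result will be discarded.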
[jira] [Commented] (HIVE-23968) CTAS with TBLPROPERTIES ('transactional'='false') does not entertain translated table location
[ https://issues.apache.org/jira/browse/HIVE-23968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17169184#comment-17169184 ] Naresh P R commented on HIVE-23968: --- In case of CTAS, without HMS translation, this is still valid (i.e., CTAS with tblproperties('transactional'='false') can be managed). Only if metastore.metadata.transformer.class = 'org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer', then this issue might happen. I suspect the issue here is that instead of transforming the table before preparing the plan, we are doing it in the HMS layer just before creating the table. cc: [~ngangam] > CTAS with TBLPROPERTIES ('transactional'='false') does not entertain > translated table location > -- > > Key: HIVE-23968 > URL: https://issues.apache.org/jira/browse/HIVE-23968 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 4.0.0 >Reporter: Rajkumar Singh >Assignee: Rajkumar Singh >Priority: Major > > HMS translation layer converts the table to external based on the > transactional property set to false, but MoveTask does not entertain the > translated table location and moves the data to the managed table location; > steps to repro: > {code:java} > create table nontxnal TBLPROPERTIES ('transactional'='false') as select * > from abc; > {code} > a select query on the table returns nothing, but the source table has data in it. 
> {code:java} > select * from nontxnal; > +--+ > | nontxnal.id | > +--+ > +--+ > {code} > --show create table > {code:java} > CREATE EXTERNAL TABLE `nontxnal`( | > | `id` int)| > | ROW FORMAT SERDE | > | 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' | > | STORED AS INPUTFORMAT | > | 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' | > | OUTPUTFORMAT | > | 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' | > | LOCATION | > | 'hdfs://hostname:8020/warehouse/tablespace/external/hive/nontxnal' | > | TBLPROPERTIES (| > | 'TRANSLATED_TO_EXTERNAL'='TRUE', | > | 'bucketing_version'='2', | > | 'external.table.purge'='TRUE', | > | 'transient_lastDdlTime'='1596215634')| > {code} > table data is moved to the managed location: > ``` > dfs -ls -R hdfs://hostname:8020/warehouse/tablespace/managed/hive/nontxnal > . . . . . . . . . . . . . . . . . . . . . . .> ; > ++ > | DFS Output | > ++ > | -rw-rw+ 3 hive hadoop201 2020-07-31 17:05 > hdfs://hostname:8020/warehouse/tablespace/managed/hive/nontxnal/00_0 | > ++ > ``` > The problem seems to be here > isExternal evaluates to false since the statement is missing external > https://github.com/apache/hive/blob/d4bfd2ea1ee797f53227f447749cbc97803cd5dc/ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java#L446 > and location return to the managed location > https://github.com/apache/hive/blob/d4bfd2ea1ee797f53227f447749cbc97803cd5dc/ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java#L455 -- This message was sent by Atlassian Jira (v8.3.4#803005)
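The mismatch described here is between two layers deciding independently: HMS translation marks the table external (`TRANSLATED_TO_EXTERNAL`), while the compiler's location choice (the linked TaskCompiler code) keys only off the EXTERNAL keyword in the statement. A toy sketch of the two decisions (paths, flags, and function names are illustrative):

```python
MANAGED = "/warehouse/tablespace/managed/hive"
EXTERNAL = "/warehouse/tablespace/external/hive"

def write_location(table, stmt_has_external_kw):
    # Buggy shape: only the statement text is consulted, so a table that HMS
    # translated to external still gets its data under the managed root.
    return EXTERNAL if stmt_has_external_kw else MANAGED

def write_location_fixed(table, stmt_has_external_kw):
    # Fixed shape: honor the HMS-translated table property as well.
    if stmt_has_external_kw or table.get("TRANSLATED_TO_EXTERNAL") == "TRUE":
        return EXTERNAL
    return MANAGED

t = {"name": "nontxnal", "TRANSLATED_TO_EXTERNAL": "TRUE"}
```

For the repro's `nontxnal` table (no EXTERNAL keyword, but translated), the buggy variant writes under the managed root while the table's metadata points at the external root, which is why the select returns nothing.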
[jira] [Assigned] (HIVE-24036) Kryo Exception while serializing plan for getSplits UDF call
[ https://issues.apache.org/jira/browse/HIVE-24036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R reassigned HIVE-24036: - Assignee: Naresh P R > Kryo Exception while serializing plan for getSplits UDF call > > > Key: HIVE-24036 > URL: https://issues.apache.org/jira/browse/HIVE-24036 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > {code:java} > Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: > java.lang.IllegalArgumentException: Unable to create serializer > "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for > class: org.apache.hadoop.hive.llap.LlapOutputFormatCaused by: > org.apache.hive.com.esotericsoftware.kryo.KryoException: > java.lang.IllegalArgumentException: Unable to create serializer > "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for > class: org.apache.hadoop.hive.llap.LlapOutputFormatSerialization > trace:outputFileFormatClass > (org.apache.hadoop.hive.ql.plan.TableDesc)tableInfo > (org.apache.hadoop.hive.ql.plan.FileSinkDesc)conf > (org.apache.hadoop.hive.ql.exec.FileSinkOperator)childOperators > (org.apache.hadoop.hive.ql.exec.UnionOperator)childOperators > (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators > (org.apache.hadoop.hive.ql.exec.MapJoinOperator)childOperators > (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators > (org.apache.hadoop.hive.ql.exec.PTFOperator)childOperators > (org.apache.hadoop.hive.ql.exec.SelectOperator) at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializeObjectByKryo(SerializationUtilities.java:700) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:571) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:560) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24036) Kryo Exception while serializing plan for getSplits UDF call
[ https://issues.apache.org/jira/browse/HIVE-24036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-24036: -- Description: {code:java} Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: java.lang.IllegalArgumentException: Unable to create serializer "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for class: org.apache.hadoop.hive.llap.LlapOutputFormatCaused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: java.lang.IllegalArgumentException: Unable to create serializer "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for class: org.apache.hadoop.hive.llap.LlapOutputFormatSerialization trace:outputFileFormatClass (org.apache.hadoop.hive.ql.plan.TableDesc)tableInfo (org.apache.hadoop.hive.ql.plan.FileSinkDesc)conf (org.apache.hadoop.hive.ql.exec.FileSinkOperator)childOperators (org.apache.hadoop.hive.ql.exec.UnionOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators (org.apache.hadoop.hive.ql.exec.PTFOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializeObjectByKryo(SerializationUtilities.java:700) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:571) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:560) {code} was: {code:java} Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: java.lang.IllegalArgumentException: Unable to create serializer "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for class: org.apache.hadoop.hive.llap.LlapOutputFormatCaused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: java.lang.IllegalArgumentException: Unable to create serializer 
"org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for class: org.apache.hadoop.hive.llap.LlapOutputFormatSerialization trace:outputFileFormatClass (org.apache.hadoop.hive.ql.plan.TableDesc)tableInfo (org.apache.hadoop.hive.ql.plan.FileSinkDesc)conf (org.apache.hadoop.hive.ql.exec.FileSinkOperator)childOperators (org.apache.hadoop.hive.ql.exec.UnionOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators (org.apache.hadoop.hive.ql.exec.PTFOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializeObjectByKryo(SerializationUtilities.java:700) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:571) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:560) {code} > Kryo Exception while serializing plan for getSplits UDF call > > > Key: HIVE-24036 > URL: https://issues.apache.org/jira/browse/HIVE-24036 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > {code:java} > Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: > java.lang.IllegalArgumentException: Unable to create serializer > "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for > class: org.apache.hadoop.hive.llap.LlapOutputFormatCaused by: > org.apache.hive.com.esotericsoftware.kryo.KryoException: > java.lang.IllegalArgumentException: Unable to create serializer > "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for > class: org.apache.hadoop.hive.llap.LlapOutputFormatSerialization > trace:outputFileFormatClass > (org.apache.hadoop.hive.ql.plan.TableDesc)tableInfo > (org.apache.hadoop.hive.ql.plan.FileSinkDesc)conf > 
(org.apache.hadoop.hive.ql.exec.FileSinkOperator)childOperators > (org.apache.hadoop.hive.ql.exec.UnionOperator)childOperators > (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators > (org.apache.hadoop.hive.ql.exec.MapJoinOperator)childOperators > (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators > (org.apache.hadoop.hive.ql.exec.PTFOperator)childOperators > (org.apache.hadoop.hive.ql.exec.SelectOperator) at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializeObjectByKryo(SerializationUtilities.java:700) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:571) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:560) > > {co
[jira] [Updated] (HIVE-24036) Kryo Exception while serializing plan for getSplits UDF call
[ https://issues.apache.org/jira/browse/HIVE-24036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-24036: -- Description: {code:java} Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: java.lang.IllegalArgumentException: Unable to create serializer "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for class: org.apache.hadoop.hive.llap.LlapOutputFormat Serialization trace:outputFileFormatClass (org.apache.hadoop.hive.ql.plan.TableDesc)tableInfo (org.apache.hadoop.hive.ql.plan.FileSinkDesc)conf (org.apache.hadoop.hive.ql.exec.FileSinkOperator)childOperators (org.apache.hadoop.hive.ql.exec.UnionOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators (org.apache.hadoop.hive.ql.exec.PTFOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializeObjectByKryo(SerializationUtilities.java:700) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:571) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:560) {code} was: {code:java} Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: java.lang.IllegalArgumentException: Unable to create serializer "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for class: org.apache.hadoop.hive.llap.LlapOutputFormatCaused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: java.lang.IllegalArgumentException: Unable to create serializer "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for class: org.apache.hadoop.hive.llap.LlapOutputFormatSerialization trace:outputFileFormatClass (org.apache.hadoop.hive.ql.plan.TableDesc)tableInfo 
(org.apache.hadoop.hive.ql.plan.FileSinkDesc)conf (org.apache.hadoop.hive.ql.exec.FileSinkOperator)childOperators (org.apache.hadoop.hive.ql.exec.UnionOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators (org.apache.hadoop.hive.ql.exec.PTFOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializeObjectByKryo(SerializationUtilities.java:700) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:571) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:560) {code} > Kryo Exception while serializing plan for getSplits UDF call > > > Key: HIVE-24036 > URL: https://issues.apache.org/jira/browse/HIVE-24036 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > {code:java} > Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: > java.lang.IllegalArgumentException: Unable to create serializer > "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for > class: org.apache.hadoop.hive.llap.LlapOutputFormat > Serialization trace:outputFileFormatClass > (org.apache.hadoop.hive.ql.plan.TableDesc)tableInfo > (org.apache.hadoop.hive.ql.plan.FileSinkDesc)conf > (org.apache.hadoop.hive.ql.exec.FileSinkOperator)childOperators > (org.apache.hadoop.hive.ql.exec.UnionOperator)childOperators > (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators > (org.apache.hadoop.hive.ql.exec.MapJoinOperator)childOperators > (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators > (org.apache.hadoop.hive.ql.exec.PTFOperator)childOperators > (org.apache.hadoop.hive.ql.exec.SelectOperator) at > 
org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializeObjectByKryo(SerializationUtilities.java:700) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:571) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:560) > > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24036) Kryo Exception while serializing plan for getSplits UDF call
[ https://issues.apache.org/jira/browse/HIVE-24036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-24036: -- Description: {code:java} Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: java.lang.IllegalArgumentException: Unable to create serializer "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for class: org.apache.hadoop.hive.llap.LlapOutputFormat Serialization trace: outputFileFormatClass (org.apache.hadoop.hive.ql.plan.TableDesc) tableInfo (org.apache.hadoop.hive.ql.plan.FileSinkDesc) conf (org.apache.hadoop.hive.ql.exec.FileSinkOperator) childOperators (org.apache.hadoop.hive.ql.exec.UnionOperator) childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators (org.apache.hadoop.hive.ql.exec.MapJoinOperator) childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) childOperators (org.apache.hadoop.hive.ql.exec.PTFOperator) childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializeObjectByKryo(SerializationUtilities.java:700) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:571) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:560) {code} was: {code:java} Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: java.lang.IllegalArgumentException: Unable to create serializer "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for class: org.apache.hadoop.hive.llap.LlapOutputFormat Serialization trace:outputFileFormatClass (org.apache.hadoop.hive.ql.plan.TableDesc)tableInfo (org.apache.hadoop.hive.ql.plan.FileSinkDesc)conf (org.apache.hadoop.hive.ql.exec.FileSinkOperator)childOperators (org.apache.hadoop.hive.ql.exec.UnionOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators 
(org.apache.hadoop.hive.ql.exec.MapJoinOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators (org.apache.hadoop.hive.ql.exec.PTFOperator)childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializeObjectByKryo(SerializationUtilities.java:700) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:571) at org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:560) {code} > Kryo Exception while serializing plan for getSplits UDF call > > > Key: HIVE-24036 > URL: https://issues.apache.org/jira/browse/HIVE-24036 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > {code:java} > Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: > java.lang.IllegalArgumentException: Unable to create serializer > "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for > class: org.apache.hadoop.hive.llap.LlapOutputFormat > Serialization trace: > outputFileFormatClass (org.apache.hadoop.hive.ql.plan.TableDesc) > tableInfo (org.apache.hadoop.hive.ql.plan.FileSinkDesc) > conf (org.apache.hadoop.hive.ql.exec.FileSinkOperator) > childOperators (org.apache.hadoop.hive.ql.exec.UnionOperator) > childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators > (org.apache.hadoop.hive.ql.exec.MapJoinOperator) > childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) > childOperators (org.apache.hadoop.hive.ql.exec.PTFOperator) > childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) >at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializeObjectByKryo(SerializationUtilities.java:700) > >at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:571) > >at > 
org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:560) > > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24036) Kryo Exception while serializing plan for getSplits UDF call
[ https://issues.apache.org/jira/browse/HIVE-24036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-24036: -- Status: Patch Available (was: Open) > Kryo Exception while serializing plan for getSplits UDF call > > > Key: HIVE-24036 > URL: https://issues.apache.org/jira/browse/HIVE-24036 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > {code:java} > Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: > java.lang.IllegalArgumentException: Unable to create serializer > "org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer" for > class: org.apache.hadoop.hive.llap.LlapOutputFormat > Serialization trace: > outputFileFormatClass (org.apache.hadoop.hive.ql.plan.TableDesc) > tableInfo (org.apache.hadoop.hive.ql.plan.FileSinkDesc) > conf (org.apache.hadoop.hive.ql.exec.FileSinkOperator) > childOperators (org.apache.hadoop.hive.ql.exec.UnionOperator) > childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)childOperators > (org.apache.hadoop.hive.ql.exec.MapJoinOperator) > childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) > childOperators (org.apache.hadoop.hive.ql.exec.PTFOperator) > childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator) >at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializeObjectByKryo(SerializationUtilities.java:700) > >at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:571) > >at > org.apache.hadoop.hive.ql.exec.SerializationUtilities.serializePlan(SerializationUtilities.java:560) > > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
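The serializer failure above comes from a plan object holding a reference to a class that Kryo cannot build a FieldSerializer for. As an illustration only (this is not the HIVE-24036 patch, and the class below is a toy stand-in), the usual pattern is to keep the heavyweight class reference out of the serialized form: mark the field transient and re-resolve it from its name on read. Kryo's FieldSerializer, like Java serialization, skips transient fields by default.

```java
import java.io.*;

public class TransientSketch {
    // Toy plan node: the class reference is excluded from serialization via
    // `transient` and lazily re-resolved from its name after deserialization.
    static class PlanNode implements Serializable {
        String outputFormatClassName;          // serialized
        transient Class<?> outputFormatClass;  // skipped by serialization

        PlanNode(Class<?> c) {
            outputFormatClass = c;
            outputFormatClassName = c.getName();
        }

        Class<?> resolve() throws ClassNotFoundException {
            if (outputFormatClass == null) {
                outputFormatClass = Class.forName(outputFormatClassName);
            }
            return outputFormatClass;
        }
    }

    static byte[] write(PlanNode n) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(n);
        }
        return bos.toByteArray();
    }

    static PlanNode read(byte[] b) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(b))) {
            return (PlanNode) ois.readObject();
        }
    }
}
```

After a round trip the transient field is null until resolve() rebuilds it, so the problematic class never needs a serializer at all.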
[jira] [Assigned] (HIVE-24188) CTLT from MM to External fails because table txn properties are not skipped
[ https://issues.apache.org/jira/browse/HIVE-24188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R reassigned HIVE-24188: - > CTLT from MM to External fails because table txn properties are not skipped > --- > > Key: HIVE-24188 > URL: https://issues.apache.org/jira/browse/HIVE-24188 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > Repro steps > > {code:java} > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > create table test_mm(age int, name string) partitioned by(dept string) stored > as orc tblproperties('transactional'='true', > 'transactional_properties'='default'); > create external table test_external like test_mm LOCATION > '${system:test.tmp.dir}/create_like_mm_to_external'; > {code} > Fails with below exception > {code:java} > Error: Error while processing statement: FAILED: Execution Error, return code > 1 from org.apache.hadoop.hive.ql.exec.DDLTask. > MetaException(message:default.test_external cannot be declared transactional > because it's an external table) (state=08S01,code=1){code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
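The failure happens because CREATE TABLE LIKE copies the source table's transactional properties onto a target that cannot accept them. A minimal sketch of the skip-list approach the summary suggests ('transactional' and 'transactional_properties' are real Hive table properties; the helper itself is hypothetical, not the merged patch):

```java
import java.util.*;

public class CtltPropertyFilter {
    // Table properties that must not be copied onto an external target table.
    private static final Set<String> SKIPPED = new HashSet<>(
            Arrays.asList("transactional", "transactional_properties"));

    // Copy source tblproperties, dropping txn properties when the target
    // table is external (an external table cannot be transactional).
    static Map<String, String> propsForTarget(Map<String, String> source,
                                              boolean targetIsExternal) {
        Map<String, String> out = new LinkedHashMap<>(source);
        if (targetIsExternal) {
            out.keySet().removeAll(SKIPPED);
        }
        return out;
    }
}
```

With the txn properties filtered out, the external target no longer trips the "cannot be declared transactional" MetaException.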
[jira] [Updated] (HIVE-24188) CTLT from MM to External or External to MM are failing with hive.strict.managed.tables & hive.create.as.acid
[ https://issues.apache.org/jira/browse/HIVE-24188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-24188: -- Summary: CTLT from MM to External or External to MM are failing with hive.strict.managed.tables & hive.create.as.acid (was: CTLT from MM to External fails because table txn properties are not skipped) > CTLT from MM to External or External to MM are failing with > hive.strict.managed.tables & hive.create.as.acid > > > Key: HIVE-24188 > URL: https://issues.apache.org/jira/browse/HIVE-24188 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > Repro steps > > {code:java} > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > create table test_mm(age int, name string) partitioned by(dept string) stored > as orc tblproperties('transactional'='true', > 'transactional_properties'='default'); > create external table test_external like test_mm LOCATION > '${system:test.tmp.dir}/create_like_mm_to_external'; > {code} > Fails with below exception > {code:java} > Error: Error while processing statement: FAILED: Execution Error, return code > 1 from org.apache.hadoop.hive.ql.exec.DDLTask. > MetaException(message:default.test_external cannot be declared transactional > because it's an external table) (state=08S01,code=1){code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24210) PartitionManagementTask fails if one of tables dropped after fetching TableMeta
[ https://issues.apache.org/jira/browse/HIVE-24210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-24210: -- Summary: PartitionManagementTask fails if one of tables dropped after fetching TableMeta (was: PartitionManagementTask fails if one of tables dropped after fetch TableMeta) > PartitionManagementTask fails if one of tables dropped after fetching > TableMeta > --- > > Key: HIVE-24210 > URL: https://issues.apache.org/jira/browse/HIVE-24210 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > > {code:java} > 2020-09-21T10:45:15,875 ERROR [pool-4-thread-150]: > metastore.PartitionManagementTask (PartitionManagementTask.java:run(163)) - > Exception while running partition discovery task for table: null > org.apache.hadoop.hive.metastore.api.NoSuchObjectException: > hive.default.test_table table not found > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:3391) > > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3315) > > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3291) > > at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > > at com.sun.proxy.$Proxy30.get_table_req(Unknown Source) ~[?:?] 
> at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1804) > > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1791) > > at > org.apache.hadoop.hive.metastore.PartitionManagementTask.run(PartitionManagementTask.java:130){code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-24210) PartitionManagementTask fails if one of tables dropped after fetch TableMeta
[ https://issues.apache.org/jira/browse/HIVE-24210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R reassigned HIVE-24210: - > PartitionManagementTask fails if one of tables dropped after fetch TableMeta > > > Key: HIVE-24210 > URL: https://issues.apache.org/jira/browse/HIVE-24210 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > > {code:java} > 2020-09-21T10:45:15,875 ERROR [pool-4-thread-150]: > metastore.PartitionManagementTask (PartitionManagementTask.java:run(163)) - > Exception while running partition discovery task for table: null > org.apache.hadoop.hive.metastore.api.NoSuchObjectException: > hive.default.test_table table not found > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:3391) > > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3315) > > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3291) > > at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > > at com.sun.proxy.$Proxy30.get_table_req(Unknown Source) ~[?:?] > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1804) > > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1791) > > at > org.apache.hadoop.hive.metastore.PartitionManagementTask.run(PartitionManagementTask.java:130){code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24210) PartitionManagementTask fails if one of tables dropped after fetching TableMeta
[ https://issues.apache.org/jira/browse/HIVE-24210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-24210: -- Description: After fetching tableMeta based on configured dbPattern & tablePattern for PMT [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/PartitionManagementTask.java#L130] If one of the tables dropped before scheduling AutoPartition Discovery or MSCK, then entire PMT will be stopped because of below exception even though we can run MSCK for other valid tables. {code:java} 2020-09-21T10:45:15,875 ERROR [pool-4-thread-150]: metastore.PartitionManagementTask (PartitionManagementTask.java:run(163)) - Exception while running partition discovery task for table: null org.apache.hadoop.hive.metastore.api.NoSuchObjectException: hive.default.test_table table not found at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:3391) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3315) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3291) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) at com.sun.proxy.$Proxy30.get_table_req(Unknown Source) ~[?:?] at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1804) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1791) at org.apache.hadoop.hive.metastore.PartitionManagementTask.run(PartitionManagementTask.java:130){code} Exception is thrown from here. 
[https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/PartitionManagementTask.java#L130] was: {code:java} 2020-09-21T10:45:15,875 ERROR [pool-4-thread-150]: metastore.PartitionManagementTask (PartitionManagementTask.java:run(163)) - Exception while running partition discovery task for table: null org.apache.hadoop.hive.metastore.api.NoSuchObjectException: hive.default.test_table table not found at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:3391) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3315) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3291) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) at com.sun.proxy.$Proxy30.get_table_req(Unknown Source) ~[?:?] 
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1804) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1791) at org.apache.hadoop.hive.metastore.PartitionManagementTask.run(PartitionManagementTask.java:130){code} > PartitionManagementTask fails if one of tables dropped after fetching > TableMeta > --- > > Key: HIVE-24210 > URL: https://issues.apache.org/jira/browse/HIVE-24210 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > After fetching tableMeta based on configured dbPattern & tablePattern for PMT > [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/PartitionManagementTask.java#L130] > If one of the tables dropped before scheduling AutoPartition Discovery or > MSCK, then entire PMT will be stopped because of below exception even though > we can run MSCK for other valid tables. > {code:java} > 2020-09-21T10:45:15,875 ERROR [pool-4-thread-150]: > metastore.PartitionManagementTask (PartitionManagementTask.java:run(163)) - > Exception while running partition discovery task for table: null > org.apache.hadoop.hive.metastore.api.NoSuchObjectException: > hive.default.test_table table not found > at > org.apache.hadoop.hive.metastore.HiveMet
[jira] [Commented] (HIVE-24210) PartitionManagementTask fails if one of tables dropped after fetching TableMeta
[ https://issues.apache.org/jira/browse/HIVE-24210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17204942#comment-17204942 ] Naresh P R commented on HIVE-24210: --- Updated Description. Please let me know if that is ok. > PartitionManagementTask fails if one of tables dropped after fetching > TableMeta > --- > > Key: HIVE-24210 > URL: https://issues.apache.org/jira/browse/HIVE-24210 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > After fetching tableMeta based on configured dbPattern & tablePattern for PMT > [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/PartitionManagementTask.java#L130] > If one of the tables dropped before scheduling AutoPartition Discovery or > MSCK, then entire PMT will be stopped because of below exception even though > we can run MSCK for other valid tables. 
> {code:java} > 2020-09-21T10:45:15,875 ERROR [pool-4-thread-150]: > metastore.PartitionManagementTask (PartitionManagementTask.java:run(163)) - > Exception while running partition discovery task for table: null > org.apache.hadoop.hive.metastore.api.NoSuchObjectException: > hive.default.test_table table not found > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:3391) > > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3315) > > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3291) > > at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > > at com.sun.proxy.$Proxy30.get_table_req(Unknown Source) ~[?:?] > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1804) > > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1791) > > at > org.apache.hadoop.hive.metastore.PartitionManagementTask.run(PartitionManagementTask.java:130){code} > Exception is thrown from here. > [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/PartitionManagementTask.java#L130] -- This message was sent by Atlassian Jira (v8.3.4#803005)
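The failure mode described above suggests the obvious mitigation: catch the missing-table error per table and continue with the rest of the list. A minimal sketch under that assumption (the interface and exception below are hypothetical stand-ins for the HMS client, not the actual patch):

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionDiscoverySketch {
    // Hypothetical stand-ins for the HMS client call and its missing-table error.
    static class NoSuchObjectException extends Exception {}

    interface TableResolver {
        String getTable(String name) throws NoSuchObjectException;
    }

    // Resolve each table from the previously fetched TableMeta list; a table
    // dropped in the meantime is skipped instead of aborting the whole PMT run.
    static List<String> resolveExisting(List<String> names, TableResolver resolver) {
        List<String> found = new ArrayList<>();
        for (String name : names) {
            try {
                found.add(resolver.getTable(name));
            } catch (NoSuchObjectException e) {
                // table was dropped after TableMeta was fetched; log and continue
            }
        }
        return found;
    }
}
```

Partition discovery and MSCK then still run for the remaining valid tables.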
[jira] [Updated] (HIVE-24210) PartitionManagementTask fails if one of tables dropped after fetching TableMeta
[ https://issues.apache.org/jira/browse/HIVE-24210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-24210: -- Description: After fetching tableMeta based on configured dbPattern & tablePattern for PMT https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/PartitionManagementTask.java#L125 If one of the tables dropped before scheduling AutoPartition Discovery or MSCK, then entire PMT will be stopped because of below exception even though we can run MSCK for other valid tables. {code:java} 2020-09-21T10:45:15,875 ERROR [pool-4-thread-150]: metastore.PartitionManagementTask (PartitionManagementTask.java:run(163)) - Exception while running partition discovery task for table: null org.apache.hadoop.hive.metastore.api.NoSuchObjectException: hive.default.test_table table not found at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:3391) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3315) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3291) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) at com.sun.proxy.$Proxy30.get_table_req(Unknown Source) ~[?:?] at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1804) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1791) at org.apache.hadoop.hive.metastore.PartitionManagementTask.run(PartitionManagementTask.java:130){code} Exception is thrown from here. 
[https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/PartitionManagementTask.java#L130] was: After fetching tableMeta based on configured dbPattern & tablePattern for PMT [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/PartitionManagementTask.java#L130] If one of the tables dropped before scheduling AutoPartition Discovery or MSCK, then entire PMT will be stopped because of below exception even though we can run MSCK for other valid tables. {code:java} 2020-09-21T10:45:15,875 ERROR [pool-4-thread-150]: metastore.PartitionManagementTask (PartitionManagementTask.java:run(163)) - Exception while running partition discovery task for table: null org.apache.hadoop.hive.metastore.api.NoSuchObjectException: hive.default.test_table table not found at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:3391) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3315) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3291) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) at com.sun.proxy.$Proxy30.get_table_req(Unknown Source) ~[?:?] at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1804) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1791) at org.apache.hadoop.hive.metastore.PartitionManagementTask.run(PartitionManagementTask.java:130){code} Exception is thrown from here. 
[https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/PartitionManagementTask.java#L130] > PartitionManagementTask fails if one of tables dropped after fetching > TableMeta > --- > > Key: HIVE-24210 > URL: https://issues.apache.org/jira/browse/HIVE-24210 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > After fetching tableMeta based on configured dbPattern & tablePattern for PMT > https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apa
[jira] [Updated] (HIVE-26526) MSCK sync is not removing partitions with special characters
[ https://issues.apache.org/jira/browse/HIVE-26526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-26526: -- Issue Type: Bug (was: New Feature) > MSCK sync is not removing partitions with special characters > > > Key: HIVE-26526 > URL: https://issues.apache.org/jira/browse/HIVE-26526 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Priority: Major > > PARTITIONS table were having encoding string & PARTITION_KEY_VALS were having > original string. > {code:java} > hive=> select * from "PARTITION_KEY_VALS" where "PART_ID" IN (46753, 46754, > 46755, 46756); > PART_ID | PART_KEY_VAL | INTEGER_IDX > -+-+- > 46753 | 2022-02-* | 0 > 46754 | 2011-03-01 | 0 > 46755 | 2022-01-* | 0 > 46756 | 2010-01-01 | 0 > > > hive=> select * from "PARTITIONS" where "TBL_ID" = 23567 ; > PART_ID | CREATE_TIME | LAST_ACCESS_TIME | PART_NAME | SD_ID | > TBL_ID | WRITE_ID > -+-+--+---+---++-- > 46753 | 0 | 0 | part_date=2022-02-%2A | 70195 | > 23567 | 0 > 46754 | 0 | 0 | part_date=2011-03-01 | 70196 | > 23567 | 0 > 46755 | 0 | 0 | part_date=2022-01-%2A | 70197 | > 23567 | 0 > 46756 | 0 | 0 | part_date=2010-01-01 | 70198 | > 23567 | 0 > (4 rows){code} > > 1) DirectSQL has a join condition on PARTITION_KEY_VALS.PART_KEY_VAL = > "2022-02-%2A" at here > https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java#L883 > 2) Jdo is having filter condition on PARTITIONS.PART_NAME = > "part_date=2022-02-%252A" (ie., 2 times url encoded) > Once from HS2 > https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java#L353 > 2nd from HMS > [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/parser/ExpressionTree.java#L365] > Above conditions returns 0 partitions, so those are not removed from HMS > 
metadata. > > Attaching repro q file -- This message was sent by Atlassian Jira (v8.20.10#820010)
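The mismatch is easiest to see with the escaping itself. The helper below is a simplified stand-in for Hive's FileUtils.escapePathName (the real method escapes a much larger character set); it shows how escaping an already-escaped name produces the doubly encoded %252A form that matches no stored row:

```java
public class EscapeSketch {
    // Simplified stand-in for Hive's FileUtils.escapePathName: percent-encode
    // a small set of special characters (assumption: '*', '%', '/' only here).
    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c == '*' || c == '%' || c == '/') {
                sb.append('%').append(String.format("%02X", (int) c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

One pass turns 2022-02-* into 2022-02-%2A (what PARTITIONS.PART_NAME stores); a second pass over the already-escaped value turns the '%' into %25, yielding 2022-02-%252A — the twice-encoded filter value the Jdo path compares against, which matches nothing.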
[jira] [Updated] (HIVE-26526) MSCK sync is not removing partitions with special characters
[ https://issues.apache.org/jira/browse/HIVE-26526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-26526: -- Attachment: test.q > MSCK sync is not removing partitions with special characters > > > Key: HIVE-26526 > URL: https://issues.apache.org/jira/browse/HIVE-26526 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Priority: Major > Attachments: test.q > > > PARTITIONS table were having encoding string & PARTITION_KEY_VALS were having > original string. > {code:java} > hive=> select * from "PARTITION_KEY_VALS" where "PART_ID" IN (46753, 46754, > 46755, 46756); > PART_ID | PART_KEY_VAL | INTEGER_IDX > -+-+- > 46753 | 2022-02-* | 0 > 46754 | 2011-03-01 | 0 > 46755 | 2022-01-* | 0 > 46756 | 2010-01-01 | 0 > > > hive=> select * from "PARTITIONS" where "TBL_ID" = 23567 ; > PART_ID | CREATE_TIME | LAST_ACCESS_TIME | PART_NAME | SD_ID | > TBL_ID | WRITE_ID > -+-+--+---+---++-- > 46753 | 0 | 0 | part_date=2022-02-%2A | 70195 | > 23567 | 0 > 46754 | 0 | 0 | part_date=2011-03-01 | 70196 | > 23567 | 0 > 46755 | 0 | 0 | part_date=2022-01-%2A | 70197 | > 23567 | 0 > 46756 | 0 | 0 | part_date=2010-01-01 | 70198 | > 23567 | 0 > (4 rows){code} > > 1) DirectSQL has a join condition on PARTITION_KEY_VALS.PART_KEY_VAL = > "2022-02-%2A" at here > https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java#L883 > 2) Jdo is having filter condition on PARTITIONS.PART_NAME = > "part_date=2022-02-%252A" (ie., 2 times url encoded) > Once from HS2 > https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java#L353 > 2nd from HMS > [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/parser/ExpressionTree.java#L365] > Above conditions returns 0 partitions, so those are not removed 
from HMS > metadata. > > Attaching repro q file -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-26495) MSCK repair perf issue HMSChecker ThreadPool is blocked at fs.listStatus
[ https://issues.apache.org/jira/browse/HIVE-26495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17609601#comment-17609601 ] Naresh P R commented on HIVE-26495: --- Thank you for the review & merge [~srahman] [~ayushtkn] > MSCK repair perf issue HMSChecker ThreadPool is blocked at fs.listStatus > > > Key: HIVE-26495 > URL: https://issues.apache.org/jira/browse/HIVE-26495 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0-alpha-2 > > Time Spent: 2h > Remaining Estimate: 0h > > With hive.metastore.fshandler.threads = 15, all 15 *MSCK-GetPaths-xx* are > stuck at the following trace. > {code:java} > "MSCK-GetPaths-11" #12345 daemon prio=5 os_prio=0 tid= nid= waiting on > condition [0x7f9f099a6000] > java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x0003f92d1668> (a > java.util.concurrent.CompletableFuture$Signaller) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) > at > java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707) > at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323) > ... 
> at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:3230) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1953) > at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1995) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreChecker$PathDepthInfoCallable.processPathDepthInfo(HiveMetaStoreChecker.java:550) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreChecker$PathDepthInfoCallable.call(HiveMetaStoreChecker.java:543) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreChecker$PathDepthInfoCallable.call(HiveMetaStoreChecker.java:525) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750){code} > We should take advantage of the non-blocking listStatusIterator instead of > listStatus, which is a blocking call. -- This message was sent by Atlassian Jira (v8.20.10#820010)
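The iterator-based pattern the last line of the issue refers to can be sketched as follows (the RemoteIterator interface here is a hypothetical stand-in for Hadoop's org.apache.hadoop.fs.RemoteIterator, so the sketch stays self-contained):

```java
import java.util.function.Predicate;

public class ListStatusSketch {
    // Hypothetical stand-in for Hadoop's RemoteIterator<FileStatus>.
    interface RemoteIterator<T> {
        boolean hasNext() throws Exception;
        T next() throws Exception;
    }

    // Consume directory entries incrementally as the listing is produced,
    // instead of blocking until the full listStatus() result is materialized.
    static int countDirs(RemoteIterator<String> entries, Predicate<String> isDir)
            throws Exception {
        int dirs = 0;
        while (entries.hasNext()) {
            if (isDir.test(entries.next())) {
                dirs++; // each entry handled as soon as it is available
            }
        }
        return dirs;
    }
}
```

Against paged object stores like S3 this lets each MSCK-GetPaths worker start processing the first page of results while later pages are still being fetched.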
[jira] [Created] (HIVE-27964) Support drop stats similar to Impala
Naresh P R created HIVE-27964: - Summary: Support drop stats similar to Impala Key: HIVE-27964 URL: https://issues.apache.org/jira/browse/HIVE-27964 Project: Hive Issue Type: New Feature Reporter: Naresh P R Hive should support drop stats similar to Impala. https://impala.apache.org/docs/build/html/topics/impala_drop_stats.html -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-27964) Support drop stats similar to Impala
[ https://issues.apache.org/jira/browse/HIVE-27964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17799522#comment-17799522 ] Naresh P R commented on HIVE-27964: --- Partition table rename gets clogged at PART_COL_STATS for wide tables. {code:java} CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( ... `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, ...){code} Since PART_COL_STATS holds db_name & table_name, in case of a table rename, every row in PART_COL_STATS associated with the table must be fetched, stored in memory, dropped & re-added with the new table name. Instead, clearing the stats before the rename & recomputing them later would speed up the process. Another optimization I was about to raise is to remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS & use PART_ID as a FOREIGN KEY from PARTITIONS to avoid touching PART_COL_STATS for table/partition renames. > Support drop stats similar to Impala > > > Key: HIVE-27964 > URL: https://issues.apache.org/jira/browse/HIVE-27964 > Project: Hive > Issue Type: New Feature >Reporter: Naresh P R >Priority: Major > > Hive should support drop stats similar to Impala. > https://impala.apache.org/docs/build/html/topics/impala_drop_stats.html -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HIVE-27964) Support drop stats similar to Impala
[ https://issues.apache.org/jira/browse/HIVE-27964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17799522#comment-17799522 ] Naresh P R edited comment on HIVE-27964 at 12/21/23 5:47 PM: - Partition table rename gets clogged at PART_COL_STATS for wide tables. {code:java} CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( ... `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, ...){code} Since PART_COL_STATS holds db_name & table_name, incase of table rename, every row in PART_COL_STATS associated with the table should be fetched, stored in memory, delete & re-insert with new db/table/partition name. Instead clearing the stats before rename & computing later would help to speed up the process. Another optimization i was about to raise is to remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS & use PART_ID as FOREIGN KEY from PARTITIONS to avoid touching PART_COL_STATS for table/partition renames. was (Author: nareshpr): Partition table rename gets clogged at PART_COL_STATS for wide tables. {code:java} CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( ... `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, ...){code} Since PART_COL_STATS holds db_name & table_name, incase of table rename, every row in PART_COL_STATS associated with the table should be fetched, stored in memory, dropped & re-added with new tableName. Instead clearing the stats before rename & computing later would help to speed up the process. 
Another optimization i was about to raise is to remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS & use PART_ID as FOREIGN KEY from PARTITIONS to avoid touching PART_COL_STATS for table/partition renames. > Support drop stats similar to Impala > > > Key: HIVE-27964 > URL: https://issues.apache.org/jira/browse/HIVE-27964 > Project: Hive > Issue Type: New Feature >Reporter: Naresh P R >Priority: Major > > Hive should support drop stats similar to impala. > https://impala.apache.org/docs/build/html/topics/impala_drop_stats.html -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27965) Table/partition rename takes a long time at PART_COL_STATS for wide tables
Naresh P R created HIVE-27965: - Summary: Table/partition rename takes a long time at PART_COL_STATS for wide tables Key: HIVE-27965 URL: https://issues.apache.org/jira/browse/HIVE-27965 Project: Hive Issue Type: Improvement Reporter: Naresh P R Partition table rename gets clogged at PART_COL_STATS for wide tables. {code:java} CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( ... `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, ...){code} Since PART_COL_STATS holds db_name & table_name, in case of a table rename, every row in PART_COL_STATS associated with the table must be fetched, stored in memory, deleted & re-inserted with the new db/table/partition name. Remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS & use PART_ID as a FOREIGN KEY from PARTITIONS to avoid touching PART_COL_STATS for table/partition renames. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HIVE-27964) Support drop stats similar to Impala
[ https://issues.apache.org/jira/browse/HIVE-27964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17799522#comment-17799522 ] Naresh P R edited comment on HIVE-27964 at 12/21/23 6:10 PM: - Partition table rename gets clogged at PART_COL_STATS for wide tables. {code:java} CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( ... `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, ...){code} Since PART_COL_STATS holds db_name & table_name, incase of table rename, every row in PART_COL_STATS associated with the table should be fetched, stored in memory, delete & re-insert with new db/table/partition name. Instead clearing the stats before rename & computing later would help to speed up the process. Another optimization i was about to raise is to remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS & use TBL_ID, DB_ID, PART_ID to avoid touching PART_COL_STATS for table/partition renames + can be used in indexes as well. was (Author: nareshpr): Partition table rename gets clogged at PART_COL_STATS for wide tables. {code:java} CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( ... `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, ...){code} Since PART_COL_STATS holds db_name & table_name, incase of table rename, every row in PART_COL_STATS associated with the table should be fetched, stored in memory, delete & re-insert with new db/table/partition name. Instead clearing the stats before rename & computing later would help to speed up the process. 
Another optimization i was about to raise is to remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS & use PART_ID as FOREIGN KEY from PARTITIONS to avoid touching PART_COL_STATS for table/partition renames. > Support drop stats similar to Impala > > > Key: HIVE-27964 > URL: https://issues.apache.org/jira/browse/HIVE-27964 > Project: Hive > Issue Type: New Feature >Reporter: Naresh P R >Priority: Major > > Hive should support drop stats similar to impala. > https://impala.apache.org/docs/build/html/topics/impala_drop_stats.html -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27965) Table/partition rename takes a long time at PART_COL_STATS for wide tables
[ https://issues.apache.org/jira/browse/HIVE-27965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27965: -- Description: Partition table rename gets clogged at PART_COL_STATS for wide tables. {code:java} CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( ... `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, ...){code} Since PART_COL_STATS holds db_name & table_name, in case of a table rename, every row in PART_COL_STATS associated with the table must be fetched, stored in memory, deleted & re-inserted with the new db/table/partition name. Remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS & use TBL_ID, DB_ID, PART_ID to avoid touching PART_COL_STATS for table/partition renames. Also TBL_ID, DB_ID, PART_ID can be used for PART_COL_STATS INDEXING. was: Partition table rename gets clogged at PART_COL_STATS for wide tables. {code:java} CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( ... `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, ...){code} Since PART_COL_STATS holds db_name & table_name, incase of table rename, every row in PART_COL_STATS associated with the table should be fetched, stored in memory, delete & re-insert with new db/table/partition name. Remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS & use PART_ID as FOREIGN KEY from PARTITIONS to avoid touching PART_COL_STATS for table/partition renames. 
> Table/partition rename takes a long time at PART_COL_STATS for wide tables > -- > > Key: HIVE-27965 > URL: https://issues.apache.org/jira/browse/HIVE-27965 > Project: Hive > Issue Type: Improvement >Reporter: Naresh P R >Priority: Major > > Partition table rename gets clogged at PART_COL_STATS for wide tables. > {code:java} > CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( > ... > `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, > `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, > `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT > NULL, > ...){code} > Since PART_COL_STATS holds db_name & table_name, in case of a table rename, > every row in PART_COL_STATS associated with the table must be fetched, > stored in memory, deleted & re-inserted with the new db/table/partition name. > > Remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS & use > TBL_ID, DB_ID, PART_ID to avoid touching PART_COL_STATS for table/partition > renames. > Also TBL_ID, DB_ID, PART_ID can be used for PART_COL_STATS INDEXING. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27965) Table/partition rename takes a long time at PART_COL_STATS for wide tables
[ https://issues.apache.org/jira/browse/HIVE-27965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-27965: -- Description: Partition table rename gets clogged at PART_COL_STATS for wide tables. {code:java} CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( ... `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, ...){code} Since PART_COL_STATS holds db_name & table_name, incase of table rename, every row in PART_COL_STATS associated with the table should be fetched, stored in memory, delete & re-insert with new db/table/partition name. Remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS, instead use TBL_ID, DB_ID, PART_ID to avoid touching PART_COL_STATS for table/partition renames. Also TBL_ID, DB_ID, PART_ID can be used for PART_COL_STATS INDEXING. was: Partition table rename gets clogged at PART_COL_STATS for wide tables. {code:java} CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( ... `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, ...){code} Since PART_COL_STATS holds db_name & table_name, incase of table rename, every row in PART_COL_STATS associated with the table should be fetched, stored in memory, delete & re-insert with new db/table/partition name. Remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS & use & use TBL_ID, DB_ID, PART_ID to avoid touching PART_COL_STATS for table/partition renames. Also TBL_ID, DB_ID, PART_ID can be used for PART_COL_STATS INDEXING. 
> Table/partition rename takes a long time at PART_COL_STATS for wide tables > -- > > Key: HIVE-27965 > URL: https://issues.apache.org/jira/browse/HIVE-27965 > Project: Hive > Issue Type: Improvement >Reporter: Naresh P R >Priority: Major > > Partition table rename gets clogged at PART_COL_STATS for wide tables. > {code:java} > CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( > ... > `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, > `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, > `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT > NULL, > ...){code} > Since PART_COL_STATS holds db_name & table_name, incase of table rename, > every row in PART_COL_STATS associated with the table should be fetched, > stored in memory, delete & re-insert with new db/table/partition name. > > Remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS, instead use > TBL_ID, DB_ID, PART_ID to avoid touching PART_COL_STATS for table/partition > renames. > Also TBL_ID, DB_ID, PART_ID can be used for PART_COL_STATS INDEXING. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HIVE-27964) Support drop stats similar to Impala
[ https://issues.apache.org/jira/browse/HIVE-27964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17799522#comment-17799522 ] Naresh P R edited comment on HIVE-27964 at 12/21/23 6:17 PM: - Partition table rename gets clogged at PART_COL_STATS for wide tables. {code:java} CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( ... `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, ...){code} Since PART_COL_STATS holds db_name & table_name, incase of table rename, every row in PART_COL_STATS associated with the table should be fetched, stored in memory, delete & re-insert with new db/table/partition name. Instead clearing the stats before rename & computing later would help to speed up the process. Just raised another optimization HIVE-27965, to remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS & use TBL_ID, DB_ID, PART_ID to avoid touching PART_COL_STATS for table/partition renames + can be used in indexes as well. was (Author: nareshpr): Partition table rename gets clogged at PART_COL_STATS for wide tables. {code:java} CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( ... `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, ...){code} Since PART_COL_STATS holds db_name & table_name, incase of table rename, every row in PART_COL_STATS associated with the table should be fetched, stored in memory, delete & re-insert with new db/table/partition name. Instead clearing the stats before rename & computing later would help to speed up the process. 
Another optimization i was about to raise is to remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS & use TBL_ID, DB_ID, PART_ID to avoid touching PART_COL_STATS for table/partition renames + can be used in indexes as well. > Support drop stats similar to Impala > > > Key: HIVE-27964 > URL: https://issues.apache.org/jira/browse/HIVE-27964 > Project: Hive > Issue Type: New Feature >Reporter: Naresh P R >Priority: Major > > Hive should support drop stats similar to impala. > https://impala.apache.org/docs/build/html/topics/impala_drop_stats.html -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-27965) Table/partition rename takes a long time at PART_COL_STATS for wide tables
[ https://issues.apache.org/jira/browse/HIVE-27965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17799649#comment-17799649 ] Naresh P R commented on HIVE-27965: --- [~zhangbutao] Yes, this helps. Thanks for letting me know. > Table/partition rename takes a long time at PART_COL_STATS for wide tables > -- > > Key: HIVE-27965 > URL: https://issues.apache.org/jira/browse/HIVE-27965 > Project: Hive > Issue Type: Improvement >Reporter: Naresh P R >Priority: Major > > Partition table rename gets clogged at PART_COL_STATS for wide tables. > {code:java} > CREATE TABLE IF NOT EXISTS `PART_COL_STATS` ( > ... > `DB_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, > `TABLE_NAME` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL, > `PARTITION_NAME` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin NOT > NULL, > ...){code} > Since PART_COL_STATS holds db_name & table_name, incase of table rename, > every row in PART_COL_STATS associated with the table should be fetched, > stored in memory, delete & re-insert with new db/table/partition name. > > Remove DB_NAME, TABLE_NAME, PARTITION_NAME from PART_COL_STATS, instead use > TBL_ID, DB_ID, PART_ID to avoid touching PART_COL_STATS for table/partition > renames. > Also TBL_ID, DB_ID, PART_ID can be used for PART_COL_STATS INDEXING. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-28117) add_months() with output_date_format returning wrong year on leap day
[ https://issues.apache.org/jira/browse/HIVE-28117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17827257#comment-17827257 ] Naresh P R commented on HIVE-28117: --- Can you try your use case with the 'yyyy-MM' format? e.g., select add_months(dt, -2, 'yyyy-MM') > add_months() with output_date_format returning wrong year on leap day > - > > Key: HIVE-28117 > URL: https://issues.apache.org/jira/browse/HIVE-28117 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.3 >Reporter: Jesse Petre >Priority: Minor > Attachments: 2024-03-11_12-11-11.png > > > I use an output_date_format option on the add_months() function like so: > {{select add_months(dt, -2, 'YYYY-MM')}} > On leap day, 2024-02-29, this incorrectly returned 2024-12. I expected > 2023-12. All other days it works fine; only leap day gave the wrong > result. > > Omitting the output date format makes it calculate the date correctly. > Including the output date format gives the wrong result. -- This message was sent by Atlassian Jira (v8.20.10#820010)
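The leap-day symptom is consistent with Java's week-year formatting pitfall: add_months treats 2024-02-29 (a month-end) as month-end and yields 2023-12-31, and an uppercase 'YYYY' pattern then prints the week-based year of that date. The format patterns in this report appear garbled, so 'YYYY-MM'/'yyyy-MM' are assumptions here; the sketch below is generic Java under US-locale week rules, not Hive's formatter:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.Locale;

public class WeekYearDemo {
    public static void main(String[] args) {
        // 2023-12-31 is a Sunday; under US week rules it belongs to
        // week 1 of 2024, so the week-year pattern "YYYY" reports 2024.
        java.util.Date d = new GregorianCalendar(2023, Calendar.DECEMBER, 31).getTime();
        System.out.println(new SimpleDateFormat("YYYY-MM", Locale.US).format(d)); // 2024-12
        System.out.println(new SimpleDateFormat("yyyy-MM", Locale.US).format(d)); // 2023-12
    }
}
```

Only the leap day maps to Dec 31 after subtracting two months, which would explain why every other input date formats correctly.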
[jira] [Created] (HIVE-28213) Incorrect results after insert-select from similar bucketed source & target table
Naresh P R created HIVE-28213: - Summary: Incorrect results after insert-select from similar bucketed source & target table Key: HIVE-28213 URL: https://issues.apache.org/jira/browse/HIVE-28213 Project: Hive Issue Type: Bug Reporter: Naresh P R Attachments: test.q Insert-select is not honoring bucketing if both source & target are bucketed on the same column. e.g., {code:java} CREATE EXTERNAL TABLE bucketing_table1 (id INT) CLUSTERED BY (id) SORTED BY (id ASC) INTO 32 BUCKETS stored as textfile; INSERT INTO TABLE bucketing_table1 VALUES (1), (2), (3), (4), (5); CREATE EXTERNAL TABLE bucketing_table2 like bucketing_table1; INSERT INTO TABLE bucketing_table2 select * from bucketing_table1;{code} id=1 => murmur_hash(1) % 32 means the row should go to the 29th bucket file. bucketing_table1 has id=1 in the 29th file, but bucketing_table2 doesn't have a 29th file because the insert-select didn't honor the bucketing. {code:java} SELECT count(*) FROM bucketing_table1 WHERE id = 1; === 1 //correct result SELECT count(*) FROM bucketing_table2 WHERE id = 1; === 0 // incorrect result{code} Workaround: set hive.tez.bucket.pruning=false; PS: Attaching repro file [^test.q] -- This message was sent by Atlassian Jira (v8.20.10#820010)
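The bucket-file arithmetic above can be sketched in Java. The hash values below are illustrative stand-ins, not Hive's actual Murmur3 output; the point is only how a signed 32-bit hash modulo 32 wraps into a bucket file number in [0, 32):

```java
public class BucketIndexDemo {
    // Map a (possibly negative) hash to a bucket index in [0, numBuckets),
    // mirroring how a bucket file number is derived from hash(col) % n.
    static int bucketIndex(int hash, int numBuckets) {
        return ((hash % numBuckets) + numBuckets) % numBuckets;
    }

    public static void main(String[] args) {
        // Illustrative hashes only: a negative hash must wrap around,
        // since Java's % keeps the sign of the dividend (-99 % 32 == -3).
        System.out.println(bucketIndex(-99, 32)); // -3 wraps to 29
        System.out.println(bucketIndex(70, 32));  // plain positive case: 6
    }
}
```

Note that the double-modulo form keeps every result in [0, n); an ad-hoc `CASE WHEN h > 0 THEN h ELSE n + h END` over a raw remainder would map a remainder of exactly 0 to bucket n instead of bucket 0.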
[jira] [Updated] (HIVE-28213) Incorrect results after insert-select from similar bucketed source & target table
[ https://issues.apache.org/jira/browse/HIVE-28213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-28213: -- Description: Insert-select is not honoring bucketing if both source & target are bucketed on same column. eg., {code:java} CREATE EXTERNAL TABLE bucketing_table1 (id INT) CLUSTERED BY (id) SORTED BY (id ASC) INTO 32 BUCKETS stored as textfile; INSERT INTO TABLE bucketing_table1 VALUES (1), (2), (3), (4), (5); CREATE EXTERNAL TABLE bucketing_table2 like bucketing_table1; INSERT INTO TABLE bucketing_table2 select * from bucketing_table1;{code} id=1 => murmur_hash(1) %32 should go to 29th bucket file. bucketing_table1 has id=1 at 29th file, but bucketing_table2 doesn't have 29th file because Insert-select dint honor the bucketing. {code:java} SELECT count(*) FROM bucketing_table1 WHERE id = 1; === 1 //correct result SELECT count(*) FROM bucketing_table2 WHERE id = 1; === 0 // incorrect result select *, INPUT__FILE__NAME from bucketing_table1; +--++ | bucketing_table1.id | input__file__name | +--++ | 2 | /bucketing_table1/04_0 | | 3 | /bucketing_table1/06_0 | | 5 | /bucketing_table1/15_0 | | 4 | /bucketing_table1/21_0 | | 1 | /bucketing_table1/29_0 | +--++ select *, INPUT__FILE__NAME from bucketing_table2; +-++ | bucketing_table2.id | input__file__name | +-++ | 2 | /bucketing_table2/00_0 | | 3 | /bucketing_table2/01_0 | | 5 | /bucketing_table2/02_0 | | 4 | /bucketing_table2/03_0 | | 1 | /bucketing_table2/04_0 | +--++{code} Workaround for read: hive.tez.bucket.pruning=false; PS: Attaching repro file [^test.q] was: Insert-select is not honoring bucketing if both source & target are bucketed on same column. 
eg., {code:java} CREATE EXTERNAL TABLE bucketing_table1 (id INT) CLUSTERED BY (id) SORTED BY (id ASC) INTO 32 BUCKETS stored as textfile; INSERT INTO TABLE bucketing_table1 VALUES (1), (2), (3), (4), (5); CREATE EXTERNAL TABLE bucketing_table2 like bucketing_table1; INSERT INTO TABLE bucketing_table2 select * from bucketing_table1;{code} id=1 => murmur_hash(1) %32 should go to 29th bucket file. bucketing_table1 has id=1 at 29th file, but bucketing_table2 doesn't have 29th file because Insert-select dint honor the bucketing. {code:java} SELECT count(*) FROM bucketing_table1 WHERE id = 1; === 1 //correct result SELECT count(*) FROM bucketing_table2 WHERE id = 1; === 0 // incorrect result{code} Workaround: hive.tez.bucket.pruning=false; PS: Attaching repro file [^test.q] > Incorrect results after insert-select from similar bucketed source & target > table > - > > Key: HIVE-28213 > URL: https://issues.apache.org/jira/browse/HIVE-28213 > Project: Hive > Issue Type: Bug >Reporter: Naresh P R >Priority: Major > Attachments: test.q > > > Insert-select is not honoring bucketing if both source & target are bucketed > on same column. > eg., > {code:java} > CREATE EXTERNAL TABLE bucketing_table1 (id INT) > CLUSTERED BY (id) > SORTED BY (id ASC) > INTO 32 BUCKETS stored as textfile; > INSERT INTO TABLE bucketing_table1 VALUES (1), (2), (3), (4), (5); > CREATE EXTERNAL TABLE bucketing_table2 like bucketing_table1; > INSERT INTO TABLE bucketing_table2 select * from bucketing_table1;{code} > id=1 => murmur_hash(1) %32 should go to 29th bucket file. > bucketing_table1 has id=1 at 29th file, > but bucketing_table2 doesn't have 29th file because Insert-select dint honor > the bucketing. 
> {code:java} > SELECT count(*) FROM bucketing_table1 WHERE id = 1; > === > 1 //correct result > SELECT count(*) FROM bucketing_table2 WHERE id = 1; > === > 0 // incorrect result > select *, INPUT__FILE__NAME from bucketing_table1; > +--++ > | bucketing_table1.id | input__file__name | > +--++ > | 2 | /bucketing_table1/04_0 | > | 3 | /bucketing_table1/06_0 | > | 5 | /bucketing_table1/15_0 | > | 4 | /bucketing_table1/21_0 | > | 1
[jira] [Updated] (HIVE-28213) Incorrect results after insert-select from similar bucketed source & target table
[ https://issues.apache.org/jira/browse/HIVE-28213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-28213: -- Description: Insert-select is not honoring bucketing if both source & target are bucketed on same column. eg., {code:java} CREATE EXTERNAL TABLE bucketing_table1 (id INT) CLUSTERED BY (id) SORTED BY (id ASC) INTO 32 BUCKETS stored as textfile; INSERT INTO TABLE bucketing_table1 VALUES (1), (2), (3), (4), (5); CREATE EXTERNAL TABLE bucketing_table2 like bucketing_table1; INSERT INTO TABLE bucketing_table2 select * from bucketing_table1;{code} id=1 => murmur_hash(1) %32 should go to 29th bucket file. bucketing_table1 has id=1 at 29th file, but bucketing_table2 doesn't have 29th file because Insert-select dint honor the bucketing. {code:java} SELECT count(*) FROM bucketing_table1 WHERE id = 1; === 1 //correct result SELECT count(*) FROM bucketing_table2 WHERE id = 1; === 0 // incorrect result select *, INPUT__FILE__NAME from bucketing_table1; +--++ | bucketing_table1.id | input__file__name | +--++ | 2 | /bucketing_table1/04_0 | | 3 | /bucketing_table1/06_0 | | 5 | /bucketing_table1/15_0 | | 4 | /bucketing_table1/21_0 | | 1 | /bucketing_table1/29_0 | +--++ select *, INPUT__FILE__NAME from bucketing_table2; +-++ | bucketing_table2.id | input__file__name | +-++ | 2 | /bucketing_table2/00_0 | | 3 | /bucketing_table2/01_0 | | 5 | /bucketing_table2/02_0 | | 4 | /bucketing_table2/03_0 | | 1 | /bucketing_table2/04_0 | +--++{code} Query to identify in which bucketFile a particular row should be {code:java} with t as (select *, murmur_hash(id)%32 as bucket, INPUT__FILE__NAME from bucketing_table1) select id, (case when bucket > 0 then bucket else 32 + bucket end) as bucket_number, INPUT__FILE__NAME from t; +-+++ | id | bucket_number | input__file__name | +-+++ | 2 | 4 | /bucketing_table1/04_0 | | 3 | 6 | /bucketing_table1/06_0 | | 5 | 15 | /bucketing_table1/15_0 | | 4 | 21 | /bucketing_table1/21_0 | | 1 | 29 | 
/bucketing_table1/29_0 | +-+++{code} Workaround for read: hive.tez.bucket.pruning=false; PS: Attaching repro file [^test.q] was: Insert-select is not honoring bucketing if both source & target are bucketed on same column. eg., {code:java} CREATE EXTERNAL TABLE bucketing_table1 (id INT) CLUSTERED BY (id) SORTED BY (id ASC) INTO 32 BUCKETS stored as textfile; INSERT INTO TABLE bucketing_table1 VALUES (1), (2), (3), (4), (5); CREATE EXTERNAL TABLE bucketing_table2 like bucketing_table1; INSERT INTO TABLE bucketing_table2 select * from bucketing_table1;{code} id=1 => murmur_hash(1) %32 should go to 29th bucket file. bucketing_table1 has id=1 at 29th file, but bucketing_table2 doesn't have 29th file because Insert-select dint honor the bucketing. {code:java} SELECT count(*) FROM bucketing_table1 WHERE id = 1; === 1 //correct result SELECT count(*) FROM bucketing_table2 WHERE id = 1; === 0 // incorrect result select *, INPUT__FILE__NAME from bucketing_table1; +--++ | bucketing_table1.id | input__file__name | +--++ | 2 | /bucketing_table1/04_0 | | 3 | /bucketing_table1/06_0 | | 5 | /bucketing_table1/15_0 | | 4 | /bucketing_table1/21_0 | | 1 | /bucketing_table1/29_0 | +--++ select *, INPUT__FILE__NAME from bucketing_table2; +-++ | bucketing_table2.id | input__file__name | +-++ | 2 | /bucketing_table2/00_0 | | 3 | /bucketing_table2/01_0 | | 5
[jira] [Updated] (HIVE-28213) Incorrect results after insert-select from similar bucketed source & target table
[ https://issues.apache.org/jira/browse/HIVE-28213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-28213: -- Description: Insert-select is not honoring bucketing if both source & target are bucketed on same column. eg., {code:java} CREATE EXTERNAL TABLE bucketing_table1 (id INT) CLUSTERED BY (id) SORTED BY (id ASC) INTO 32 BUCKETS stored as textfile; INSERT INTO TABLE bucketing_table1 VALUES (1), (2), (3), (4), (5); CREATE EXTERNAL TABLE bucketing_table2 like bucketing_table1; INSERT INTO TABLE bucketing_table2 select * from bucketing_table1;{code} id=1 => murmur_hash(1) %32 should go to 29th bucket file. bucketing_table1 has id=1 at 29th file, but bucketing_table2 doesn't have 29th file because Insert-select dint honor the bucketing. {code:java} SELECT count(*) FROM bucketing_table1 WHERE id = 1; === 1 //correct result SELECT count(*) FROM bucketing_table2 WHERE id = 1; === 0 // incorrect result select *, INPUT__FILE__NAME from bucketing_table1; +--++ | bucketing_table1.id | input__file__name | +--++ | 2 | /bucketing_table1/04_0 | | 3 | /bucketing_table1/06_0 | | 5 | /bucketing_table1/15_0 | | 4 | /bucketing_table1/21_0 | | 1 | /bucketing_table1/29_0 | +--++ select *, INPUT__FILE__NAME from bucketing_table2; +-++ | bucketing_table2.id | input__file__name | +-++ | 2 | /bucketing_table2/00_0 | | 3 | /bucketing_table2/01_0 | | 5 | /bucketing_table2/02_0 | | 4 | /bucketing_table2/03_0 | | 1 | /bucketing_table2/04_0 | +--++{code} Workaround for read: hive.tez.bucket.pruning=false; PS: Attaching repro file [^test.q] was: Insert-select is not honoring bucketing if both source & target are bucketed on same column. 
eg., {code:java} CREATE EXTERNAL TABLE bucketing_table1 (id INT) CLUSTERED BY (id) SORTED BY (id ASC) INTO 32 BUCKETS stored as textfile; INSERT INTO TABLE bucketing_table1 VALUES (1), (2), (3), (4), (5); CREATE EXTERNAL TABLE bucketing_table2 like bucketing_table1; INSERT INTO TABLE bucketing_table2 select * from bucketing_table1;{code} id=1 => murmur_hash(1) %32 should go to 29th bucket file. bucketing_table1 has id=1 at 29th file, but bucketing_table2 doesn't have 29th file because Insert-select dint honor the bucketing. {code:java} SELECT count(*) FROM bucketing_table1 WHERE id = 1; === 1 //correct result SELECT count(*) FROM bucketing_table2 WHERE id = 1; === 0 // incorrect result select *, INPUT__FILE__NAME from bucketing_table1; +--++ | bucketing_table1.id | input__file__name | +--++ | 2 | /bucketing_table1/04_0 | | 3 | /bucketing_table1/06_0 | | 5 | /bucketing_table1/15_0 | | 4 | /bucketing_table1/21_0 | | 1 | /bucketing_table1/29_0 | +--++ select *, INPUT__FILE__NAME from bucketing_table2; +-++ | bucketing_table2.id | input__file__name | +-++ | 2 | /bucketing_table2/00_0 | | 3 | /bucketing_table2/01_0 | | 5 | /bucketing_table2/02_0 | | 4 | /bucketing_table2/03_0 | | 1 | /bucketing_table2/04_0 | +--++{code} Query to identify in which bucketFile a particular row should be {code:java} with t as (select *, murmur_hash(id)%32 as bucket, INPUT__FILE__NAME from bucketing_table1) select id, (case when bucket > 0 then bucket else 32 + bucket end) as bucket_number, INPUT__FILE__NAME from t; +-+++ | id | bucket_number | input__file__name | +-+++ | 2 | 4 | /bucketing_table1/04_0 | | 3 | 6 | /bucketing_table1/06_0 | | 5 | 15 | /bucketing_ta
[jira] [Assigned] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R reassigned HIVE-20599: - > CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException > --- > > Key: HIVE-20599 > URL: https://issues.apache.org/jira/browse/HIVE-20599 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 3.1.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Fix For: 3.1.0 > > > SELECT CAST(from_utc_timestamp(timestamp '2018-05-02 15:30:30', 'PST') - > from_utc_timestamp(timestamp '1970-01-30 16:00:00', 'PST') AS STRING); > throws below Exception > {code:java} > Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 > Wrong arguments ''PST'': No matching method for class > org.apache.hadoop.hive.ql.udf.UDFToString with (interval_day_time). Possible > choices: _FUNC_(bigint) _FUNC_(binary) _FUNC_(boolean) _FUNC_(date) > _FUNC_(decimal(38,18)) _FUNC_(double) _FUNC_(float) _FUNC_(int) > _FUNC_(smallint) _FUNC_(string) _FUNC_(timestamp) _FUNC_(tinyint) > _FUNC_(void) (state=42000,code=4){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-20599: -- Attachment: HIVE-20599.1-branch-3.1.patch Status: Patch Available (was: In Progress)
[jira] [Work started] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-20599 started by Naresh P R.
[jira] [Updated] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-20599: -- Attachment: HIVE-20599.1.patch
[jira] [Commented] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16621994#comment-16621994 ] Naresh P R commented on HIVE-20599: --- The test case failures are not related to the changes.
[jira] [Updated] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-20599: -- Attachment: HIVE-20599.2-branch-3.1.patch
[jira] [Updated] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-20599: -- Attachment: (was: HIVE-20599.2-branch-3.1.patch)
[jira] [Updated] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-20599: -- Attachment: HIVE-20599-branch-3.patch
[jira] [Updated] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-20599: -- Attachment: HIVE-20599.1-branch-3.patch
[jira] [Commented] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16623728#comment-16623728 ] Naresh P R commented on HIVE-20599: --- Rebased to branch-3 and attached a new patch.
[jira] [Updated] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-20599: -- Attachment: HIVE-20599.3.patch
[jira] [Updated] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
[ https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-20599: -- Attachment: HIVE-20599.4.patch
[jira] [Updated] (HIVE-18112) show create for view having special char in where clause is not showing properly
[ https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-18112: -- Attachment: (was: HIVE-18112-branch-2.2.patch) > show create for view having special char in where clause is not showing > properly > > > Key: HIVE-18112 > URL: https://issues.apache.org/jira/browse/HIVE-18112 > Project: Hive > Issue Type: Bug >Affects Versions: 1.2.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Minor > Fix For: 2.2.0 > > > e.g., > CREATE VIEW `v2` AS select `evil_byte1`.`a` from `default`.`EVIL_BYTE1` where > `evil_byte1`.`a` = 'abcÖdefÖgh'; > Output: > == > 0: jdbc:hive2://172.26.122.227:1> show create table v2; > ++--+ > | createtab_stmt >| > ++--+ > | CREATE VIEW `v2` AS select `evil_byte1`.`a` from `default`.`EVIL_BYTE1` > where `evil_byte1`.`a` = 'abc�def�gh' | > ++--+ > Only show create output is having invalid characters, actual source table > content is displayed properly in the console. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
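The replacement characters in the SHOW CREATE output are the classic symptom of bytes produced in one charset being decoded with another. A minimal, self-contained illustration of the corruption mechanism (not the actual Hive fix):

```java
import java.nio.charset.StandardCharsets;

public class CharsetMojibake {
    public static void main(String[] args) {
        String original = "abcÖdefÖgh";

        // UTF-8 encodes each Ö as two bytes (0xC3 0x96).
        byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8);

        // Decoding those bytes as US-ASCII cannot represent 0xC3/0x96, so each
        // such byte becomes the replacement character U+FFFD -- the same kind
        // of corruption seen in the SHOW CREATE output in the bug report.
        String garbled = new String(utf8Bytes, StandardCharsets.US_ASCII);
        System.out.println(garbled);  // contains U+FFFD replacement chars

        // Decoding with the charset that produced the bytes round-trips cleanly.
        String roundTrip = new String(utf8Bytes, StandardCharsets.UTF_8);
        System.out.println(roundTrip.equals(original));
    }
}
```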
[jira] [Updated] (HIVE-18112) show create for view having special char in where clause is not showing properly
[ https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-18112: -- Attachment: HIVE-18112-branch-2.2.patch
[jira] [Commented] (HIVE-18112) show create for view having special char in where clause is not showing properly
[ https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285505#comment-16285505 ] Naresh P R commented on HIVE-18112: --- I verified the failing test cases locally; the failures are not related to the patch.
[jira] [Updated] (HIVE-18112) show create for view having special char in where clause is not showing properly
[ https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-18112: -- Attachment: HIVE-18112.1-branch-2.2.diff
[jira] [Updated] (HIVE-18112) show create for view having special char in where clause is not showing properly
[ https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-18112: -- Attachment: (was: HIVE-18112.1-branch-2.2.diff)
[jira] [Updated] (HIVE-18112) show create for view having special char in where clause is not showing properly
[ https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-18112: -- Attachment: HIVE-18112.1-branch-2.2.patch
[jira] [Commented] (HIVE-18112) show create for view having special char in where clause is not showing properly
[ https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285593#comment-16285593 ] Naresh P R commented on HIVE-18112: --- Thanks for the review [~sankarh], I attached a new patch with the fix for tables as well.
[jira] [Commented] (HIVE-18112) show create for view having special char in where clause is not showing properly
[ https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287101#comment-16287101 ] Naresh P R commented on HIVE-18112: --- I verified the failing test cases locally; the failures are not related to the patch. [~sankarh], can you please review and merge this patch into branch-2.2?
[jira] [Updated] (HIVE-18112) show create for view having special char in where clause is not showing properly
[ https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-18112: -- Attachment: HIVE-18112.2-branch-2.2.patch
[jira] [Commented] (HIVE-18112) show create for view having special char in where clause is not showing properly
[ https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288795#comment-16288795 ] Naresh P R commented on HIVE-18112: --- Thanks for the update [~owen.omalley]. I have attached a new patch with the suggested changes.
[jira] [Work started] (HIVE-16906) Hive ATSHook should check for yarn.timeline-service.enabled before connecting to ATS
[ https://issues.apache.org/jira/browse/HIVE-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-16906 started by Naresh P R. - > Hive ATSHook should check for yarn.timeline-service.enabled before connecting > to ATS > > > Key: HIVE-16906 > URL: https://issues.apache.org/jira/browse/HIVE-16906 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 1.2.2 >Reporter: Prabhu Joseph >Assignee: Naresh P R >Priority: Major > Attachments: HIVE-16906.1.patch > > > Hive ATShook has to check yarn.timeline-service.enabled (Indicate to clients > whether timeline service is enabled or not. If enabled, clients will put > entities and events to the timeline server.) before creating TimelineClient -- This message was sent by Atlassian JIRA (v7.6.3#76005)
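The guard this issue asks for amounts to reading the boolean yarn.timeline-service.enabled before constructing any timeline client. A minimal sketch, with a plain string map standing in for the Hadoop Configuration object (the real hook would call conf.getBoolean(...) and only then build the TimelineClient):

```java
import java.util.Map;

public class TimelineGuard {
    static final String TIMELINE_SERVICE_ENABLED = "yarn.timeline-service.enabled";

    // True only when the property is present and set to "true",
    // mirroring conf.getBoolean(key, false) on a Hadoop Configuration.
    static boolean timelineServiceEnabled(Map<String, String> conf) {
        return Boolean.parseBoolean(conf.getOrDefault(TIMELINE_SERVICE_ENABLED, "false"));
    }

    public static void main(String[] args) {
        Map<String, String> conf = Map.of(TIMELINE_SERVICE_ENABLED, "false");
        if (!timelineServiceEnabled(conf)) {
            // Skip creating the TimelineClient entirely instead of failing later.
            System.out.println("ATS disabled; hook is a no-op");
        }
    }
}
```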
[jira] [Updated] (HIVE-16906) Hive ATSHook should check for yarn.timeline-service.enabled before connecting to ATS
[ https://issues.apache.org/jira/browse/HIVE-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-16906: -- Attachment: HIVE-16906.1.patch
[jira] [Assigned] (HIVE-16906) Hive ATSHook should check for yarn.timeline-service.enabled before connecting to ATS
[ https://issues.apache.org/jira/browse/HIVE-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R reassigned HIVE-16906: - Assignee: Naresh P R (was: Bing Li)
[jira] [Updated] (HIVE-16906) Hive ATSHook should check for yarn.timeline-service.enabled before connecting to ATS
[ https://issues.apache.org/jira/browse/HIVE-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-16906: -- Attachment: (was: HIVE-16906.1.patch)
[jira] [Updated] (HIVE-16906) Hive ATSHook should check for yarn.timeline-service.enabled before connecting to ATS
[ https://issues.apache.org/jira/browse/HIVE-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-16906: -- Attachment: HIVE-16906.1.patch
[jira] [Updated] (HIVE-16906) Hive ATSHook should check for yarn.timeline-service.enabled before connecting to ATS
[ https://issues.apache.org/jira/browse/HIVE-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Naresh P R updated HIVE-16906:
------------------------------
    Status: Patch Available  (was: In Progress)
[jira] [Updated] (HIVE-16906) Hive ATSHook should check for yarn.timeline-service.enabled before connecting to ATS
[ https://issues.apache.org/jira/browse/HIVE-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Naresh P R updated HIVE-16906:
------------------------------
    Attachment:     (was: HIVE-16906.1.patch)
[jira] [Updated] (HIVE-16906) Hive ATSHook should check for yarn.timeline-service.enabled before connecting to ATS
[ https://issues.apache.org/jira/browse/HIVE-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Naresh P R updated HIVE-16906:
------------------------------
    Attachment: HIVE-16906.1.patch
[jira] [Updated] (HIVE-16906) Hive ATSHook should check for yarn.timeline-service.enabled before connecting to ATS
[ https://issues.apache.org/jira/browse/HIVE-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Naresh P R updated HIVE-16906:
------------------------------
    Attachment: HIVE-16906.2.patch
[jira] [Updated] (HIVE-16906) Hive ATSHook should check for yarn.timeline-service.enabled before connecting to ATS
[ https://issues.apache.org/jira/browse/HIVE-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Naresh P R updated HIVE-16906:
------------------------------
    Attachment: HIVE-16906.3.patch