[jira] [Created] (HIVE-23710) Add table meta cache limit when starting Hive server2

2020-06-17 Thread Deegue (Jira)
Deegue created HIVE-23710:
-

 Summary: Add table meta cache limit when starting Hive server2
 Key: HIVE-23710
 URL: https://issues.apache.org/jira/browse/HIVE-23710
 Project: Hive
  Issue Type: Improvement
 Environment: Hive 2.3.6
Reporter: Deegue


When HiveServer2 starts up, it connects to the metastore to fetch table meta 
info database by database and caches it. If a database contains many tables, 
however, the call can exceed `hive.metastore.client.socket.timeout`.
An exception like the following is then thrown:
{noformat}
2020-06-17T11:38:27,595  WARN [main] metastore.RetryingMetaStoreClient: 
MetaStoreClient lost connection. Attempting to reconnect (1 of 1) after 1s. 
getTableObjectsByName
org.apache.thrift.transport.TTransportException: 
java.net.SocketTimeoutException: Read timed out
at 
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
 ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) 
~[hive-exec-2.3.6.jar:2.3.6]
at 
org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) 
~[hive-exec-2.3.6.jar:2.3.6]
at 
org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) 
~[hive-exec-2.3.6.jar:2.3.6]
at 
org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
 ~[hive-exec-2.3.6.jar:2.3.6]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77) 
~[hive-exec-2.3.6.jar:2.3.6]
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table_objects_by_name_req(ThriftHiveMetastore.java:1596)
 ~[hive-exec-2.3.6.jar:2.3.6]
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table_objects_by_name_req(ThriftHiveMetastore.java:1583)
 ~[hive-exec-2.3.6.jar:2.3.6]
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTableObjectsByName(HiveMetaStoreClient.java:1370)
 ~[hive-exec-2.3.6.jar:2.3.6]
at 
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.getTableObjectsByName(SessionHiveMetaStoreClient.java:238)
 ~[hive-exec-2.3.6.jar:2.3.6]
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source) ~[?:?]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_121]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:206)
 ~[hive-exec-2.3.6.jar:2.3.6]
at com.sun.proxy.$Proxy38.getTableObjectsByName(Unknown Source) ~[?:?]
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source) ~[?:?]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_121]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2336)
 ~[hive-exec-2.3.6.jar:2.3.6]
at com.sun.proxy.$Proxy38.getTableObjectsByName(Unknown Source) ~[?:?]
at 
org.apache.hadoop.hive.ql.metadata.Hive.getAllTableObjects(Hive.java:1343) 
~[hive-exec-2.3.6.jar:2.3.6]
at 
org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.init(HiveMaterializedViewsRegistry.java:127)
 ~[hive-exec-2.3.6.jar:2.3.6]
at 
org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:167) 
~[hive-service-2.3.6.jar:2.3.6]
at 
org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:607)
 ~[hive-service-2.3.6.jar:2.3.6]
at 
org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:100) 
~[hive-service-2.3.6.jar:2.3.6]
at 
org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:855)
 ~[hive-service-2.3.6.jar:2.3.6]
at 
org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:724) 
~[hive-service-2.3.6.jar:2.3.6]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[?:1.8.0_121]
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[?:1.8.0_121]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_121]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
at org.apache.hadoop.util.RunJar.run(RunJar.java:226) 
~[hadoop-common-2.6.0-cdh5.16.1.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:141) 
~[hadoop-common-2.6.0-cdh5.16.1.jar:?]
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[?:1.8.0_121]
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) 
~[?:1.8.0_121]
at ...
{noformat}

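One possible mitigation, sketched below under assumptions not stated in this report: fetch table objects from the metastore in bounded batches so that no single `getTableObjectsByName` RPC has to return every table in the database at once. The class name and the `BATCH_SIZE` constant here are hypothetical; HIVE-23710 proposes making a limit like this configurable, but no property name is given in the report.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BatchedTableFetch {

    // Hypothetical limit; the report implies the need for a cap, but
    // does not name a configuration property for it.
    static final int BATCH_SIZE = 1000;

    // Split the full table-name list into fixed-size batches so each
    // metastore RPC stays small and finishes well within
    // hive.metastore.client.socket.timeout.
    static List<List<String>> toBatches(List<String> tableNames) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < tableNames.size(); i += BATCH_SIZE) {
            batches.add(tableNames.subList(i,
                Math.min(i + BATCH_SIZE, tableNames.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        // 2500 dummy table names -> 3 batches of at most 1000 each.
        List<String> names = new ArrayList<>(Collections.nCopies(2500, "tbl"));
        System.out.println(toBatches(names).size()); // prints 3
    }
}
```

Each batch would then be passed to `getTableObjectsByName` separately instead of one oversized request.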
[jira] [Created] (HIVE-22830) Support ALL privilege in grant option and SQL authorization

2020-02-04 Thread Deegue (Jira)
Deegue created HIVE-22830:
-

 Summary: Support ALL privilege in grant option and SQL 
authorization
 Key: HIVE-22830
 URL: https://issues.apache.org/jira/browse/HIVE-22830
 Project: Hive
  Issue Type: Improvement
Reporter: Deegue


After upgrading from Hive 1.1.0 or another 1.x version, the ALL privilege 
should still be supported.

When user A, who has the ALL privilege on table t1, grants ALL on table t1 to 
user B, an exception like the following is thrown:

{code:java}
FAILED: HiveAuthzPluginException ALLUnsupported privilege type ALL
{code}
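A minimal sketch of the behavior this improvement asks for, assuming a simplified privilege model (the enum and method names below are hypothetical stand-ins, not Hive's actual authorization API): the legacy ALL keyword from 1.x grants is expanded into the full privilege set instead of being rejected as unsupported.

```java
import java.util.EnumSet;

public class PrivilegeMapping {

    // Simplified stand-in for Hive's privilege types; hypothetical.
    enum Priv { SELECT, INSERT, UPDATE, DELETE }

    // Expand the legacy ALL keyword into every concrete privilege
    // instead of failing with "Unsupported privilege type ALL".
    static EnumSet<Priv> expand(String name) {
        if ("ALL".equalsIgnoreCase(name)) {
            return EnumSet.allOf(Priv.class);
        }
        return EnumSet.of(Priv.valueOf(name.toUpperCase()));
    }

    public static void main(String[] args) {
        System.out.println(expand("ALL"));    // full privilege set
        System.out.println(expand("select")); // [SELECT]
    }
}
```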





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HIVE-22757) NullPointerException when executing SQLs

2020-01-21 Thread Deegue (Jira)
Deegue created HIVE-22757:
-

 Summary: NullPointerException when executing SQLs
 Key: HIVE-22757
 URL: https://issues.apache.org/jira/browse/HIVE-22757
 Project: Hive
  Issue Type: Bug
Affects Versions: 2.3.6
Reporter: Deegue


When executing SQL:


{code:sql}
insert overwrite table ods.ods_1 partition(stat_day='20191209')
select
id
,user_id
,teacher_user_id
,partner_user_id
,order_id
,barcode
,sub_order_id
,item_id
,sales
,refund
,teacher_profit
,partner_profit
,teacher_refund_profit
,partner_refund_profit
,teacher_commission_value
,partner_commission_value
,biz_type
,pay_time
,item_profit_type
,black_mark
,is_deleted
,create_time
,modify_time
from src.src_1
where partition_date='20191209'
union all
select
t1.id
,t1.user_id
,t1.teacher_user_id
,t1.partner_user_id
,t1.order_id
,t1.barcode
,t1.sub_order_id
,t1.item_id
,t1.sales
,t1.refund
,t1.teacher_profit
,t1.partner_profit
,t1.teacher_refund_profit
,t1.partner_refund_profit
,t1.teacher_commission_value
,t1.partner_commission_value
,t1.biz_type
,t1.pay_time
,t1.item_profit_type
,t1.black_mark
,t1.is_deleted
,t1.create_time
,t1.modify_time
from
(select *
from ods.ods_1
where stat_day='20191208'
) t1
left join
( select order_id
,sub_order_id
from src.src_1
where partition_date='20191209'
) t2
on t1.order_id=t2.order_id
and t1.sub_order_id=t2.sub_order_id
where t2.order_id is null
{code}

A `java.lang.NullPointerException` is thrown because the list 
`neededNestedColumnPaths` has not been initialized when the `addAll` method is 
invoked.
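The defensive fix can be sketched as follows. The field name follows the report, but the surrounding class is a hypothetical stand-in for `ProjectionPusher`, not Hive's actual code:

```java
import java.util.ArrayList;
import java.util.List;

public class ProjectionPaths {

    // Mirrors the neededNestedColumnPaths field named in the report;
    // left uninitialized here to reproduce the failure mode.
    List<String> neededNestedColumnPaths;

    // Guard: initialize the list before addAll so the call made from
    // pushProjectionsAndFilters cannot hit a null field.
    void addNestedColumnPaths(List<String> paths) {
        if (neededNestedColumnPaths == null) {
            neededNestedColumnPaths = new ArrayList<>();
        }
        neededNestedColumnPaths.addAll(paths);
    }

    public static void main(String[] args) {
        ProjectionPaths p = new ProjectionPaths();
        p.addNestedColumnPaths(List.of("a.b", "c.d")); // no NPE
        System.out.println(p.neededNestedColumnPaths.size()); // prints 2
    }
}
```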


{code:java}
Launching Job 5 out of 5
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1566481621886_4925755, Tracking URL = 
http://TXIDC65-bigdata-resourcemanager1:8042/proxy/application_1566481621886_4925755/
Kill Command = /usr/local/yunji/hadoop/bin/hadoop job  -kill 
job_1566481621886_4925755
Hadoop job information for Stage-4: number of mappers: 1; number of reducers: 0
2019-12-24 16:00:40,584 Stage-4 map = 0%,  reduce = 0%
2019-12-24 16:01:40,956 Stage-4 map = 0%,  reduce = 0%
2019-12-24 16:02:41,451 Stage-4 map = 0%,  reduce = 0%
2019-12-24 16:02:45,550 Stage-4 map = 100%,  reduce = 0%
Ended Job = job_1566481621886_4925755 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1566481621886_4925755_m_00 (and more) from job 
job_1566481621886_4925755

Task with the most failures(4):
-
Task ID:
  task_1566481621886_4925755_m_00

URL:
  
http://TXIDC65-bigdata-resourcemanager1:8088/taskdetails.jsp?jobid=job_1566481621886_4925755=task_1566481621886_4925755_m_00
-
Diagnostic Messages for this Task:
Error: java.io.IOException: java.lang.reflect.InvocationTargetException
at 
org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at 
org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:271)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.(HadoopShimsSecure.java:217)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:345)
at 
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:695)
at 
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.(MapTask.java:169)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:438)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:257)
... 11 more
Caused by: java.lang.NullPointerException
at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
at 
org.apache.hadoop.hive.ql.io.parquet.ProjectionPusher.pushProjectionsAndFilters(ProjectionPusher.java:118)
at ...
{code}