[jira] [Assigned] (HIVE-23790) The error message length of 2000 is exceeded for scheduled query

2020-07-01 Thread Aasha Medhi (Jira)


[ https://issues.apache.org/jira/browse/HIVE-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aasha Medhi reassigned HIVE-23790:
--

Assignee: Zoltan Haindrich  (was: Aasha Medhi)

> The error message length of 2000 is exceeded for scheduled query
> 
>
> Key: HIVE-23790
> URL: https://issues.apache.org/jira/browse/HIVE-23790
> Project: Hive
> Issue Type: Task
> Reporter: Aasha Medhi
> Assignee: Zoltan Haindrich
> Priority: Major
>
> {code:java}
> 2020-07-01 08:24:23,916 ERROR org.apache.thrift.server.TThreadPoolServer: [pool-7-thread-189]: Error occurred during processing of message.
> org.datanucleus.exceptions.NucleusUserException: Attempt to store value "FAILED: Execution Error, return code 30045 from org.apache.hadoop.hive.ql.exec.repl.DirCopyTask. Permission denied: user=hive, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:496)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:336)
>   at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:626)
>   at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkRangerPermission(RangerHdfsAuthorizer.java:388)
>   at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermissionWithContext(RangerHdfsAuthorizer.java:229)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:239)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1908)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1892)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1851)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3226)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1130)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:729)
>   at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:985)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:913)
>   at java.base/java.security.AccessController.doPrivileged(Native Method)
>   at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2882)
> " in column ""ERROR_MESSAGE"" that has maximum length of 2000. Please correct your data!
>   at org.datanucleus.store.rdbms.mapping.datastore.CharRDBMSMapping.setString(CharRDBMSMapping.java:254) ~[datanucleus-rdbms-4.1.19.jar:?]
>   at org.datanucleus.store.rdbms.mapping.java.SingleFieldMapping.setString(SingleFieldMapping.java:180) ~[datanucleus-rdbms-4.1.19.jar:?]
>   at org.datanucleus.store.rdbms.fieldmanager.ParameterSetter.storeStringField(ParameterSetter.java:158) ~[datanucleus-rdbms-4.1.19.jar:?]
>   at org.datanucleus.state.AbstractStateManager.providedStringField(AbstractStateManager.java:1448) ~[datanucleus-core-4.1.17.jar:?]
>   at org.datanucleus.state.StateManagerImpl.providedStringField(StateManagerImpl.java:120) ~[datanucleus-core-4.1.17.jar:?]
>   at org.apache.hadoop.hive.metastore.model.MScheduledExecution.dnProvideField(MScheduledExecution.java) ~[hive-exec-3.1.3000.7.2.1.0-246.jar:3.1.3000.7.2.1.0-246]
>   at org.apache.hadoop.hive.metastore.model.MScheduledExecution.dnProvideFields(MScheduledExecution.java) ~[hive-exec-3.1.3000.7.2.1.0-246.jar:3.1.3000.7.2.1.0-246]
>   at org.datanucleus.state.StateManagerImpl.provideFields(StateManagerImpl.java:1170) ~[datanucleus-core-4.1.17.jar:?]
>   at org.datanucleus.store.rdbms.request.UpdateRequest.execute(UpdateRequest.java:326) ~[datanucleus-rdbms-4.1.19.jar:?]
>   at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateObjectInTable(RDBMSPersistenceHandler.java:409) ~[datanucleus-rdbms-4.1.19.jar:?]
> {code}
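The NucleusUserException above is DataNucleus rejecting the metastore update because the value written to the ERROR_MESSAGE column of the scheduled execution record is longer than the column's 2000-character limit. One defensive option would be to clamp the message on the Hive side before it is handed to the metastore. Below is a minimal sketch of that idea; the class, method, and constant names are purely illustrative and are not Hive's actual API:

{code:java}
// Illustrative sketch only: clamp an error message to the assumed 2000-character
// limit of the ERROR_MESSAGE column before it is persisted, so DataNucleus never
// sees an oversized value. Names here are hypothetical, not Hive internals.
public final class ScheduledQueryErrorMessages {

  // Assumed to match the 2000-character limit reported in the exception above.
  private static final int MAX_ERROR_MESSAGE_LENGTH = 2000;
  private static final String TRUNCATION_MARKER = "...";

  private ScheduledQueryErrorMessages() {
  }

  /** Returns the message unchanged if it fits, otherwise cuts it and appends "...". */
  public static String clamp(String message) {
    if (message == null || message.length() <= MAX_ERROR_MESSAGE_LENGTH) {
      return message;
    }
    return message.substring(0, MAX_ERROR_MESSAGE_LENGTH - TRUNCATION_MARKER.length())
        + TRUNCATION_MARKER;
  }
}
{code}

With something like this in place, the scheduled-query status update could store clamp(errorMessage) rather than the raw DirCopyTask stack trace, leaving the full text to the server log.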

[jira] [Assigned] (HIVE-23790) The error message length of 2000 is exceeded for scheduled query

2020-07-01 Thread Aasha Medhi (Jira)


[ https://issues.apache.org/jira/browse/HIVE-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aasha Medhi reassigned HIVE-23790:
--


> The error message length of 2000 is exceeded for scheduled query
> 
>
> Key: HIVE-23790
> URL: https://issues.apache.org/jira/browse/HIVE-23790
> Project: Hive
> Issue Type: Task
> Reporter: Aasha Medhi
> Assignee: Aasha Medhi
> Priority: Major
>