[ 
https://issues.apache.org/jira/browse/METRON-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16576081#comment-16576081
 ] 

ASF GitHub Bot commented on METRON-1696:
----------------------------------------

Github user MohanDV commented on the issue:

    https://github.com/apache/metron/pull/1134
  
    @mmiklavc I see the 'storm' user as the owner of the pcap topology in a 
non-kerberized setup, whereas 'metron' is the owner in a kerberized setup. 
IMHO the pcap topology should have 'metron' as its owner in both cases; then 
setting the HDFS permissions to metron:hadoop will work in both environments. 
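    For reference, a minimal sketch of the permission change suggested above. 
The path is taken from the stack trace in this issue; the use of 'hdfs' as the 
HDFS superuser and the 775 mode are assumptions for a typical install, not part 
of this issue:

```shell
# Sketch only: run as the HDFS superuser (assumed to be 'hdfs' here) on a
# host with the HDFS client configured.
# Give the 'metron' user ownership of the pcap base path (path from the
# error log in this issue; adjust for your install).
sudo -u hdfs hdfs dfs -chown -R metron:hadoop /apps/metron/pcap
# Allow the owning group to write as well (775 is an assumed mode).
sudo -u hdfs hdfs dfs -chmod -R 775 /apps/metron/pcap
```

    With ownership set this way, the topology can write regardless of whether 
it runs as 'metron' directly or as a member of the 'hadoop' group.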


> Pcap parser fails to write pcap sequence file to hdfs on kerberized cluster 
> -----------------------------------------------------------------------------
>
>                 Key: METRON-1696
>                 URL: https://issues.apache.org/jira/browse/METRON-1696
>             Project: Metron
>          Issue Type: Bug
>            Reporter: Mohan
>            Assignee: Mohan
>            Priority: Major
>
> The pcap parser fails to write pcap sequence files to the HDFS directory 
> because the 'metron' user has insufficient privileges on the HDFS folder: 
> {code:java}
> 2018-07-25 10:15:50.035 o.a.m.s.p.HDFSWriterCallback 
> Thread-9-kafkaSpout-executor[3 3] [ERROR] Permission denied: user=metron, 
> access=WRITE, 
> inode="/apps/metron/pcap/pcap_pcap_1532513746365022000_0_pcap-20-1532414055":hdfs:hdfs:drwxr-xr-x
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:353)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:325)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:246)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1950)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1934)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1917)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2767)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2702)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2586)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:736)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)