Chown it to metron:hadoop and it'll work.  Storm is in the hadoop group, so
with mode 775 it will be able to write.
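
If you'd rather fix it at creation time instead of chown'ing after the fact,
here's a minimal sketch of the HdfsResource call with the group set (the
literal "hadoop" group value is an assumption about your cluster layout; pull
it from params if you carry it there):

self.__params.HdfsResource(self.__params.hdfs_metron_apps_extensions_working,
                           type="directory",
                           action="create_on_execute",
                           owner=self.__params.metron_user,
                           group="hadoop",  # assumption: storm is a member of hadoop
                           mode=0775)

With group write granted by the 775 mode, the storm user no longer falls
through to the r-x "other" bits.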

On Wed, Apr 12, 2017 at 7:56 AM, David Lyle <dlyle65...@gmail.com> wrote:

> It's curious to me that you're writing directly from parsing, but I suspect
> that your parsing topology is running as the storm user and it can't write
> to those directories.
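>
> Roughly, the check that's failing works like this (a conceptual Python
> sketch of the HDFS owner/group/other logic, not the real
> FSPermissionChecker API):
>
> def can_write(user, user_groups, owner, group, mode):
>     # FSPermissionChecker picks the owner, group, or "other" permission
>     # bits depending on who is asking
>     if user == owner:
>         return bool(mode & 0o200)   # owner write bit
>     if group in user_groups:
>         return bool(mode & 0o020)   # group write bit
>     return bool(mode & 0o002)       # "other" write bit
>
> # From your trace: user=storm, owner=metron, group=hdfs, mode 0775.
> # storm isn't the owner and isn't in hdfs, so the check falls through
> # to "other" = r-x, and WRITE is denied.
> print(can_write("storm", {"storm", "hadoop"}, "metron", "hdfs", 0o775))  # False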
>
> -D...
>
> On Wed, Apr 12, 2017 at 7:51 AM, Otto Fowler <ottobackwa...@gmail.com>
> wrote:
>
> > The indexing dir is created like this:
> >
> > self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,
> >                            type="directory",
> >                            action="create_on_execute",
> >                            owner=self.__params.metron_user,
> >                            group=self.__params.metron_group,
> >                            mode=0775,
> >                            )
> >
> >
> >
> >
> > On April 12, 2017 at 07:49:16, Otto Fowler (ottobackwa...@gmail.com)
> > wrote:
> >
> >
> > I am trying to write to HDFS from ParserBolt, but I’m getting the following exception:
> >
> > Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied: user=storm, access=WRITE, inode="/apps/metron/extension_working/framework":metron:hdfs:drwxrwxr-x
> >         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> >         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> >         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
> >         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
> >         at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
> >         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
> >         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
> >
> >
> > The HDFS directory is created like this:
> >
> > self.__params.HdfsResource(self.__params.hdfs_metron_apps_extensions_working,
> >                            type="directory",
> >                            action="create_on_execute",
> >                            owner=self.__params.metron_user,
> >                            mode=0775)
> >
> >
> > In the HDFS write handler I am logging in like this:
> >
> > // log in (handles keytab-based Kerberos login when configured)
> > HdfsSecurityUtil.login(stormConfig, fsConf);
> > // get a FileSystem handle from the same configuration
> > FileSystem fileSystem = FileSystem.get(fsConf);
> >
> > I am not sure what is different from the indexing HDFS writer setup here,
> > but what I’m doing is obviously not working.
> >
> > Any ideas?
> >
> >
> > - the branch:
> > https://github.com/ottobackwards/incubator-metron/tree/parser_deploy
> >
> > I am not up to date with master.
> >
>
