[GitHub] incubator-metron issue #521: METRON-835 Use Profiler with Kerberos

2017-04-12 Thread mmiklavc
Github user mmiklavc commented on the issue:

https://github.com/apache/incubator-metron/pull/521
  
I pulled that from our dev guidelines - 
https://cwiki.apache.org/confluence/display/METRON/Development+Guidelines

I'm OK with it, and I like the new formatting. What was the end result with
the Kafka ACL authorization problem you were seeing before? These instructions
still have the user creating the ACLs as the 'metron' user - did that work the
second time through?
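For reference, the ACL step in question looks roughly like the sketch below. This is a hedged reconstruction assembled from the Kerberos setup instructions elsewhere in this digest; the authorizer class and exact flags are assumptions, not a verbatim copy of the doc:

```
# Sketch: create Kafka ACLs for the Metron topics as the metron user.
# The authorizer class and ZooKeeper address are assumptions for full-dev.
export KERB_USER=metron
for topic in bro enrichments indexing snort; do
  ${HDP_HOME}/kafka-broker/bin/kafka-acls.sh \
    --authorizer kafka.security.auth.SimpleAclAuthorizer \
    --authorizer-properties zookeeper.connect=${ZOOKEEPER}:2181 \
    --add --allow-principal "User:${KERB_USER}" \
    --topic "${topic}"
done
```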




Re: [HELP!]Writing to HDFS from Storm

2017-04-12 Thread Otto Fowler
What Casey and I had talked about (and he actually implemented first with
Stellar) is having a single shared repository, without having to tie the
deployment to the machine.

This all still has to stand review so we’ll see.
Maybe Casey can comment.



On April 12, 2017 at 08:30:31, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

Makes sense, however, would it make sense to unpack to local file system,
which would then avoid further HDFS ops?


On 12 Apr 2017, at 13:24, Otto Fowler  wrote:

"The parsers are packaged as ‘bundles’ ( our version of NIFI NARS ) and are
deployed
if required.  So the writes for HDFS are the unpacking the bundles ( with
the unpacked jars being loaded into a classloader ).

If the unbundled is the same as the bundle, there is no op.  In this case,
it is first run.

So this is the parser bolt using the HDFS backed extension loader.”


This is part of the parser side-loading; it also feeds into the Stellar
load-from-HDFS work.

* metron has extensions
* extensions are packaged with their configurations and their ‘bundle’ (the NAR part)
* extension bundles are deployed to the HDFS lib directory
* parts of the system that use extensions discover and load them, with class loaders set up for the bundle
* as part of that, the bundles are unpacked into a ‘working’ area if required (so if you drop a new version into the lib, the next app will unpack the new version)
* thus, the bundle system needs to be able to write to the working areas in HDFS
* ???
* profit
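To make the unpack-if-changed idea concrete, here is a rough shell sketch of the working-area refresh. Illustrative only: the paths and the checksum-marker convention are made up for the example, and this is not the actual bundle API.

```
# Hypothetical layout: bundles live in LIB, unpacked copies live in WORK.
LIB=/apps/metron/extension_lib
WORK=/apps/metron/extension_working
for bundle in $(hdfs dfs -ls "${LIB}" 2>/dev/null | awk 'NR>1 {print $NF}'); do
  name=$(basename "${bundle}")
  new=$(hdfs dfs -checksum "${bundle}" | awk '{print $NF}')
  old=$(hdfs dfs -cat "${WORK}/${name}.checksum" 2>/dev/null)
  if [ "${new}" != "${old}" ]; then
    # This unpack is the HDFS write that fails when the topology user
    # lacks write access to the working area.
    echo "would unpack ${name} into ${WORK} and record checksum ${new}"
  fi
done
```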


On April 12, 2017 at 08:11:28, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

I’m curious: Otto, what’s your use case for writing to HDFS from a parser?

Simon

> On 12 Apr 2017, at 13:04, Justin Leet  wrote:
>
> Chown it to metron:hadoop and it'll work. Storm is in the hadoop group and
> with 775 will be able to write.
>
> On Wed, Apr 12, 2017 at 7:56 AM, David Lyle  wrote:
>
>> It's curious to me that you're writing directly from parsing, but I suspect
>> that your parsing topology is running as the storm user and it can't write
>> to those directories.
>>
>> -D...
>>
>> On Wed, Apr 12, 2017 at 7:51 AM, Otto Fowler 
>> wrote:
>>
>>> The indexing dir is created:
>>>
>>> self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,
>>> type="directory",
>>> action="create_on_execute",
>>> owner=self.__params.metron_user,
>>> group=self.__params.metron_group,
>>> mode=0775,
>>> )
>>>
>>>
>>>
>>>
>>> On April 12, 2017 at 07:49:16, Otto Fowler (ottobackwa...@gmail.com)
>>> wrote:
>>>
>>>
>>> I am trying to write to HDFS from ParserBolt, but I’m getting the following
>>> exception:
>>>
>>> Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied:
>>> user=storm, access=WRITE,
>>> inode="/apps/metron/extension_working/framework":metron:hdfs:drwxrwxr-x
>>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
>>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
>>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
>>> at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
>>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
>>> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>>>
>>>
>>> The HDFS directory is created as such:
>>>
>>> self.__params.HdfsResource(self.__params.hdfs_metron_apps_extensions_working,
>>> type="directory",
>>> action="create_on_execute",
>>> owner=self.__params.metron_user,
>>> mode=0775)
>>>
>>>
>>> As the hdfs write handlers I am logging in as such:
>>>
>>> HdfsSecurityUtil.login(stormConfig, fsConf);
>>> FileSystem fileSystem = FileSystem.get(fsConf);
>>>
>>> I am not sure what is different from the indexing hdfs writer setup here,
>>> but what I’m doing obviously is not working.

Re: [HELP!]Writing to HDFS from Storm

2017-04-12 Thread Otto Fowler
Thanks Justin,
I’m working on rebasing right now, then I’ll be able to try it out.


On April 12, 2017 at 08:26:08, Justin Leet (justinjl...@gmail.com) wrote:

We changed the HDFS owner of /apps/metron/indexing/indexed a bit ago to be
metron:hadoop, specifically because Storm wasn't writing to HDFS (and had
perms issues). If you look into the indexing output directories, my
expectation is that you aren't getting any data out (and the Storm indexing
topology is throwing permissions errors).

PR for that change: https://github.com/apache/incubator-metron/pull/488

Your branch has the old code:
https://github.com/ottobackwards/incubator-metron/blob/parser_deploy/metron-deployment/packaging/ambari/metron-mpack/src/main/resources/common-services/METRON/CURRENT/package/scripts/indexing_commands.py#L98
New code:
https://github.com/apache/incubator-metron/blob/master/metron-deployment/packaging/ambari/metron-mpack/src/main/resources/common-services/METRON/CURRENT/package/scripts/indexing_commands.py

To set up ownership on startup with the hadoop group, after you pull in
master (or at least make the changes from that PR):

self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,
type="directory",
action="create_on_execute",
owner=self.__params.metron_user,
group=self.__params.hadoop_group,
mode=0775,
)

Justin

On Wed, Apr 12, 2017 at 8:24 AM, Otto Fowler 
wrote:

> "The parsers are packaged as ‘bundles’ ( our version of NIFI NARS ) and
are
> deployed
> if required. So the writes for HDFS are the unpacking the bundles ( with
> the unpacked jars being loaded into a classloader ).
>
> If the unbundled is the same as the bundle, there is no op. In this case,
> it is first run.
>
> So this is the parser bolt using the HDFS backed extension loader.”
>
>
> This is part of the parser side loading, also feeds into the stellar load
> from hdfs stuff.
>
> * metron has extensions
> * extensions are packaged with their configurations and their ‘bundle’ (
> the nar part )
> * extension bundles are deployed to hdfs *lib directory
> * parts of the system that use extensions discover and load them, with
> class loaders setup for the bundle
> * part of that, the bundles are unpackaged into a ‘working’ area if
> required ( so if you drop a new version into the lib, the next app will
> unpack the new version )
> * thus, the bundle system needs to be able to write to the working areas
in
> hdfs
> * ???
> * profit
>
>
> On April 12, 2017 at 08:11:28, Simon Elliston Ball (
> si...@simonellistonball.com) wrote:
>
> I’m curious: Otto, what’s your use case for writing to HDFS from a parser?
>
> Simon
>
> > On 12 Apr 2017, at 13:04, Justin Leet  wrote:
> >
> > Chown it to metron:hadoop and it'll work. Storm is in the hadoop group and
> > with 775 will be able to write.
> >
> > On Wed, Apr 12, 2017 at 7:56 AM, David Lyle 
> wrote:
> >
> >> It's curious to me that you're writing directly from parsing, but I suspect
> >> that your parsing topology is running as the storm user and it can't write
> >> to those directories.
> >>
> >> -D...
> >>
> >> On Wed, Apr 12, 2017 at 7:51 AM, Otto Fowler 
> >> wrote:
> >>
> >>> The indexing dir is created:
> >>>
> >>> self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,
> >>> type="directory",
> >>> action="create_on_execute",
> >>> owner=self.__params.metron_user,
> >>> group=self.__params.metron_group,
> >>> mode=0775,
> >>> )
> >>>
> >>>
> >>>
> >>>
> >>> On April 12, 2017 at 07:49:16, Otto Fowler (ottobackwa...@gmail.com)
> >>> wrote:
> >>>
> >>>
> >>> I am trying to write to HDFS from ParserBolt, but I’m getting the following
> >>> exception:
> >>>
> >>> Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied:
> >>> user=storm, access=WRITE,
> >>> inode="/apps/metron/extension_working/framework":metron:hdfs:drwxrwxr-x
> >>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)

[GitHub] incubator-metron issue #510: METRON-821 Minor fixes in full dev kerberos set...

2017-04-12 Thread JonZeolla
Github user JonZeolla commented on the issue:

https://github.com/apache/incubator-metron/pull/510
  
Thanks, did some updates.  Still need to adjust the test steps to not send
to the `yaf` topic, but everything else should be done.
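E.g. pointing the validation step at the `bro` topic instead, roughly as follows (a sketch; `sample-bro.txt` is a hypothetical sample file mirroring the existing yaf step):

```
# Hypothetical sample file; mirrors the yaf console-producer step in the doc.
cat sample-bro.txt | ${HDP_HOME}/kafka-broker/bin/kafka-console-producer.sh \
  --broker-list node1:6667 --security-protocol PLAINTEXTSASL --topic bro
```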




[GitHub] incubator-metron pull request #510: METRON-821 Minor fixes in full dev kerbe...

2017-04-12 Thread JonZeolla
Github user JonZeolla commented on a diff in the pull request:

https://github.com/apache/incubator-metron/pull/510#discussion_r83194
  
--- Diff: metron-deployment/vagrant/Kerberos-setup.md ---
@@ -167,39 +167,48 @@ KafkaClient {
serviceName="kafka"
principal="met...@example.com";
 };
+EOF
   ```
 
 18. Create a storm.yaml with jaas file info. Set the array of nimbus hosts 
accordingly.
   ```
-[metron@node1 .storm]$ cat storm.yaml
+cat << EOF > storm.yaml
 nimbus.seeds : ['node1']
 java.security.auth.login.config : '/home/metron/.storm/client_jaas.conf'
 storm.thrift.transport : 
'org.apache.storm.security.auth.kerberos.KerberosSaslTransportPlugin'
+EOF
   ```
 
 19. Create an auxiliary storm configuration json file in the metron 
user’s home directory. Note the login config option in the file points to our 
custom client_jaas.conf.
   ```
-cd /home/metron
-[metron@node1 ~]$ cat storm-config.json
+cd
+cat << EOF > storm-config.json
 {
   "topology.worker.childopts" : 
"-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf"
 }
+EOF
   ```
 
 20. Setup enrichment and indexing.
 
 a. Modify enrichment.properties - 
`${METRON_HOME}/config/enrichment.properties`
 
 ```
-kafka.security.protocol=PLAINTEXTSASL
-topology.worker.childopts=-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf
+if [[ $EUID -ne 0 ]]; then
+echo "You must be root to run these commands"
+else
+sed -i 's/kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/' ${METRON_HOME}/config/enrichment.properties
+sed -i 's/topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/' ${METRON_HOME}/config/enrichment.properties
+fi
 ```
 
 b. Modify elasticsearch.properties - 
`${METRON_HOME}/config/elasticsearch.properties`
 
 ```
-kafka.security.protocol=PLAINTEXTSASL
-topology.worker.childopts=-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf
+sed -i 's/kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/' ${METRON_HOME}/config/elasticsearch.properties
+sed -i 's/topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/' ${METRON_HOME}/config/elasticsearch.properties
+su metron
--- End diff --

I'll move it down to the next step




[GitHub] incubator-metron pull request #510: METRON-821 Minor fixes in full dev kerbe...

2017-04-12 Thread JonZeolla
Github user JonZeolla commented on a diff in the pull request:

https://github.com/apache/incubator-metron/pull/510#discussion_r82531
  
--- Diff: metron-deployment/vagrant/Kerberos-setup.md ---
@@ -107,23 +107,23 @@ ${HDP_HOME}/kafka-broker/bin/kafka-topics.sh 
--zookeeper ${ZOOKEEPER}:2181 --cre
 12. Setup Kafka ACLs for the topics
   ```
 export KERB_USER=metron;
-for topic in bro enrichments indexing snort; do
+for topic in bro enrichments indexing snort yaf; do
--- End diff --

Yes, that was the reasoning.  I'm game either way; I assumed there was a
specific reason why yaf was used.  I'll update the instructions to account
for limited resources.




[GitHub] incubator-metron pull request #510: METRON-821 Minor fixes in full dev kerbe...

2017-04-12 Thread JonZeolla
Github user JonZeolla commented on a diff in the pull request:

https://github.com/apache/incubator-metron/pull/510#discussion_r81738
  
--- Diff: metron-deployment/vagrant/Kerberos-setup.md ---
@@ -263,5 +272,12 @@ cat sample-yaf.txt | 
${HDP_HOME}/kafka-broker/bin/kafka-console-producer.sh --br
 ${HDP_HOME}/kafka-broker/bin/kafka-console-consumer.sh --zookeeper 
${ZOOKEEPER}:2181 --security-protocol PLAINTEXTSASL --topic yaf
 ```
 
+# Modify the sensor-stubs to send logs via SASL
+```
sed -i 's/node1:6667 --topic/node1:6667 --security-protocol PLAINTEXTSASL --topic/' /opt/sensor-stubs/bin/start-*-stub
# Restart the appropriate sensor-stubs
for sensorstub in bro snort; do service sensor-stubs stop $sensorstub; service sensor-stubs start $sensorstub; done
--- End diff --

Thanks, I did try restart earlier and snagged an error, so I assumed it
needed a stop/start.




[GitHub] incubator-metron pull request #507: METRON-819: Document kafka console produ...

2017-04-12 Thread nickwallen
Github user nickwallen commented on a diff in the pull request:

https://github.com/apache/incubator-metron/pull/507#discussion_r64016
  
--- Diff: metron-deployment/vagrant/Kerberos-setup.md ---
@@ -221,6 +221,10 @@ curl -XGET "${ZOOKEEPER}:9200/yaf*/_count"
 
 25. You should have data flowing from the parsers all the way through to 
the indexes. This completes the Kerberization instructions
 
+### Sensors
+
+For sensors that leverage the Kafka console producer to pipe data into Metron, e.g. Snort and Yaf, you will need to modify the corresponding sensor shell script to append the SASL security protocol property. `--security-protocol SASL_PLAINTEXT`
+
--- End diff --

Should we call out the need to `kinit` beforehand?
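Something like the following, perhaps. The keytab path and principal below are placeholders, not values from these instructions:

```
# Placeholder keytab and principal -- substitute your environment's values.
KEYTAB=/etc/security/keytabs/metron.headless.keytab
PRINCIPAL="metron@EXAMPLE.COM"
kinit -kt "${KEYTAB}" "${PRINCIPAL}"
klist   # verify the ticket before running the console producer
```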




[GitHub] incubator-metron pull request #510: METRON-821 Minor fixes in full dev kerbe...

2017-04-12 Thread nickwallen
Github user nickwallen commented on a diff in the pull request:

https://github.com/apache/incubator-metron/pull/510#discussion_r62379
  
--- Diff: metron-deployment/vagrant/Kerberos-setup.md ---
@@ -167,39 +167,48 @@ KafkaClient {
serviceName="kafka"
principal="met...@example.com";
 };
+EOF
   ```
 
 18. Create a storm.yaml with jaas file info. Set the array of nimbus hosts 
accordingly.
   ```
-[metron@node1 .storm]$ cat storm.yaml
+cat << EOF > storm.yaml
 nimbus.seeds : ['node1']
 java.security.auth.login.config : '/home/metron/.storm/client_jaas.conf'
 storm.thrift.transport : 
'org.apache.storm.security.auth.kerberos.KerberosSaslTransportPlugin'
+EOF
   ```
 
 19. Create an auxiliary storm configuration json file in the metron 
user’s home directory. Note the login config option in the file points to our 
custom client_jaas.conf.
   ```
-cd /home/metron
-[metron@node1 ~]$ cat storm-config.json
+cd
+cat << EOF > storm-config.json
 {
   "topology.worker.childopts" : 
"-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf"
 }
+EOF
   ```
 
 20. Setup enrichment and indexing.
 
 a. Modify enrichment.properties - 
`${METRON_HOME}/config/enrichment.properties`
 
 ```
-kafka.security.protocol=PLAINTEXTSASL
-topology.worker.childopts=-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf
+if [[ $EUID -ne 0 ]]; then
+echo "You must be root to run these commands"
+else
+sed -i 's/kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/' ${METRON_HOME}/config/enrichment.properties
+sed -i 's/topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/' ${METRON_HOME}/config/enrichment.properties
+fi
 ```
 
 b. Modify elasticsearch.properties - 
`${METRON_HOME}/config/elasticsearch.properties`
 
 ```
-kafka.security.protocol=PLAINTEXTSASL
-topology.worker.childopts=-Djava.security.auth.login.config=/home/metron/.storm/client_jaas.conf
+sed -i 's/kafka.security.protocol=.*/kafka.security.protocol=PLAINTEXTSASL/' ${METRON_HOME}/config/elasticsearch.properties
+sed -i 's/topology.worker.childopts=.*/topology.worker.childopts=-Djava.security.auth.login.config=\/home\/metron\/.storm\/client_jaas.conf/' ${METRON_HOME}/config/elasticsearch.properties
+su metron
--- End diff --

Why `su metron; cd` here?  We could move them to the step that actually 
needs them done (maybe the next step) or call them out as a separate step.  
Their purpose is not very clear to me when we tack them onto the end of this 
step.




[GitHub] incubator-metron pull request #510: METRON-821 Minor fixes in full dev kerbe...

2017-04-12 Thread nickwallen
Github user nickwallen commented on a diff in the pull request:

https://github.com/apache/incubator-metron/pull/510#discussion_r63138
  
--- Diff: metron-deployment/vagrant/Kerberos-setup.md ---
@@ -263,5 +272,12 @@ cat sample-yaf.txt | 
${HDP_HOME}/kafka-broker/bin/kafka-console-producer.sh --br
 ${HDP_HOME}/kafka-broker/bin/kafka-console-consumer.sh --zookeeper 
${ZOOKEEPER}:2181 --security-protocol PLAINTEXTSASL --topic yaf
 ```
 
+# Modify the sensor-stubs to send logs via SASL
+```
sed -i 's/node1:6667 --topic/node1:6667 --security-protocol PLAINTEXTSASL --topic/' /opt/sensor-stubs/bin/start-*-stub
# Restart the appropriate sensor-stubs
for sensorstub in bro snort; do service sensor-stubs stop $sensorstub; service sensor-stubs start $sensorstub; done
--- End diff --

This can be even simpler: `service sensor-stubs restart bro snort`




[GitHub] incubator-metron pull request #510: METRON-821 Minor fixes in full dev kerbe...

2017-04-12 Thread nickwallen
Github user nickwallen commented on a diff in the pull request:

https://github.com/apache/incubator-metron/pull/510#discussion_r61348
  
--- Diff: metron-deployment/vagrant/Kerberos-setup.md ---
@@ -107,23 +107,23 @@ ${HDP_HOME}/kafka-broker/bin/kafka-topics.sh 
--zookeeper ${ZOOKEEPER}:2181 --cre
 12. Setup Kafka ACLs for the topics
   ```
 export KERB_USER=metron;
-for topic in bro enrichments indexing snort; do
+for topic in bro enrichments indexing snort yaf; do
--- End diff --

Did you want to start `yaf` because the instructions use the YAF topology 
for validation later on?

Another option is to not start YAF here (as we know resources are
constrained in Full/Quick Dev) and simply change the instructions below to
validate against Snort or Bro rather than YAF.






Re: [DISCUSS] Extracting Stellar as a component/module

2017-04-12 Thread Casey Stella
I'm OK with Google Docs as long as, when consensus is reached, it lives in
the wiki.

On Tue, Apr 11, 2017 at 6:35 PM, Matt Foley  wrote:

> I’ve copied it to the cwiki, but the thing is that cwiki only allows
> comments at the bottom.  With a long doc like this, that’s not very good.
> I’d much rather keep everyone’s comments in the same system, and local to
> the text they’re commenting on.
>
>
>
> Is it okay to leave this in google doc?
>
>
>
> If anyone can’t abide logging in to google, the cwiki version is here:
>
> https://cwiki.apache.org/confluence/display/METRON/Extracting+Stellar+into+an+Independent+Module
>
>
>
> Thanks,
>
> --Matt
>
>
>
> On 4/11/17, 3:15 PM, "Matt Foley"  wrote:
>
> No, actually you’re right.  Will have it moved over shortly.
>
> On 4/11/17, 2:56 PM, "Otto Fowler"  wrote:
>
> Nevermind
>
> On April 11, 2017 at 17:47:57, Otto Fowler (ottobackwa...@gmail.com) wrote:
>
> Can’t we do this in confluence?
>
> On April 11, 2017 at 17:38:40, Matt Foley (ma...@apache.org) wrote:
>
> Hi all,
>
> This is a new discussion thread, and if the proposed change is accepted by
> the community, it will be submitted to the next release, not the current
> 0.4.0 branch.
>
> Stellar has 126 verbs today, and seems only likely to continue growing.
> Furthermore, we expect Stellar to be extended by users, and probably grow
> into having one or more Registry/Repositories, etc. All this suggests that
> we should start viewing Stellar itself as a component, and make sure it is
> maintainable and has clean interfaces to the rest of the system. And that
> will be easier if we extract it into its own module, both in the code tree
> and in maven.
>
> I’ve written a combination proposal / discussion about how to extract
> Stellar from its current deep embed in Metron. Comments are welcome, and
> encouraged. Please read:
> https://docs.google.com/document/d/1EP7Jt4ePHe2A-_oboLl2QbN1muh7uKeET_kbpIgjcJM/edit#heading=h.4vsrmths49wk
>
> I believe I’ve set access so anyone can read and comment on it. However,
> google docs may still ask you to log in with a google-registered email
> address. If this is a problem for anyone, let me know and I can send you a
> Word document.
>
> Thanks,
> --Matt


[GitHub] incubator-metron issue #521: METRON-835 Use Profiler with Kerberos

2017-04-12 Thread nickwallen
Github user nickwallen commented on the issue:

https://github.com/apache/incubator-metron/pull/521
  
> We would normally want to push format changes to a separate PR because 
it's hard to follow what has changed here,

I don't know if we have a normal.  I've seen many instances go both ways
during the course of the project.  I can certainly spend the time to split out
the formatting, if you need me to.  If you don't like the formatting but want
the Kerberos instructions, that would be one reason for me to do so.

I had a hard time understanding what we were trying to accomplish with each
of the steps, which user the commands would be run under, etc., which is why I
split them under goal-oriented headings.  It also helped me generalize the
instructions to run in environments other than Full/Quick Dev.








Re: [HELP!]Writing to HDFS from Storm

2017-04-12 Thread Simon Elliston Ball
Makes sense, however, would it make sense to unpack to local file system, which 
would then avoid further HDFS ops?


> On 12 Apr 2017, at 13:24, Otto Fowler  wrote:
> 
> "The parsers are packaged as ‘bundles’ ( our version of NIFI NARS ) and are 
> deployed
> if required.  So the writes for HDFS are the unpacking the bundles ( with the 
> unpacked jars being loaded into a classloader ).
> 
> If the unbundled is the same as the bundle, there is no op.  In this case, it 
> is first run.
> 
> So this is the parser bolt using the HDFS backed extension loader.”
> 
> 
> This is part of the parser side loading, also feeds into the stellar load 
> from hdfs stuff.
> 
> * metron has extensions
> * extensions are packaged with their configurations and their ‘bundle’ ( the 
> nar part )
> * extension bundles are deployed to hdfs *lib directory
> * parts of the system that use extensions discover and load them, with class 
> loaders setup for the bundle
> * part of that, the bundles are unpackaged into a ‘working’ area if required 
> ( so if you drop a new version into the lib, the next app will unpack the new 
> version )
> * thus, the bundle system needs to be able to write to the working areas in 
> hdfs
> * ???
> * profit
> 
> 
> On April 12, 2017 at 08:11:28, Simon Elliston Ball 
> (si...@simonellistonball.com ) wrote:
> 
>> I’m curious: Otto, what’s your use case for writing to HDFS from a parser?
>> 
>> Simon 
>> 
>> > On 12 Apr 2017, at 13:04, Justin Leet wrote:
>> >  
>> > Chown it to metron:hadoop and it'll work. Storm is in the hadoop group and 
>> > with 775 will be able to write. 
>> >  
>> > On Wed, Apr 12, 2017 at 7:56 AM, David Lyle wrote:
>> >  
>> >> It's curious to me that you're writing directly from parsing, but I 
>> >> suspect 
>> >> that your parsing topology is running as the storm user and it can't 
>> >> write 
>> >> to those directories. 
>> >>  
>> >> -D... 
>> >>  
>> >> On Wed, Apr 12, 2017 at 7:51 AM, Otto Fowler wrote:
>> >>  
>> >>> The indexing dir is created: 
>> >>>  
>> >>> self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir, 
>> >>> type="directory", 
>> >>> action="create_on_execute", 
>> >>> owner=self.__params.metron_user, 
>> >>> group=self.__params.metron_group, 
>> >>> mode=0775, 
>> >>> ) 
>> >>>  
>> >>>  
>> >>>  
>> >>>  
>> >>> On April 12, 2017 at 07:49:16, Otto Fowler (ottobackwa...@gmail.com) wrote:
>> >>>  
>> >>>  
>> >>> I am trying to write to HDFS from ParserBolt, but I’m getting the following
>> >>> exception:
>> >>>
>> >>> Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied:
>> >>> user=storm, access=WRITE,
>> >>> inode="/apps/metron/extension_working/framework":metron:hdfs:drwxrwxr-x
>> >>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>> >>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>> >>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>> >>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
>> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
>> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
>> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>> >>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
>> >>> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
>> >>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
>> >>> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>> >>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>> >>>  
>> >>>  
>> >>> The HDFS directory is created as such: 
>> >>>  
>> >>> self.__params.HdfsResource(self.__params.hdfs_metron_apps_extensions_working,
>> >>> type="directory", 
>> >>> action="create_on_execute", 
>> >>> 

Re: [HELP!]Writing to HDFS from Storm

2017-04-12 Thread Justin Leet
We changed the HDFS owner of /apps/metron/indexing/indexed a bit ago to be
metron:hadoop, specifically because Storm wasn't writing to HDFS (and had
perms issues).  If you look into the indexing output directories, my
expectation is that you aren't getting any data out (and the Storm indexing
topology is throwing permissions errors).

PR for that change: https://github.com/apache/incubator-metron/pull/488

Your branch has the old code:
https://github.com/ottobackwards/incubator-metron/blob/parser_deploy/metron-deployment/packaging/ambari/metron-mpack/src/main/resources/common-services/METRON/CURRENT/package/scripts/indexing_commands.py#L98
New code:
https://github.com/apache/incubator-metron/blob/master/metron-deployment/packaging/ambari/metron-mpack/src/main/resources/common-services/METRON/CURRENT/package/scripts/indexing_commands.py

To set up ownership on startup with the hadoop group, after you pull in
master (or at least make the changes from that PR):

self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,
   type="directory",
   action="create_on_execute",
   owner=self.__params.metron_user,
   group=self.__params.hadoop_group,
   mode=0775,
   )

Justin

On Wed, Apr 12, 2017 at 8:24 AM, Otto Fowler 
wrote:

> "The parsers are packaged as ‘bundles’ ( our version of NIFI NARS ) and are
> deployed
> if required.  So the writes for HDFS are the unpacking the bundles ( with
> the unpacked jars being loaded into a classloader ).
>
> If the unbundled is the same as the bundle, there is no op.  In this case,
> it is first run.
>
> So this is the parser bolt using the HDFS backed extension loader.”
>
>
> This is part of the parser side loading, also feeds into the stellar load
> from hdfs stuff.
>
> * metron has extensions
> * extensions are packaged with their configurations and their ‘bundle’ (
> the nar part )
> * extension bundles are deployed to hdfs *lib directory
> * parts of the system that use extensions discover and load them, with
> class loaders setup for the bundle
> * part of that, the bundles are unpackaged into a ‘working’ area if
> required ( so if you drop a new version into the lib, the next app will
> unpack the new version )
> * thus, the bundle system needs to be able to write to the working areas in
> hdfs
> * ???
> * profit
>
>
> On April 12, 2017 at 08:11:28, Simon Elliston Ball (
> si...@simonellistonball.com) wrote:
>
> I’m curious: Otto, what’s your use case for writing to HDFS from a parser?
>
> Simon
>
> > On 12 Apr 2017, at 13:04, Justin Leet  wrote:
> >
> > Chown it to metron:hadoop and it'll work. Storm is in the hadoop group and
> > with 775 will be able to write.
> >
> > On Wed, Apr 12, 2017 at 7:56 AM, David Lyle 
> wrote:
> >
> >> It's curious to me that you're writing directly from parsing, but I suspect
> >> that your parsing topology is running as the storm user and it can't write
> >> to those directories.
> >>
> >> -D...
> >>
> >> On Wed, Apr 12, 2017 at 7:51 AM, Otto Fowler 
> >> wrote:
> >>
> >>> The indexing dir is created:
> >>>
> >>> self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,
> >>> type="directory",
> >>> action="create_on_execute",
> >>> owner=self.__params.metron_user,
> >>> group=self.__params.metron_group,
> >>> mode=0775,
> >>> )
> >>>
> >>>
> >>>
> >>>
> >>> On April 12, 2017 at 07:49:16, Otto Fowler (ottobackwa...@gmail.com)
> >>> wrote:
> >>>
> >>>
> >>> I am trying to write to HDFS from ParserBolt, but I’m getting the following
> >>> exception:
> >>>
> >>> Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied:
> >>> user=storm, access=WRITE,
> >>> inode="/apps/metron/extension_working/framework":metron:hdfs:drwxrwxr-x
> >>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)

Re: [HELP!]Writing to HDFS from Storm

2017-04-12 Thread Otto Fowler
"The parsers are packaged as ‘bundles’ ( our version of NIFI NARS ) and are
deployed
if required.  So the writes for HDFS are the unpacking the bundles ( with
the unpacked jars being loaded into a classloader ).

If the unbundled is the same as the bundle, there is no op.  In this case,
it is first run.

So this is the parser bolt using the HDFS backed extension loader.”


This is part of the parser side loading, also feeds into the stellar load
from hdfs stuff.

* metron has extensions
* extensions are packaged with their configurations and their ‘bundle’ (
the nar part )
* extension bundles are deployed to hdfs *lib directory
* parts of the system that use extensions discover and load them, with
class loaders setup for the bundle
* part of that, the bundles are unpackaged into a ‘working’ area if
required ( so if you drop a new version into the lib, the next app will
unpack the new version )
* thus, the bundle system needs to be able to write to the working areas in
hdfs
* ???
* profit


On April 12, 2017 at 08:11:28, Simon Elliston Ball (
si...@simonellistonball.com) wrote:

I’m curious: Otto, what’s your use case for writing to HDFS from a parser?

Simon

> On 12 Apr 2017, at 13:04, Justin Leet  wrote:
>
> Chown it to metron:hadoop and it'll work. Storm is in the hadoop group and
> with 775 will be able to write.
>
> On Wed, Apr 12, 2017 at 7:56 AM, David Lyle  wrote:
>
>> It's curious to me that you're writing directly from parsing, but I suspect
>> that your parsing topology is running as the storm user and it can't write
>> to those directories.
>>
>> -D...
>>
>> On Wed, Apr 12, 2017 at 7:51 AM, Otto Fowler 
>> wrote:
>>
>>> The indexing dir is created:
>>>
>>> self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,
>>> type="directory",
>>> action="create_on_execute",
>>> owner=self.__params.metron_user,
>>> group=self.__params.metron_group,
>>> mode=0775,
>>> )
>>>
>>>
>>>
>>>
>>> On April 12, 2017 at 07:49:16, Otto Fowler (ottobackwa...@gmail.com)
>>> wrote:
>>>
>>>
>>> I am trying to write to HDFS from ParserBolt, but I’m getting the following
>>> exception:
>>>
>>> Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied:
>>> user=storm, access=WRITE,
>>> inode="/apps/metron/extension_working/framework":metron:hdfs:drwxrwxr-x
>>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
>>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
>>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
>>> at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
>>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
>>> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>>>
>>>
>>> The HDFS directory is created as such:
>>>
>>> self.__params.HdfsResource(self.__params.hdfs_metron_apps_extensions_working,
>>> type="directory",
>>> action="create_on_execute",
>>> owner=self.__params.metron_user,
>>> mode=0775)
>>>
>>>
>>> As the hdfs write handlers I am logging in as such:
>>>
>>> HdfsSecurityUtil.login(stormConfig, fsConf);
>>> FileSystem fileSystem = FileSystem.get(fsConf);
>>>
>>> I am not sure what is different from the indexing hdfs writer setup here,
>>> but what I’m doing obviously is not working.
>>>
>>> Any ideas?
>>>
>>>
>>> - the branch:
>>> https://github.com/ottobackwards/incubator-metron/tree/parser_deploy
>>>
>>> I am not up to date with master.
>>>
>>


Re: [HELP!]Writing to HDFS from Storm

2017-04-12 Thread Simon Elliston Ball
I’m curious: Otto, what’s your use case for writing to HDFS from a parser?

Simon

> On 12 Apr 2017, at 13:04, Justin Leet  wrote:
> 
> Chown it to metron:hadoop and it'll work.  Storm is in the hadoop group and
> with 775 will be able to write.
> 
> On Wed, Apr 12, 2017 at 7:56 AM, David Lyle  wrote:
> 
>> It's curious to me that you're writing directly from parsing, but I suspect
>> that your parsing topology is running as the storm user and it can't write
>> to those directories.
>> 
>> -D...
>> 
>> On Wed, Apr 12, 2017 at 7:51 AM, Otto Fowler 
>> wrote:
>> 
>>> The indexing dir is created:
>>> 
>>> self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,
>>>   type="directory",
>>>   action="create_on_execute",
>>>   owner=self.__params.metron_user,
>>>   group=self.__params.metron_group,
>>>   mode=0775,
>>>   )
>>> 
>>> 
>>> 
>>> 
>>> On April 12, 2017 at 07:49:16, Otto Fowler (ottobackwa...@gmail.com)
>>> wrote:
>>> 
>>> 
>>> I am trying to write to HDFS from ParserBolt, but I’m getting the following
>>> exception:
>>>
>>> Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied:
>>> user=storm, access=WRITE,
>>> inode="/apps/metron/extension_working/framework":metron:hdfs:drwxrwxr-x
>>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>>> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
>>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
>>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
>>> at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
>>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
>>> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>>> 
>>> 
>>> The HDFS directory is created as such:
>>> 
>>> self.__params.HdfsResource(self.__params.hdfs_metron_apps_extensions_working,
>>>   type="directory",
>>>   action="create_on_execute",
>>>   owner=self.__params.metron_user,
>>>   mode=0775)
>>> 
>>> 
>>> As the hdfs write handlers I am logging in as such:
>>> 
>>> HdfsSecurityUtil.login(stormConfig, fsConf);
>>> FileSystem fileSystem = FileSystem.get(fsConf);
>>> 
>>> I am not sure what is different from the indexing hdfs writer setup here,
>>> but what I’m doing obviously is not working.
>>> 
>>> Any ideas?
>>> 
>>> 
>>> - the branch:
>>> https://github.com/ottobackwards/incubator-metron/tree/parser_deploy
>>> 
>>> I am not up to date with master.
>>> 
>> 



Re: [HELP!]Writing to HDFS from Storm

2017-04-12 Thread Otto Fowler
So group=hadoop?

How does the indexing bolt work? It is metron:metron.
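One way to check what the topologies actually run up against (an illustrative command; both paths come from earlier in this thread):

```
# Inspect owner/group on the indexing output dir and the extension working area.
hdfs dfs -ls -d /apps/metron/indexing/indexed /apps/metron/extension_working
```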


On April 12, 2017 at 08:04:47, Justin Leet (justinjl...@gmail.com) wrote:

Chown it to metron:hadoop and it'll work. Storm is in the hadoop group and
with 775 will be able to write.

On Wed, Apr 12, 2017 at 7:56 AM, David Lyle  wrote:

> It's curious to me that you're writing directly from parsing, but I suspect
> that your parsing topology is running as the storm user and it can't write
> to those directories.
>
> -D...
>
> On Wed, Apr 12, 2017 at 7:51 AM, Otto Fowler 
> wrote:
>
> > The indexing dir is created:
> >
> > self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,
> > type="directory",
> > action="create_on_execute",
> > owner=self.__params.metron_user,
> > group=self.__params.metron_group,
> > mode=0775,
> > )
> >
> >
> >
> >
> > On April 12, 2017 at 07:49:16, Otto Fowler (ottobackwa...@gmail.com)
> > wrote:
> >
> >
> > I am trying to write to HDFS from ParserBolt, but I’m getting the following
> > exception:
> >
> > Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied:
> > user=storm, access=WRITE,
> > inode="/apps/metron/extension_working/framework":metron:hdfs:drwxrwxr-x
> > at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> > at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> > at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
> > at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
> > at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
> > at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
> > at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
> > at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
> >
> >
> > The HDFS directory is created as such:
> >
> > self.__params.HdfsResource(self.__params.hdfs_metron_apps_extensions_working,
> > type="directory",
> > action="create_on_execute",
> > owner=self.__params.metron_user,
> > mode=0775)
> >
> >
> > As the hdfs write handlers I am logging in as such:
> >
> > HdfsSecurityUtil.login(stormConfig, fsConf);
> > FileSystem fileSystem = FileSystem.get(fsConf);
> >
> > I am not sure what is different from the indexing hdfs writer setup here,
> > but what I’m doing obviously is not working.
> >
> > Any ideas?
> >
> >
> > - the branch:
> > https://github.com/ottobackwards/incubator-metron/tree/parser_deploy
> >
> > I am not up to date with master.
> >
>


Re: [HELP!]Writing to HDFS from Storm

2017-04-12 Thread Otto Fowler
The parsers are packaged as ‘bundles’ (our version of NiFi NARs) and are
deployed if required.  So the HDFS writes are the unpacking of the bundles
(with the unpacked jars being loaded into a classloader).

If the unbundled content is the same as the bundle, it is a no-op.  In this
case, it is the first run.

So this is the parser bolt using the HDFS-backed extension loader.



On April 12, 2017 at 07:56:35, David Lyle (dlyle65...@gmail.com) wrote:

It's curious to me that you're writing directly from parsing, but I suspect  
that your parsing topology is running as the storm user and it can't write  
to those directories.  

-D...  

On Wed, Apr 12, 2017 at 7:51 AM, Otto Fowler   
wrote:  

> The indexing dir is created:  
>  
> self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,  
> type="directory",  
> action="create_on_execute",  
> owner=self.__params.metron_user,  
> group=self.__params.metron_group,  
> mode=0775,  
> )  
>  
>  
>  
>  
> On April 12, 2017 at 07:49:16, Otto Fowler (ottobackwa...@gmail.com)  
> wrote:  
>  
>  
> I am trying to write to HDFS from ParserBolt, but I’m getting the following  
> exception:  
>  
> Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied:
> user=storm, access=WRITE,
> inode="/apps/metron/extension_working/framework":metron:hdfs:drwxrwxr-x
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
> at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>  
>  
> The HDFS directory is created as such:  
>  
> self.__params.HdfsResource(self.__params.hdfs_metron_apps_extensions_working,
> type="directory",  
> action="create_on_execute",  
> owner=self.__params.metron_user,  
> mode=0775)  
>  
>  
> As the hdfs write handlers I am logging in as such:  
>  
> HdfsSecurityUtil.login(stormConfig, fsConf);  
> FileSystem fileSystem = FileSystem.get(fsConf);  
>  
> I am not sure what is different from the indexing hdfs writer setup here,  
> but what I’m doing obviously is not working.  
>  
> Any ideas?  
>  
>  
> - the branch:  
> https://github.com/ottobackwards/incubator-metron/tree/parser_deploy  
>  
> I am not up to date with master.  
>  


Re: [HELP!]Writing to HDFS from Storm

2017-04-12 Thread Justin Leet
Chown it to metron:hadoop and it'll work.  Storm is in the hadoop group and
with 775 will be able to write.
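Concretely, something like this (a hedged sketch; the path comes from the permission error, and running the chown as the hdfs superuser is an assumption):

```
# Hand the working area to metron:hadoop so Storm (hadoop group) can write.
sudo -u hdfs hdfs dfs -chown -R metron:hadoop /apps/metron/extension_working
sudo -u hdfs hdfs dfs -chmod -R 775 /apps/metron/extension_working
```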

On Wed, Apr 12, 2017 at 7:56 AM, David Lyle  wrote:

> It's curious to me that you're writing directly from parsing, but I suspect
> that your parsing topology is running as the storm user and it can't write
> to those directories.
>
> -D...
>
> On Wed, Apr 12, 2017 at 7:51 AM, Otto Fowler 
> wrote:
>
> > The indexing dir is created:
> >
> > self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,
> >type="directory",
> >action="create_on_execute",
> >owner=self.__params.metron_user,
> >group=self.__params.metron_group,
> >mode=0775,
> >)
> >
> >
> >
> >
> > On April 12, 2017 at 07:49:16, Otto Fowler (ottobackwa...@gmail.com)
> > wrote:
> >
> >
> > I am trying to write to HDFS from ParserBolt, but I’m getting the following
> > exception:
> >
> > Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied:
> > user=storm, access=WRITE,
> > inode="/apps/metron/extension_working/framework":metron:hdfs:drwxrwxr-x
> > at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> > at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> > at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
> > at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
> > at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
> > at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
> > at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
> > at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
> >
> >
> > The HDFS directory is created as such:
> >
> > self.__params.HdfsResource(self.__params.hdfs_metron_apps_extensions_working,
> >type="directory",
> >action="create_on_execute",
> >owner=self.__params.metron_user,
> >mode=0775)
> >
> >
> > As the hdfs write handlers I am logging in as such:
> >
> > HdfsSecurityUtil.login(stormConfig, fsConf);
> > FileSystem fileSystem = FileSystem.get(fsConf);
> >
> > I am not sure what is different from the indexing hdfs writer setup here,
> > but what I’m doing obviously is not working.
> >
> > Any ideas?
> >
> >
> > - the branch:
> > https://github.com/ottobackwards/incubator-metron/tree/parser_deploy
> >
> > I am not up to date with master.
> >
>


Re: [HELP!]Writing to HDFS from Storm

2017-04-12 Thread David Lyle
It's curious to me that you're writing directly from parsing, but I suspect
that your parsing topology is running as the storm user and it can't write
to those directories.

-D...

On Wed, Apr 12, 2017 at 7:51 AM, Otto Fowler 
wrote:

> The indexing dir is created:
>
> self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,
>type="directory",
>action="create_on_execute",
>owner=self.__params.metron_user,
>group=self.__params.metron_group,
>mode=0775,
>)
>
>
>
>
> On April 12, 2017 at 07:49:16, Otto Fowler (ottobackwa...@gmail.com)
> wrote:
>
>
> I am trying to write to HDFS from ParserBolt, but I’m getting the following
> exception:
>
> Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied:
> user=storm, access=WRITE,
> inode="/apps/metron/extension_working/framework":metron:hdfs:drwxrwxr-x
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
> at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>
>
> The HDFS directory is created as such:
>
> self.__params.HdfsResource(self.__params.hdfs_metron_
> apps_extensions_working,
>type="directory",
>action="create_on_execute",
>owner=self.__params.metron_user,
>mode=0775)
>
>
> As the hdfs write handlers I am logging in as such:
>
> HdfsSecurityUtil.login(stormConfig, fsConf);
> FileSystem fileSystem = FileSystem.get(fsConf);
>
> I am not sure what is different from the indexing hdfs writer setup here,
> but what I’m doing obviously is not working.
>
> Any ideas?
>
>
> - the branch:
> https://github.com/ottobackwards/incubator-metron/tree/parser_deploy
>
> I am not up to date with master.
>


Re: [HELP!]Writing to HDFS from Storm

2017-04-12 Thread Otto Fowler
This is in FULL_DEV vagrant




Re: [HELP!]Writing to HDFS from Storm

2017-04-12 Thread Otto Fowler
The indexing dir is created:

self.__params.HdfsResource(self.__params.metron_apps_indexed_hdfs_dir,
                           type="directory",
                           action="create_on_execute",
                           owner=self.__params.metron_user,
                           group=self.__params.metron_group,
                           mode=0775,
                           )






[HELP!]Writing to HDFS from Storm

2017-04-12 Thread Otto Fowler
I am trying to write to HDFS from ParserBolt, but I’m getting the following
exception:

Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied:
user=storm, access=WRITE,
inode="/apps/metron/extension_working/framework":metron:hdfs:drwxrwxr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)


The HDFS directory is created like this:

self.__params.HdfsResource(self.__params.hdfs_metron_apps_extensions_working,
                           type="directory",
                           action="create_on_execute",
                           owner=self.__params.metron_user,
                           mode=0775)


In the HDFS write handlers I am logging in like this:

HdfsSecurityUtil.login(stormConfig, fsConf);
FileSystem fileSystem = FileSystem.get(fsConf);

I am not sure what is different from the indexing HDFS writer setup here,
but what I’m doing obviously is not working.

Any ideas?


- the branch:
https://github.com/ottobackwards/incubator-metron/tree/parser_deploy

I am not up to date with master.
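
Comparing the two resource definitions above: the indexing directory sets an
explicit group, while the extensions working directory does not, so it keeps
the default group (hdfs, per the inode in the exception), and the topology,
which runs as the storm user on a non-kerberized full-dev box, is denied
WRITE. A minimal fix sketch, assuming metron_group resolves to a group the
storm user actually belongs to (e.g. hadoop; membership should be verified on
the cluster):

self.__params.HdfsResource(self.__params.hdfs_metron_apps_extensions_working,
                           type="directory",
                           action="create_on_execute",
                           owner=self.__params.metron_user,
                           # assumption: this group contains the storm user;
                           # with mode 0775, group members can then create
                           # subdirectories below this directory
                           group=self.__params.metron_group,
                           mode=0775)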


[GitHub] incubator-metron pull request #526: Metron-846: Add E2E tests for metron man...

2017-04-12 Thread iraghumitra
GitHub user iraghumitra opened a pull request:

https://github.com/apache/incubator-metron/pull/526

Metron-846: Add E2E tests for metron management ui ( Do not merge )

## Contributor Comments
This PR adds e2e test support for the management UI. We have a few e2e tests in 
the management UI already, but they do not work on quick-dev-platform. The PR 
makes significant changes to the existing e2e code, so I would ask that the 
e2e tests be reviewed in their entirety.

- The e2e tests are written to run on quick-dev-platform
- Prerequisites for running the e2e tests are documented in the metron-config README
- Steps to run the e2e tests are documented in the metron-config README 


## Pull Request Checklist

Thank you for submitting a contribution to Apache Metron (Incubating).  
Please refer to our [Development 
Guidelines](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61332235)
 for the complete guide to follow for contributions.  
Please refer also to our [Build Verification 
Guidelines](https://cwiki.apache.org/confluence/display/METRON/Verifying+Builds?show-miniview)
 for complete smoke testing guides.  


In order to streamline the review of the contribution we ask you follow 
these guidelines and ask you to double check the following:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? If not one needs to 
be created at [Metron 
Jira](https://issues.apache.org/jira/browse/METRON/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel).
 
- [x] Does your PR title start with METRON-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.
- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?


### For code changes:
- [x] Have you included steps to reproduce the behavior or problem that is 
being changed or addressed?
- [x] Have you included steps or a guide to how the change may be verified 
and tested manually?
- [x] Have you ensured that the full suite of tests and checks have been 
executed in the root incubating-metron folder via:
  ```
  mvn -q clean integration-test install && build_utils/verify_licenses.sh 
  ```

- [x] Have you written or updated unit tests and or integration tests to 
verify your changes?
- [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [x] Have you verified the basic functionality of the build by building 
and running locally with Vagrant full-dev environment or the equivalent?

### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in 
which it is rendered by building and verifying the site-book? If not then run 
the following commands and the verify changes via 
`site-book/target/site/index.html`:

  ```
  cd site-book
  bin/generate-md.sh
  mvn site:site
  ```

 Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.
It is also recommended that [travis-ci](https://travis-ci.org) is set up for 
your personal repository such that your branches are built there before 
submitting a pull request.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/iraghumitra/incubator-metron METRON-846

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-metron/pull/526.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #526


commit b7d310ede8785f411c9a4518207dfa3ef77983b3
Author: rmerriman 
Date:   2016-10-19T15:24:52Z

Initial implementation of REST service

commit 77e79aba34e992c951ee804d918aea1e70b638ec
Author: rmerriman 
Date:   2016-10-19T15:31:18Z

added newline at the end of application.yml

commit b042dfdce8e12bf320ff5dcd7b46edf68c66a302
Author: rmerriman 
Date:   2016-10-24T16:07:21Z

Added logging configuration and fixed SLF4J warnings.

commit 2012369e3094dcc0d15365b914005d035e938824
Author: rmerriman 
Date:   2016-11-18T23:11:40Z

Metron Docker implementation

commit d9ea03eeafb6971390899c93bad73ee00216b3f6
Author: rmerriman 
Date:   2016-11-18T23:19:13Z

Cleaned up comments and added newlines to the end of files

commit b73e8084ba6c9299227bea8085d34731dabcdd88
Author: rmerriman 
Date:   2016-11-21T17:34:14Z

Merge branch 'METRON-503' into middleware

commit 

[GitHub] incubator-metron pull request #516: METRON-830 Adding StringFunctions to Ste...

2017-04-12 Thread anandsubbu
Github user anandsubbu commented on a diff in the pull request:

https://github.com/apache/incubator-metron/pull/516#discussion_r111098953
  
--- Diff: metron-platform/metron-common/src/main/java/org/apache/metron/common/dsl/functions/StringFunctions.java ---
@@ -343,4 +343,89 @@ public Object apply(List<Object> args) {
       return String.format(format, formatArgs);
     }
   }
+
+  @Stellar( name="CHOP"
+          , description = "Remove the last character from a String"
+          , params = { "the String to chop last character from, may be null" }
+          , returns = "String without last character, null if null String input"
+          )
+  public static class chop extends BaseStellarFunction {
+
+    @Override
+    public Object apply(List<Object> strings) {
+
+      if(strings.size() == 0) {
+        throw new IllegalArgumentException("[CHOP] missing argument: string to be chopped");
--- End diff --

Hi @mattf-horton, with the above change where we check for `strings.get(0) 
== null || strings.get(0).toString().length() == 0`, the following test 
results in a failure:

`Assert.assertEquals("abc", run("CHOP(msg)", ImmutableMap.of("msg", 
"abc\r\n")));`

Pardon my limited understanding, but this is what I inferred:
- The `if` condition for the null and zero-length check occurs twice.
- In the `msg` test case, where we check for `\r\n`, the initial content of 
`strings` is null and the variable substitution for `msg` happens 
subsequently, so this ends up failing the first check.

Because of this, I will modify the checks to only verify `strings == null` 
and `strings.size() == 0`. Please let me know if there is a better way this 
can be done.
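
For reference, a minimal sketch of the reordered guards described above 
(hypothetical, not the final PR code), assuming the implementation delegates 
to commons-lang's `StringUtils.chop`, which also strips a trailing `\r\n` as 
a single line ending:

@Override
public Object apply(List<Object> strings) {
  // Only a genuinely missing argument is an error.
  if (strings == null || strings.size() == 0) {
    throw new IllegalArgumentException("[CHOP] missing argument: string to be chopped");
  }
  Object var = strings.get(0);
  // A null value (e.g. a variable before substitution) yields null rather than throwing.
  if (var == null) {
    return null;
  }
  // StringUtils.chop removes the last character, or a trailing "\r\n" pair.
  return org.apache.commons.lang3.StringUtils.chop(var.toString());
}

With these guards, `CHOP(msg)` with `msg` bound to "abc\r\n" returns "abc", 
while a null or unbound `msg` returns null instead of throwing.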




[GitHub] incubator-metron pull request #516: METRON-830 Adding StringFunctions to Ste...

2017-04-12 Thread anandsubbu
Github user anandsubbu commented on a diff in the pull request:

https://github.com/apache/incubator-metron/pull/516#discussion_r111075935
  
--- Diff: metron-platform/metron-common/README.md ---
@@ -167,6 +172,14 @@ The `!=` operator is the negation of the above.
 | [ `WEEK_OF_YEAR`](#week_of_year) |
 | [ `YEAR`](#year) |
 
+### `APPEND_IF_MISSING`
+  * Description: Appends the suffix to the end of the string if the string does not already end with any of the suffixes.
+  * Input:
+    * string - The string to be appended.
+    * suffix - The string suffix to append to the end of the string.
+    * suffixes - Optional - Additional string suffixes that are valid terminators.
--- End diff --

Okay, I will retain `additionalsuffix` for now. Created METRON-847 to track 
the enhancement.
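
For reference, a hypothetical usage sketch of the documented behavior (the 
results assume the commons-lang `appendIfMissing` semantics the function 
presumably wraps, and are not taken from the PR's tests):

APPEND_IF_MISSING('apache', '.org')              returns 'apache.org'
APPEND_IF_MISSING('apache.com', '.org', '.com')  returns 'apache.com' ('.com' is already a valid terminator)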

