[jira] [Created] (METRON-2332) Enable Tuning of the Profiler's Parallelism from Ambari

2019-12-04 Thread Nick Allen (Jira)
Nick Allen created METRON-2332:
--

 Summary: Enable Tuning of the Profiler's Parallelism from Ambari
 Key: METRON-2332
 URL: https://issues.apache.org/jira/browse/METRON-2332
 Project: Metron
  Issue Type: Improvement
Reporter: Nick Allen
Assignee: Nick Allen


When running the Streaming Profiler in Storm, tuning the parallelism of each 
component requires a user to edit the Flux file. A user should be able to tune 
the parallelism from within Ambari like the other topologies.
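
As a point of reference, a user today hand-edits a stanza of the Profiler's 
Flux file that looks roughly like the following sketch (the bolt id, class 
name, and value here are illustrative, not the exact contents of the file):
{code:java}
bolts:
  - id: "profileBuilderBolt"
    className: "org.apache.metron.profiler.storm.ProfileBuilderBolt"
    # currently only changeable by editing the Flux file; this should
    # instead be driven by an Ambari-managed property like the other
    # topologies
    parallelism: 1
{code}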



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (METRON-594) Replay Telemetry Data through Profiler

2019-12-04 Thread Nick Allen (Jira)


[ 
https://issues.apache.org/jira/browse/METRON-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16988280#comment-16988280
 ] 

Nick Allen commented on METRON-594:
---

Closing as duplicate of METRON-1699.

> Replay Telemetry Data through Profiler
> --
>
> Key: METRON-594
> URL: https://issues.apache.org/jira/browse/METRON-594
> Project: Metron
>  Issue Type: Improvement
>Reporter: Nick Allen
>Priority: Major
>
> The Profiler currently consumes live telemetry, in real-time, as it is 
> streamed through Metron.  A useful extension of this functionality would 
> allow the Profiler to also consume archived, historical telemetry.  Allowing 
> a user to selectively replay archived, historical raw telemetry through the 
> Profiler has a number of applications. The following use cases help describe 
> why this might be useful.
> Use Case 1 - Model Development
> When developing a new model, I often need a feature set of historical data on 
> which to train my model.  I can either wait days, weeks, or months for the 
> Profiler to generate this from live data, or I can re-run the raw, 
> historical telemetry through the Profiler and get started immediately.  It is 
> much simpler to use the same mechanism to create this historical data set 
> than to use a separate batch-driven tool to recreate something that 
> approximates the historical feature set.
> Use Case 2 - Model Deployment 
> When deploying an analytical model to a new environment, like production, on 
> day 1 there is often no historical data for the model to work with.  This 
> often leaves a gap between when the model is deployed and when that model is 
> actually useful.  If I could replay raw telemetry through the Profiler, a 
> historical feature set could be created as part of the deployment process.  
> This allows my model to start functioning on day 1.
> Use Case 3 - Profile Validation
> When creating a Profile, it is difficult to understand how the configured 
> profile might behave against the entire data set.  By creating the profile 
> and watching it consume real-time streaming data, I only have an 
> understanding of how it behaves on that small segment of data.  If I am able 
> to replay historical telemetry, I can instantly understand how it behaves on 
> a much larger data set, including all the anomalies and exceptions that 
> exist in all large data sets.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (METRON-594) Replay Telemetry Data through Profiler

2019-12-04 Thread Nick Allen (Jira)


[ 
https://issues.apache.org/jira/browse/METRON-594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16988278#comment-16988278
 ] 

Nick Allen commented on METRON-594:
---

This is a duplicate of METRON-1699.

> Replay Telemetry Data through Profiler
> --
>
> Key: METRON-594
> URL: https://issues.apache.org/jira/browse/METRON-594
> Project: Metron
>  Issue Type: Improvement
>Reporter: Nick Allen
>Priority: Major
>
> The Profiler currently consumes live telemetry, in real-time, as it is 
> streamed through Metron.  A useful extension of this functionality would 
> allow the Profiler to also consume archived, historical telemetry.  Allowing 
> a user to selectively replay archived, historical raw telemetry through the 
> Profiler has a number of applications. The following use cases help describe 
> why this might be useful.
> Use Case 1 - Model Development
> When developing a new model, I often need a feature set of historical data on 
> which to train my model.  I can either wait days, weeks, or months for the 
> Profiler to generate this from live data, or I can re-run the raw, 
> historical telemetry through the Profiler and get started immediately.  It is 
> much simpler to use the same mechanism to create this historical data set 
> than to use a separate batch-driven tool to recreate something that 
> approximates the historical feature set.
> Use Case 2 - Model Deployment 
> When deploying an analytical model to a new environment, like production, on 
> day 1 there is often no historical data for the model to work with.  This 
> often leaves a gap between when the model is deployed and when that model is 
> actually useful.  If I could replay raw telemetry through the Profiler, a 
> historical feature set could be created as part of the deployment process.  
> This allows my model to start functioning on day 1.
> Use Case 3 - Profile Validation
> When creating a Profile, it is difficult to understand how the configured 
> profile might behave against the entire data set.  By creating the profile 
> and watching it consume real-time streaming data, I only have an 
> understanding of how it behaves on that small segment of data.  If I am able 
> to replay historical telemetry, I can instantly understand how it behaves on 
> a much larger data set, including all the anomalies and exceptions that 
> exist in all large data sets.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2143) Travis Build Fails to Download Maven

2019-12-04 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2143:
---
Fix Version/s: Next + 1

> Travis Build Fails to Download Maven
> 
>
> Key: METRON-2143
> URL: https://issues.apache.org/jira/browse/METRON-2143
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Fix For: Next + 1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code:java}
> 0.50s$ wget 
> https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip
> --2019-05-22 21:24:16--  
> https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip
> Resolving archive.apache.org (archive.apache.org)... 163.172.17.199
> Connecting to archive.apache.org (archive.apache.org)|163.172.17.199|:443... 
> connected.
> HTTP request sent, awaiting response... 503 Service Unavailable
> 2019-05-22 21:24:17 ERROR 503: Service Unavailable.
> The command "wget 
> https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip"
>  failed and exited with 8 during .
> Your build has been stopped.{code}
> See [https://travis-ci.org/apache/metron/jobs/535942660] for an example.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2155) Cache Maven in Travis CI Builds

2019-12-04 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2155:
---
Fix Version/s: Next + 1

> Cache Maven in Travis CI Builds
> ---
>
> Key: METRON-2155
> URL: https://issues.apache.org/jira/browse/METRON-2155
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Fix For: Next + 1
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> In the Travis CI builds, we download Maven and even retry up to 10 times 
> should this download fail.  We continue to see some failures when downloading 
> Maven.
> [https://api.travis-ci.org/v3/job/540955869/log.txt]
>  
> This could be a problem within Travis' internal networks or this could be an 
> intermittent issue with the Apache mirrors.  We have no way of knowing and no 
> good way to resolve these sorts of problems.
> We could cache the Maven installation between builds.  Caching would not 
> truly resolve the underlying problem, but it should make these failures far 
> less frequent.
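> One possible sketch of this, assuming Travis' standard cache support (the 
> cached directory names are illustrative):
> {code:java}
> # .travis.yml (sketch): keep the Maven installation between builds so
> # the Apache mirror is only contacted on a cache miss
> cache:
>   directories:
>     - $HOME/.m2
>     - $HOME/apache-maven-3.3.9
> {code}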
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2168) Elasticsearch Updates Not Tested in Integration Test

2019-12-04 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2168:
---
Fix Version/s: Next + 1

> Elasticsearch Updates Not Tested in Integration Test
> 
>
> Key: METRON-2168
> URL: https://issues.apache.org/jira/browse/METRON-2168
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Fix For: Next + 1
>
> Attachments: test-failure.log
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The `ElasticsearchUpdateIntegrationTest` is not testing that the 
> `ElasticsearchDao` can update and retrieve values in a manner similar to what 
> would occur in production.
> h3. What?
> Within the Elasticsearch index, the test fails to define the 'guid' field to 
> be of type 'keyword', instead the type is defaulted to 'text'.  In a 
> production setting this mistake would prevent any documents from being found 
> by guid.  Unfortunately, the test passes despite this. The test needs to 
> match the behavior of what a user would experience in production.
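> For illustration, the mapping the test should declare looks roughly like 
> this fragment (index layout abbreviated; only the relevant field is shown):
> {code:java}
> "mappings": {
>   "properties": {
>     "guid": { "type": "keyword" }
>   }
> }
> {code}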
> h3. Why? 
> These problems arise because of the way the test is set up.  Instead of 
> directly testing an `ElasticsearchDao`, as you might expect, this test runs 
> against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
> `HBaseDao`. On retrievals the `MultiIndexDao` will return the document from 
> whichever index responds first.
> With the current test setup, the underlying `ElasticsearchDao` will never 
> retrieve the document that the test case expects.  In all cases where the 
> test passes, the document is returned from the `HBaseDao`, which is just 
> interacting with a mock backend.  The test needs to verify that we can 
> update and retrieve documents from Elasticsearch.
> h3. Proof?
>  If you alter the test to run against just an `ElasticsearchDao`, it fails 
> as shown in the attached log file.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2172) Solr Updates Not Tested in Integration Test

2019-12-04 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2172:
---
Fix Version/s: Next + 1

> Solr Updates Not Tested in Integration Test
> ---
>
> Key: METRON-2172
> URL: https://issues.apache.org/jira/browse/METRON-2172
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Fix For: Next + 1
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The `SolrUpdateIntegrationTest` is not testing that the `SolrDao` can update 
> and retrieve values in a manner similar to what would occur in production.
> h3. What?
> This gap in the integration test is hiding a few existing bugs that the 
> `SolrUpdateIntegrationTest` should catch.
>  # The timestamp is not being populated in the returned documents.
>  # A NullPointerException occurs if a document does not contain a sensor type.
>  # The comments are serialized and deserialized in multiple places and may be 
> stored as either a Map or a JSON string (a defensive-handling sketch follows).
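> {code:java}
> // Defensive-handling sketch for bug #3. The helpers 'fromMap' and
> // 'fromJson' are illustrative names, not actual Metron methods: the
> // stored comment may arrive as either a Map or a JSON string, so the
> // deserialization path must tolerate both shapes.
> Object raw = document.get("comments");
> if (raw instanceof Map) {
>   comments.add(fromMap((Map<String, Object>) raw));
> } else if (raw instanceof String) {
>   comments.add(fromJson((String) raw));
> }
> {code}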
> h3. Proof?
>  If you alter the test to run against just a `SolrDao`, the integration test 
> will fail.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2254) Intermittent Test Failure in RestFunctionsIntegrationTest

2019-12-04 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2254:
---
Fix Version/s: Next + 1

> Intermittent Test Failure in RestFunctionsIntegrationTest
> -
>
> Key: METRON-2254
> URL: https://issues.apache.org/jira/browse/METRON-2254
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Fix For: Next + 1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:java}
> Tests run: 21, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.368 sec 
> <<< FAILURE! - in 
> org.apache.metron.stellar.dsl.functions.RestFunctionsIntegrationTest
> restGetShouldTimeoutWithSuppliedTimeout(org.apache.metron.stellar.dsl.functions.RestFunctionsIntegrationTest)
>   Time elapsed: 0.349 sec  <<< FAILURE!
> java.lang.AssertionError: expected null, but was:<{get=success}>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:755)
>   at org.junit.Assert.assertNull(Assert.java:737)
>   at org.junit.Assert.assertNull(Assert.java:747)
>   at 
> org.apache.metron.stellar.dsl.functions.RestFunctionsIntegrationTest.restGetShouldTimeoutWithSuppliedTimeout(RestFunctionsIntegrationTest.java:279)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.mockserver.junit.MockServerRule$1.evaluate(MockServerRule.java:107)
>   at org.mockserver.junit.ProxyRule$1.evaluate(ProxyRule.java:102)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Results :
> Failed tests: 
>   RestFunctionsIntegrationTest.restGetShouldTimeoutWithSuppliedTimeout:279 
> expected null, but was:<{get=success}>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2298) "User Base DN" Missing from Security Configuration

2019-12-04 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2298:
---
Fix Version/s: Next + 1

> "User Base DN" Missing from Security Configuration
> --
>
> Key: METRON-2298
> URL: https://issues.apache.org/jira/browse/METRON-2298
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Fix For: Next + 1
>
> Attachments: Screen Shot 2019-10-22 at 5.28.07 PM.png, Screen Shot 
> 2019-10-22 at 5.30.12 PM.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The "User Base DN" is not visible on Metron's Security configuration tab in 
> Ambari.  This property should be visible along with the other LDAP properties.
> This property is currently only visible under "Advanced metron-security-env".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2303) Change Default HDFS Port for Batch Profiler

2019-12-04 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2303:
---
Fix Version/s: Next + 1

> Change Default HDFS Port for Batch Profiler
> ---
>
> Key: METRON-2303
> URL: https://issues.apache.org/jira/browse/METRON-2303
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Fix For: Next + 1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The Batch Profiler properties currently default to port 9000 for HDFS.  This 
> should be changed to port 8020 to match the default port used with HDP.
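> For example, the default should reference the HDP port, roughly like this 
> (the property name mirrors batch-profiler.properties; the path itself is 
> illustrative):
> {code:java}
> profiler.batch.input.path=hdfs://localhost:8020/apps/metron/indexing/indexed
> {code}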



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2308) Fix 'Degrees' Example Profile

2019-12-04 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2308:
---
Fix Version/s: Next + 1

> Fix 'Degrees' Example Profile
> -
>
> Key: METRON-2308
> URL: https://issues.apache.org/jira/browse/METRON-2308
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Fix For: Next + 1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> One of the example profiles uses "in" as a variable name.  This is now a 
> reserved keyword in Stellar and cannot be used as a variable name.  The 
> example should be updated so that it works again.
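> As a sketch of the fix (the actual 'Degrees' profile definition differs), 
> any occurrence of the variable must be renamed, for example:
> {code:java}
> // before: "in" is now a reserved keyword in Stellar
> "update": { "in": "STATS_ADD(in, 1)" }
> // after: use a non-reserved variable name
> "update": { "in_degree": "STATS_ADD(in_degree, 1)" }
> {code}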



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2164) Remove the Split-Join Enrichment Topology

2019-12-04 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2164:
---
Fix Version/s: Next + 1

> Remove the Split-Join Enrichment Topology
> -
>
> Key: METRON-2164
> URL: https://issues.apache.org/jira/browse/METRON-2164
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Fix For: Next + 1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The Split-Join Enrichment topology has been deprecated since November 2018. 
> Metron defaults to using the Unified Enrichment topology. 
> Here is the original discuss thread on deprecation.
> [https://lists.apache.org/thread.html/6cfc883de28a5cb41f26d0523522d4b93272ac954e5713c80a35675e@%3Cdev.metron.apache.org%3E]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2321) Remove Legacy AWS Deployment Path

2019-12-04 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2321:
---
Fix Version/s: Next + 1

> Remove Legacy AWS Deployment Path
> -
>
> Key: METRON-2321
> URL: https://issues.apache.org/jira/browse/METRON-2321
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Fix For: Next + 1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The automated AWS deployment mechanism (metron-deployment/amazon-ec2) should 
> be removed.  It is no longer the preferred installation path for deploying to 
> AWS.
> There are many problems with maintaining this deployment method.
>  * It deploys an insecure cluster by default.
>  * It produces a cluster that does not survive a reboot.
>  * It is difficult for the community to support a code path that is not free.
>  * It is a point of confusion for new Metron users.
> If a user wants to deploy to AWS, they should launch their EC2 nodes, install 
> Ambari, and then use the MPack to deploy Metron.  That is the preferred 
> installation path for AWS.
> This has been discussed a few times on the mailing list.
>  * 
> [https://lists.apache.org/thread.html/4ba83a634a10ecc9380d4592a6fe568dfaf6a1b5da433cc84b129e6a@%3Cdev.metron.apache.org%3E]
>  * 
> [https://lists.apache.org/thread.html/79d18eae3569fc38dce5b34839bc340c2c0f7d4b523d208e80fa2dc2@%3Cuser.metron.apache.org%3E]
>  * 
> [https://lists.apache.org/thread.html/580df44bf1bba83687db8b6af6d44ead116af43f968b2e9c5d8e86bf@%3Cuser.metron.apache.org%3E]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-885) Build RPMs in Travis

2019-12-02 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-885:
--
Issue Type: New Feature  (was: Bug)

> Build RPMs in Travis
> 
>
> Key: METRON-885
> URL: https://issues.apache.org/jira/browse/METRON-885
> Project: Metron
>  Issue Type: New Feature
>Reporter: Otto Fowler
>Priority: Major
>
> Our Travis builds do not build RPMs, but they should.  I am not sure how to 
> get that working, as our Travis infrastructure is Ubuntu-based.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-885) Build RPMs in Travis

2019-12-02 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-885:
--
Summary: Build RPMs in Travis  (was: Metron Travis builds do not build RPMs)

> Build RPMs in Travis
> 
>
> Key: METRON-885
> URL: https://issues.apache.org/jira/browse/METRON-885
> Project: Metron
>  Issue Type: Bug
>Reporter: Otto Fowler
>Priority: Major
>
> Our Travis builds do not build RPMs, but they should.  I am not sure how to 
> get that working, as our Travis infrastructure is Ubuntu-based.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2271) Reorganize Travis Build

2019-12-02 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2271:
---
Fix Version/s: Next + 1

> Reorganize Travis Build
> ---
>
> Key: METRON-2271
> URL: https://issues.apache.org/jira/browse/METRON-2271
> Project: Metron
>  Issue Type: Improvement
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Fix For: Next + 1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The way in which we organize the Travis builds ensures that each job will 
> build all of Metron, each time.  Although we attempt to cache the local Maven 
> repository to avoid this, in practice all jobs start roughly in parallel and 
> so this cache is never used.
> We are increasingly hitting the 50-minute time limit on our integration test 
> job, which causes the Travis build to fail.  By reorganizing the build so 
> that each job only builds the modules that it needs, we should be able to 
> save some time and avoid breaching this limit.
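> A sketch of the idea, using Maven's standard -pl/-am flags (the job name 
> and module path are illustrative):
> {code:java}
> # .travis.yml (sketch): each job builds only its own slice
> jobs:
>   include:
>     - name: "integration tests"
>       # build the module under test plus the modules it depends on,
>       # instead of the entire project
>       script: mvn -q verify -pl metron-platform/metron-indexing -am
> {code}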
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (METRON-2330) Document Profiler "'global'" object

2019-12-02 Thread Nick Allen (Jira)


[ 
https://issues.apache.org/jira/browse/METRON-2330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16986319#comment-16986319
 ] 

Nick Allen commented on METRON-2330:


This example uses a Stellar expression that returns the same value for all 
messages applied to the profile.  It uses the String "global" only because the 
profile is building a single, global profile measurement/value.  We could just 
as easily have used any other expression that returns the same value for all 
messages, like 'cheese' or 'dima'.
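
For example, a profile shaped roughly like the one on that page (a sketch; the 
profile name here is made up, and the 'value' field comes from the page's own 
description) builds one global statistical state, because the constant 
expression groups every message into a single entity:
{code:java}
{
  "profile": "global-stats",
  "foreach": "'global'",
  "init":    { "s": "STATS_INIT()" },
  "update":  { "s": "STATS_ADD(s, value)" },
  "result":  "s"
}
{code}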

Hope that helps

> Document Profiler "'global'" object
> ---
>
> Key: METRON-2330
> URL: https://issues.apache.org/jira/browse/METRON-2330
> Project: Metron
>  Issue Type: Improvement
>Affects Versions: 1.7.1
>Reporter: Dima Kovalyov
>Priority: Minor
>
> Dear Metron community,
>  
> "[Statistics and Mathematical 
> Functions|[https://metron.apache.org/current-book/metron-analytics/metron-statistics/index.html]];
>  page makes use of:
> {code:java}
> "foreach": "'global'"
> {code}
> But nowhere on the internet was I able to find any description of what it 
> is, how it works, or how to troubleshoot it.
> The page mentions "We will capture a global statistical state for the 
> {{value}} field and we will look back for a 5 minute window when computing 
> the median." from which I can guess that 'global' represents the entire 
> message instead of any particular field.
> Can you please shed some more light on it?
> In the comments, I'll post an example of my wrestling with it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2326) Unable to Call ENRICHMENT_GET from Threat Triage Rule Reason Field

2019-11-21 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2326:
---
Summary: Unable to Call ENRICHMENT_GET from Threat Triage Rule Reason Field 
 (was: Unable to Call ENRICHMENT_GET from Threat Triage Rule 'Reason' Field)

> Unable to Call ENRICHMENT_GET from Threat Triage Rule Reason Field
> --
>
> Key: METRON-2326
> URL: https://issues.apache.org/jira/browse/METRON-2326
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>
> A Threat Triage Rule's "reason" field can contain executable Stellar to 
> provide an operator with context as to why a rule fired during Threat Triage.  
> I am unable to call any function that requires a StellarContext during 
> initialization from the 'reason' field of a Threat Triage Rule.  For 
> example, I cannot call `ENRICHMENT_GET`.
> h3. Steps to Replicate
> 1. Create a simple file called `user.csv`.
> {code:java}
> [root@node1 ~]# cat user.csv
>  jdoe,192.168.138.2
>  jane,192.168.66.1
>  ciana,192.168.138.158
>  danixa,95.163.121.204
>  jim,192.168.66.121
> {code}
> 2. Create a file called `user-extractor.json`.
> {code:java}
> {
>   "config": {
>     "columns": {
>       "user": 0,
>       "ip": 1
>     },
>     "indicator_column": "ip",
>     "separator": ",",
>     "type": "user"
>   },
>   "extractor": "CSV"
> }
> {code}
> 3. Import the enrichment data.
> {code:java}
> source /etc/default/metron
>  $METRON_HOME/bin/flatfile_loader.sh -i ./user.csv -t enrichment -c t -e 
> ./user-extractor.json
> {code}
> 4. Validate that the enrichment loaded successfully.
>  {code:java}
>  [root@node1 0.7.2]# source /etc/default/metron
>  [root@node1 0.7.2]# $METRON_HOME/bin/stellar -z $ZOOKEEPER
>  
>  [Stellar]>>> ip_dst_addr := "192.168.138.2"
>  192.168.138.2
>  
>  [Stellar]>>> ENRICHMENT_GET('user', ip_dst_addr, 'enrichment', 't')
>  {ip=192.168.138.2, user=jdoe}
> {code}
> 5. Create a threat triage rule that attempts an ENRICHMENT_GET.
> {code}
>  [Stellar]>>> conf := SHELL_EDIT()
> {
>   "enrichment": {
>     "fieldMap": {
>       "stellar": {
>         "config": {
>           "is_alert": "true"
>         }
>       }
>     },
>     "fieldToTypeMap": {},
>     "config": {}
>   },
>   "threatIntel": {
>     "fieldMap": {},
>     "fieldToTypeMap": {},
>     "config": {},
>     "triageConfig": {
>       "riskLevelRules": [
>         {
>           "name": "Rule",
>           "comment": "This rule does not work when executing the 'reason' field.",
>           "rule": "true",
>           "reason": "FORMAT('Call to ENRICHMENT_GET=%s', ENRICHMENT_GET('user', ip_dst_addr, 'enrichment', 't'))",
>           "score": "100"
>         }
>       ],
>       "aggregator": "MAX",
>       "aggregationConfig": {}
>     }
>   },
>   "configuration": {}
> }
>  
>  [Stellar]>>> CONFIG_PUT("ENRICHMENT", conf, "snort")
> {code}
>  
> 6. The Storm worker logs for Enrichment show the following error.
>  {code:java}
>  2019-11-21 03:54:34.370 o.a.c.f.r.c.TreeCache Curator-TreeCache-4 [ERROR]
>  org.apache.metron.jackson.databind.JsonMappingException: Unable to find 
> capability GLOBAL_CONFIG; it may not be available in your context.
>  at [Source: java.io.ByteArrayInputStream@1f55bdda; line: 24, column: 11] 
> (through reference chain: 
> org.apache.metron.common.configuration.enrichment.SensorEnrichmentConfig["threatIntel"]->org.apache.metron.common.configuration.enrichment.threatintel.ThreatIntelConfig["triageConfig"]->org.apache.metron.common.configuration.enrichment.threatintel.ThreatTriageConfig["riskLevelRules"])
>  at 
> org.apache.metron.jackson.databind.JsonMappingException.from(JsonMappingException.java:262)
>  ~[stormjar.jar:?]
>  at 
> org.apache.metron.jackson.databind.deser.SettableBeanProperty._throwAsIOE(SettableBeanProperty.java:537)
>  ~[stormjar.jar:?]
>  at 
> org.apache.metron.jackson.databind.deser.SettableBeanProperty._throwAsIOE(SettableBeanProperty.java:518)
>  ~[stormjar.jar:?]
>  at 
> org.apache.metron.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
>  ~[stormjar.jar:?]
>  at 
> org.apache.metron.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:260)
>  ~[stormjar.jar:?]
>  at 
> org.apache.metron.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:125)
>  ~[stormjar.jar:?]
>  at 
> org.apache.metron.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:490)
>  ~[stormjar.jar:?]
>  at 
> org.apache.metron.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:95)
>  ~[stormjar.jar:?]
>  at 
> org.apache.metron.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:260)
>  ~[stormjar.jar:?]
>  at 
> org.apache.metron.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:125)
>  ~[stormjar.jar:?]
>  at 
> 

[jira] [Created] (METRON-2326) Unable to Call ENRICHMENT_GET from Threat Triage Rule 'Reason' Field

2019-11-21 Thread Nick Allen (Jira)
Nick Allen created METRON-2326:
--

 Summary: Unable to Call ENRICHMENT_GET from Threat Triage Rule 
'Reason' Field
 Key: METRON-2326
 URL: https://issues.apache.org/jira/browse/METRON-2326
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen
Assignee: Nick Allen


A Threat Triage Rule's "reason" field can contain executable Stellar to provide 
an operator with context as to why a rule fired during Threat Triage.  I am 
unable to call any function that requires a StellarContext during 
initialization from the 'reason' field of a Threat Triage Rule.  For example, 
I cannot call `ENRICHMENT_GET`.

h3. Steps to Replicate
1. Create a simple file called `user.csv`.
{code:java}
[root@node1 ~]# cat user.csv
 jdoe,192.168.138.2
 jane,192.168.66.1
 ciana,192.168.138.158
 danixa,95.163.121.204
 jim,192.168.66.121
{code}

2. Create a file called `user-extractor.json`.
{code:java}
{
  "config": {
    "columns": {
      "user": 0,
      "ip": 1
    },
    "indicator_column": "ip",
    "separator": ",",
    "type": "user"
  },
  "extractor": "CSV"
}
{code}

3. Import the enrichment data.
{code:java}
source /etc/default/metron
 $METRON_HOME/bin/flatfile_loader.sh -i ./user.csv -t enrichment -c t -e 
./user-extractor.json
{code}

4. Validate that the enrichment loaded successfully.
 {code:java}
 [root@node1 0.7.2]# source /etc/default/metron
 [root@node1 0.7.2]# $METRON_HOME/bin/stellar -z $ZOOKEEPER
 
 [Stellar]>>> ip_dst_addr := "192.168.138.2"
 192.168.138.2
 
 [Stellar]>>> ENRICHMENT_GET('user', ip_dst_addr, 'enrichment', 't')
 {ip=192.168.138.2, user=jdoe}
{code}

5. Create a threat triage rule that attempts an ENRICHMENT_GET.
{code}
 [Stellar]>>> conf := SHELL_EDIT()
{
  "enrichment": {
    "fieldMap": {
      "stellar": {
        "config": {
          "is_alert": "true"
        }
      }
    },
    "fieldToTypeMap": {},
    "config": {}
  },
  "threatIntel": {
    "fieldMap": {},
    "fieldToTypeMap": {},
    "config": {},
    "triageConfig": {
      "riskLevelRules": [
        {
          "name": "Rule",
          "comment": "This rule does not work when executing the 'reason' field.",
          "rule": "true",
          "reason": "FORMAT('Call to ENRICHMENT_GET=%s', ENRICHMENT_GET('user', ip_dst_addr, 'enrichment', 't'))",
          "score": "100"
        }
      ],
      "aggregator": "MAX",
      "aggregationConfig": {}
    }
  },
  "configuration": {}
}
 
 [Stellar]>>> CONFIG_PUT("ENRICHMENT", conf, "snort")
{code}
 
6. The Storm worker logs for Enrichment show the following error.
 {code:java}
 2019-11-21 03:54:34.370 o.a.c.f.r.c.TreeCache Curator-TreeCache-4 [ERROR]
 org.apache.metron.jackson.databind.JsonMappingException: Unable to find 
capability GLOBAL_CONFIG; it may not be available in your context.
 at [Source: java.io.ByteArrayInputStream@1f55bdda; line: 24, column: 11] 
(through reference chain: 
org.apache.metron.common.configuration.enrichment.SensorEnrichmentConfig["threatIntel"]->org.apache.metron.common.configuration.enrichment.threatintel.ThreatIntelConfig["triageConfig"]->org.apache.metron.common.configuration.enrichment.threatintel.ThreatTriageConfig["riskLevelRules"])
 at 
org.apache.metron.jackson.databind.JsonMappingException.from(JsonMappingException.java:262)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.SettableBeanProperty._throwAsIOE(SettableBeanProperty.java:537)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.SettableBeanProperty._throwAsIOE(SettableBeanProperty.java:518)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:260)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:125)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:490)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:95)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:260)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:125)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:490)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:95)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:260)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:125)
 ~[stormjar.jar:?]
 at 
org.apache.metron.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3807)
 ~[stormjar.jar:?]
 at 

[jira] [Updated] (METRON-2321) Remove Legacy AWS Deployment Path

2019-11-19 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2321:
---
Summary: Remove Legacy AWS Deployment Path  (was: Remove Legacy 
"amazon-aws" Deployment Path)

> Remove Legacy AWS Deployment Path
> -
>
> Key: METRON-2321
> URL: https://issues.apache.org/jira/browse/METRON-2321
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>
> The automated AWS deployment mechanism (metron-deployment/amazon-ec2) should 
> be removed.  It is no longer the preferred installation path for deploying to 
> AWS.
> There are many problems with maintaining this deployment method.
>  * It deploys an insecure cluster by default.
>  * It produces a cluster that does not survive a reboot.
>  * It is difficult for the community to support a code path that is not free.
>  * It is a point of confusion for new Metron users.
> If a user wants to deploy to AWS, they should launch their EC2 nodes, install 
> Ambari, and then use the MPack to deploy Metron.  That is the preferred 
> installation path for AWS.
> This has been discussed a few times on the mailing list.
>  * 
> [https://lists.apache.org/thread.html/4ba83a634a10ecc9380d4592a6fe568dfaf6a1b5da433cc84b129e6a@%3Cdev.metron.apache.org%3E]
>  * 
> [https://lists.apache.org/thread.html/79d18eae3569fc38dce5b34839bc340c2c0f7d4b523d208e80fa2dc2@%3Cuser.metron.apache.org%3E]
>  * 
> [https://lists.apache.org/thread.html/580df44bf1bba83687db8b6af6d44ead116af43f968b2e9c5d8e86bf@%3Cuser.metron.apache.org%3E]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (METRON-2321) Remove Legacy "amazon-aws" Deployment Path

2019-11-19 Thread Nick Allen (Jira)
Nick Allen created METRON-2321:
--

 Summary: Remove Legacy "amazon-aws" Deployment Path
 Key: METRON-2321
 URL: https://issues.apache.org/jira/browse/METRON-2321
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen
Assignee: Nick Allen


The automated AWS deployment mechanism (metron-deployment/amazon-ec2) should be 
removed.  It is no longer the preferred installation path for deploying to AWS.

There are many problems with maintaining this deployment method.
 * It deploys an insecure cluster by default.
 * It produces a cluster that does not survive a reboot.
 * It is difficult for the community to support a code path that is not free.
 * It is a point of confusion for new Metron users.

If a user wants to deploy to AWS, they should launch their EC2 nodes, install 
Ambari, and then use the MPack to deploy Metron.  That is the preferred 
installation path for AWS.

This has been discussed a few times on the mailing list.
 * 
[https://lists.apache.org/thread.html/4ba83a634a10ecc9380d4592a6fe568dfaf6a1b5da433cc84b129e6a@%3Cdev.metron.apache.org%3E]
 * 
[https://lists.apache.org/thread.html/79d18eae3569fc38dce5b34839bc340c2c0f7d4b523d208e80fa2dc2@%3Cuser.metron.apache.org%3E]
 * 
[https://lists.apache.org/thread.html/580df44bf1bba83687db8b6af6d44ead116af43f968b2e9c5d8e86bf@%3Cuser.metron.apache.org%3E]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (METRON-2285) Batch Profiler Cannot Persist Data Sketches

2019-11-18 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen reassigned METRON-2285:
--

Assignee: Nick Allen

> Batch Profiler Cannot Persist Data Sketches
> ---
>
> Key: METRON-2285
> URL: https://issues.apache.org/jira/browse/METRON-2285
> Project: Metron
>  Issue Type: Bug
>Affects Versions: 0.7.1
>Reporter: Maxim Dashenko
>Assignee: Nick Allen
>Priority: Major
>
> Used command:
> {code}
> /usr/hdp/current/spark2-client/bin/spark-submit --class 
> org.apache.metron.profiler.spark.cli.BatchProfilerCLI --properties-file 
> /usr/hcp/current/metron/config/batch-profiler.properties 
> ~/metron-profiler-spark-0.7.1.1.9.1.0-6.jar --config 
> /usr/hcp/current/metron/config/batch-profiler.properties --profiles 
> ~/profiler.json
> {code}
>  cat /usr/hcp/current/metron/config/batch-profiler.properties
> {code}
> profiler.batch.input.path=/tmp/test_data.logs
> profiler.batch.input.format=json
> profiler.period.duration=15
> profiler.period.duration.units=MINUTES
> {code}
>  
> cat ~/profiler.json
> {code}
> {
>"profiles":[
>   {
>  "profile":"batchteststat",
>  "onlyif":"source.type == 'testsource' and devicehostname == 
> 'windows9.something.com'",
>  "foreach":"devicehostname",
>  "update":{
> "s":"STATS_ADD(s, devicehostname)"
>  },
>  "result":{
> "profile":"s"
>  }
>   }
>],
>"timestampField":"timestamp"
> }
> {code}
> cat test_data.logs
> {code}
> {"devicehostname": "windows9.something.com", "timestamp": 1567241981000, 
> "source.type": "testsource"}
> {code}
> The command raises an exception:
> {code}
> Exception in thread "main" org.apache.spark.SparkException: Job aborted due 
> to stage failure: Task 68 in stage 8.0 failed 1 times, most recent failure: 
> Lost task 68.0 in stage 8.0 (TID 274, localhost, executor driver): 
> com.esotericsoftware.kryo.KryoException: Unable to find class: 
> org.apache.metron.statistics.OnlineStatisticsProvider
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
>   at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:781)
>   at 
> org.apache.metron.common.utils.SerDeUtils.fromBytes(SerDeUtils.java:262)
>   at 
> org.apache.metron.profiler.spark.ProfileMeasurementAdapter.toProfileMeasurement(ProfileMeasurementAdapter.java:85)
>   at 
> org.apache.metron.profiler.spark.function.HBaseWriterFunction.call(HBaseWriterFunction.java:124)
>   at org.apache.spark.sql.Dataset$$anonfun$48.apply(Dataset.scala:2266)
>   at org.apache.spark.sql.Dataset$$anonfun$48.apply(Dataset.scala:2266)
>   at 
> org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:196)
>   at 
> org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:193)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>   at org.apache.spark.scheduler.Task.run(Task.scala:108)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.metron.statistics.OnlineStatisticsProvider
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at 

[jira] [Updated] (METRON-2285) Batch Profiler Cannot Persist Data Sketches

2019-11-18 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2285:
---
Summary: Batch Profiler Cannot Persist Data Sketches  (was: Metron Profiler 
for Spark - Stellar function STATS_ADD can't be used)

> Batch Profiler Cannot Persist Data Sketches
> ---
>
> Key: METRON-2285
> URL: https://issues.apache.org/jira/browse/METRON-2285
> Project: Metron
>  Issue Type: Bug
>Affects Versions: 0.7.1
>Reporter: Maxim Dashenko
>Priority: Major
>
> Used command:
> {code}
> /usr/hdp/current/spark2-client/bin/spark-submit --class 
> org.apache.metron.profiler.spark.cli.BatchProfilerCLI --properties-file 
> /usr/hcp/current/metron/config/batch-profiler.properties 
> ~/metron-profiler-spark-0.7.1.1.9.1.0-6.jar --config 
> /usr/hcp/current/metron/config/batch-profiler.properties --profiles 
> ~/profiler.json
> {code}
>  cat /usr/hcp/current/metron/config/batch-profiler.properties
> {code}
> profiler.batch.input.path=/tmp/test_data.logs
> profiler.batch.input.format=json
> profiler.period.duration=15
> profiler.period.duration.units=MINUTES
> {code}
>  
> cat ~/profiler.json
> {code}
> {
>"profiles":[
>   {
>  "profile":"batchteststat",
>  "onlyif":"source.type == 'testsource' and devicehostname == 
> 'windows9.something.com'",
>  "foreach":"devicehostname",
>  "update":{
> "s":"STATS_ADD(s, devicehostname)"
>  },
>  "result":{
> "profile":"s"
>  }
>   }
>],
>"timestampField":"timestamp"
> }
> {code}
> cat test_data.logs
> {code}
> {"devicehostname": "windows9.something.com", "timestamp": 1567241981000, 
> "source.type": "testsource"}
> {code}
> The command raises an exception:
> {code}
> Exception in thread "main" org.apache.spark.SparkException: Job aborted due 
> to stage failure: Task 68 in stage 8.0 failed 1 times, most recent failure: 
> Lost task 68.0 in stage 8.0 (TID 274, localhost, executor driver): 
> com.esotericsoftware.kryo.KryoException: Unable to find class: 
> org.apache.metron.statistics.OnlineStatisticsProvider
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
>   at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:781)
>   at 
> org.apache.metron.common.utils.SerDeUtils.fromBytes(SerDeUtils.java:262)
>   at 
> org.apache.metron.profiler.spark.ProfileMeasurementAdapter.toProfileMeasurement(ProfileMeasurementAdapter.java:85)
>   at 
> org.apache.metron.profiler.spark.function.HBaseWriterFunction.call(HBaseWriterFunction.java:124)
>   at org.apache.spark.sql.Dataset$$anonfun$48.apply(Dataset.scala:2266)
>   at org.apache.spark.sql.Dataset$$anonfun$48.apply(Dataset.scala:2266)
>   at 
> org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:196)
>   at 
> org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:193)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>   at org.apache.spark.scheduler.Task.run(Task.scala:108)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.metron.statistics.OnlineStatisticsProvider
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at 

[jira] [Created] (METRON-2314) HDFSIndexingIntegrationTests Fails with SLF4J/Log4j Delegation Loop

2019-11-14 Thread Nick Allen (Jira)
Nick Allen created METRON-2314:
--

 Summary: HDFSIndexingIntegrationTests Fails with SLF4J/Log4j 
Delegation Loop
 Key: METRON-2314
 URL: https://issues.apache.org/jira/browse/METRON-2314
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen
Assignee: Nick Allen


After merging METRON-2310 into the `feature/METRON-2088-support-hdp-3.1` 
feature branch, the `HDFSIndexingIntegrationTest` fails.  
{code:java}
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.185 sec <<< FAILURE! - in org.apache.metron.indexing.integration.HDFSIndexingIntegrationTest
test(org.apache.metron.indexing.integration.HDFSIndexingIntegrationTest)  Time elapsed: 0.177 sec  <<< ERROR!
java.lang.ExceptionInInitializerError
  at java.lang.Class.forName0(Native Method)
  at java.lang.Class.forName(Class.java:264)
  at org.slf4j.impl.Log4jLoggerFactory.(Log4jLoggerFactory.java:48)
  at org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:72)
  at org.slf4j.impl.StaticLoggerBinder.(StaticLoggerBinder.java:45)
  at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
  at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
  at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
  at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
  at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
  at org.apache.metron.integration.components.KafkaComponent.(KafkaComponent.java:68)
  at org.apache.metron.integration.BaseIntegrationTest.getKafkaComponent(BaseIntegrationTest.java:29)
  at org.apache.metron.indexing.integration.IndexingIntegrationTest.test(IndexingIntegrationTest.java:72)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
  at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
  at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
  at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
  at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
  at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
  at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
  at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
  at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Caused by: java.lang.IllegalStateException: Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, preempting StackOverflowError. See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more details.
  at org.apache.log4j.Log4jLoggerFactory.(Log4jLoggerFactory.java:49)
  ... 37 more

Results :

Tests in error: 
  HDFSIndexingIntegrationTest>IndexingIntegrationTest.test:72->BaseIntegrationTest.getKafkaComponent:29 » ExceptionInInitializer
{code}
See also [https://api.travis-ci.org/v3/job/611476944/log.txt].
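
A first step toward a fix (sketch): identify which dependency drags the 
conflicting bridge onto the classpath, then exclude either log4j-over-slf4j 
or slf4j-log4j12 so that only one binding remains.
{code:java}
# list every org.slf4j artifact on the module's classpath
mvn dependency:tree -Dincludes=org.slf4j
{code}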



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2284) Metron Profiler for Spark doesn't work as expected

2019-11-12 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2284:
---
Fix Version/s: Next + 1

> Metron Profiler for Spark doesn't work as expected
> --
>
> Key: METRON-2284
> URL: https://issues.apache.org/jira/browse/METRON-2284
> Project: Metron
>  Issue Type: Bug
>Affects Versions: 0.7.1
>Reporter: Maxim Dashenko
>Assignee: Nick Allen
>Priority: Major
> Fix For: Next + 1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Used command:
> {code}
> /usr/hdp/current/spark2-client/bin/spark-submit --class 
> org.apache.metron.profiler.spark.cli.BatchProfilerCLI --properties-file 
> /usr/hcp/current/metron/config/batch-profiler.properties 
> ~/metron-profiler-spark-0.7.1.1.9.1.0-6.jar --config 
> /usr/hcp/current/metron/config/batch-profiler.properties --profiles 
> ~/profiler.json
> {code}
>  cat /usr/hcp/current/metron/config/batch-profiler.properties
> {code}
> profiler.batch.input.path=/tmp/test_data.logs
> profiler.batch.input.format=json
> profiler.period.duration=15
> profiler.period.duration.units=MINUTES
> {code}
>  
> cat ~/profiler.json
> {code}
> {
>  "profiles":[
>{
>  "profile":"batchtest5",
>  "onlyif":"source.type == 'testsource' and devicehostname == 
> 'windows9.something.com'",
>  "foreach":"devicehostname",
>  "init":{
>"val":"SET_INIT()"
>  },
>  "update":{
>"val":"SET_ADD(val, IS_EMPTY(devicehostname))"
>  },
> "result":{
>"profile":"val"
> }
>}
>  ],
>  "timestampField":"timestamp"
> }
> {code}
>  cat test_data.logs
> {code}
> {"devicehostname": "windows9.something.com", "timestamp": 1567241981000, 
> "source.type": "testsource"}
> {code}
> Stellar statement
> {code}
> PROFILE_GET('batchtest5', 'windows9.something.com', PROFILE_FIXED(100, 
> 'DAYS'))
> {code}
> Returns:
> {code}
> [[true]]
> {code}
> Expected result:
> {code}
> [[false]]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (METRON-2284) Metron Profiler for Spark doesn't work as expected

2019-11-08 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen reassigned METRON-2284:
--

Assignee: Nick Allen

> Metron Profiler for Spark doesn't work as expected
> --
>
> Key: METRON-2284
> URL: https://issues.apache.org/jira/browse/METRON-2284
> Project: Metron
>  Issue Type: Bug
>Affects Versions: 0.7.1
>Reporter: Maxim Dashenko
>Assignee: Nick Allen
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Used command:
> {code}
> /usr/hdp/current/spark2-client/bin/spark-submit --class 
> org.apache.metron.profiler.spark.cli.BatchProfilerCLI --properties-file 
> /usr/hcp/current/metron/config/batch-profiler.properties 
> ~/metron-profiler-spark-0.7.1.1.9.1.0-6.jar --config 
> /usr/hcp/current/metron/config/batch-profiler.properties --profiles 
> ~/profiler.json
> {code}
>  cat /usr/hcp/current/metron/config/batch-profiler.properties
> {code}
> profiler.batch.input.path=/tmp/test_data.logs
> profiler.batch.input.format=json
> profiler.period.duration=15
> profiler.period.duration.units=MINUTES
> {code}
>  
> cat ~/profiler.json
> {code}
> {
>  "profiles":[
>{
>  "profile":"batchtest5",
>  "onlyif":"source.type == 'testsource' and devicehostname == 
> 'windows9.something.com'",
>  "foreach":"devicehostname",
>  "init":{
>"val":"SET_INIT()"
>  },
>  "update":{
>"val":"SET_ADD(val, IS_EMPTY(devicehostname))"
>  },
> "result":{
>"profile":"val"
> }
>}
>  ],
>  "timestampField":"timestamp"
> }
> {code}
>  cat test_data.logs
> {code}
> {"devicehostname": "windows9.something.com", "timestamp": 1567241981000, 
> "source.type": "testsource"}
> {code}
> Stellar statement
> {code}
> PROFILE_GET('batchtest5', 'windows9.something.com', PROFILE_FIXED(100, 
> 'DAYS'))
> {code}
> Returns:
> {code}
> [[true]]
> {code}
> Expected result:
> {code}
> [[false]]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (METRON-2284) Metron Profiler for Spark doesn't work as expected

2019-11-07 Thread Nick Allen (Jira)


[ 
https://issues.apache.org/jira/browse/METRON-2284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969431#comment-16969431
 ] 

Nick Allen commented on METRON-2284:


This definitely looks like a bug.  The behavior of the Profiler in the REPL and 
the Batch Profiler in Spark should be identical, but it seems not to be.
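
To compare the two yourself, here is a minimal sketch of replaying the reported 
message through the REPL's profiler functions (PROFILER_INIT, PROFILER_APPLY, 
and PROFILER_FLUSH; the SHELL_EDIT calls just open an editor to paste in the 
JSON shown in this ticket, and the # comments are illustrative only):
{code}
[Stellar]>>> conf := SHELL_EDIT()            # paste ~/profiler.json
[Stellar]>>> profiler := PROFILER_INIT(conf)
[Stellar]>>> msg := SHELL_EDIT()             # paste the test_data.logs message
[Stellar]>>> PROFILER_APPLY(msg, profiler)
[Stellar]>>> values := PROFILER_FLUSH(profiler)
{code}
If the REPL flushes the expected value while Spark persists [[true]], that 
isolates the bug to the batch path.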

What are you trying to do with this profile?  Maybe I can help you with a 
workaround until we can fix the problem.

BTW, thank you for providing such a clear bug report with the exact steps to 
replicate.  Very helpful!

> Metron Profiler for Spark doesn't work as expected
> --
>
> Key: METRON-2284
> URL: https://issues.apache.org/jira/browse/METRON-2284
> Project: Metron
>  Issue Type: Bug
>Affects Versions: 0.7.1
>Reporter: Maxim Dashenko
>Priority: Major
>
> Used command:
> {code}
> /usr/hdp/current/spark2-client/bin/spark-submit --class 
> org.apache.metron.profiler.spark.cli.BatchProfilerCLI --properties-file 
> /usr/hcp/current/metron/config/batch-profiler.properties 
> ~/metron-profiler-spark-0.7.1.1.9.1.0-6.jar --config 
> /usr/hcp/current/metron/config/batch-profiler.properties --profiles 
> ~/profiler.json
> {code}
>  cat /usr/hcp/current/metron/config/batch-profiler.properties
> {code}
> profiler.batch.input.path=/tmp/test_data.logs
> profiler.batch.input.format=json
> profiler.period.duration=15
> profiler.period.duration.units=MINUTES
> {code}
>  
> cat ~/profiler.json
> {code}
> {
>  "profiles":[
>{
>  "profile":"batchtest5",
>  "onlyif":"source.type == 'testsource' and devicehostname == 
> 'windows9.something.com'",
>  "foreach":"devicehostname",
>  "init":{
>"val":"SET_INIT()"
>  },
>  "update":{
>"val":"SET_ADD(val, IS_EMPTY(devicehostname))"
>  },
> "result":{
>"profile":"val"
> }
>}
>  ],
>  "timestampField":"timestamp"
> }
> {code}
>  cat test_data.logs
> {code}
> {"devicehostname": "windows9.something.com", "timestamp": 1567241981000, 
> "source.type": "testsource"}
> {code}
> Stellar statement
> {code}
> PROFILE_GET('batchtest5', 'windows9.something.com', PROFILE_FIXED(100, 
> 'DAYS'))
> {code}
> Returns:
> {code}
> [[true]]
> {code}
> Expected result:
> {code}
> [[false]]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (METRON-2285) Metron Profiler for Spark - Stellar function STATS_ADD can't be used

2019-11-07 Thread Nick Allen (Jira)


[ 
https://issues.apache.org/jira/browse/METRON-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969427#comment-16969427
 ] 

Nick Allen commented on METRON-2285:


Great.  I agree there is a bug here.  Hopefully we can get you a solid 
workaround until this can be fixed.

> Metron Profiler for Spark - Stellar function STATS_ADD can't be used
> 
>
> Key: METRON-2285
> URL: https://issues.apache.org/jira/browse/METRON-2285
> Project: Metron
>  Issue Type: Bug
>Affects Versions: 0.7.1
>Reporter: Maxim Dashenko
>Priority: Major
>
> Used command:
> {code}
> /usr/hdp/current/spark2-client/bin/spark-submit --class 
> org.apache.metron.profiler.spark.cli.BatchProfilerCLI --properties-file 
> /usr/hcp/current/metron/config/batch-profiler.properties 
> ~/metron-profiler-spark-0.7.1.1.9.1.0-6.jar --config 
> /usr/hcp/current/metron/config/batch-profiler.properties --profiles 
> ~/profiler.json
> {code}
>  cat /usr/hcp/current/metron/config/batch-profiler.properties
> {code}
> profiler.batch.input.path=/tmp/test_data.logs
> profiler.batch.input.format=json
> profiler.period.duration=15
> profiler.period.duration.units=MINUTES
> {code}
>  
> cat ~/profiler.json
> {code}
> {
>"profiles":[
>   {
>  "profile":"batchteststat",
>  "onlyif":"source.type == 'testsource' and devicehostname == 
> 'windows9.something.com'",
>  "foreach":"devicehostname",
>  "update":{
> "s":"STATS_ADD(s, devicehostname)"
>  },
>  "result":{
> "profile":"s"
>  }
>   }
>],
>"timestampField":"timestamp"
> }
> {code}
> cat test_data.logs
> {code}
> {"devicehostname": "windows9.something.com", "timestamp": 1567241981000, 
> "source.type": "testsource"}
> {code}
> The command raises an exception:
> {code}
> Exception in thread "main" org.apache.spark.SparkException: Job aborted due 
> to stage failure: Task 68 in stage 8.0 failed 1 times, most recent failure: 
> Lost task 68.0 in stage 8.0 (TID 274, localhost, executor driver): 
> com.esotericsoftware.kryo.KryoException: Unable to find class: 
> org.apache.metron.statistics.OnlineStatisticsProvider
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
>   at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:781)
>   at 
> org.apache.metron.common.utils.SerDeUtils.fromBytes(SerDeUtils.java:262)
>   at 
> org.apache.metron.profiler.spark.ProfileMeasurementAdapter.toProfileMeasurement(ProfileMeasurementAdapter.java:85)
>   at 
> org.apache.metron.profiler.spark.function.HBaseWriterFunction.call(HBaseWriterFunction.java:124)
>   at org.apache.spark.sql.Dataset$$anonfun$48.apply(Dataset.scala:2266)
>   at org.apache.spark.sql.Dataset$$anonfun$48.apply(Dataset.scala:2266)
>   at 
> org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:196)
>   at 
> org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:193)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>   at org.apache.spark.scheduler.Task.run(Task.scala:108)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.metron.statistics.OnlineStatisticsProvider
>   at 

[jira] [Commented] (METRON-2285) Metron Profiler for Spark - Stellar function STATS_ADD can't be used

2019-11-07 Thread Nick Allen (Jira)


[ 
https://issues.apache.org/jira/browse/METRON-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969357#comment-16969357
 ] 

Nick Allen commented on METRON-2285:


Also, there is a problem in your example that I wanted to point out as a 
heads-up, so that it doesn't cause you trouble later.

In your example, "devicehostname" is a String like "windows9.something.com".  
It does not make sense to pass a String to STATS_ADD in your update expression.  
You will want to pass a numeric value to the data sketch that you are building.
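
For example, an update expression over a derived numeric value would be valid 
input for the sketch (LENGTH is used here purely for illustration):
{code}
"update":{
   "s":"STATS_ADD(s, LENGTH(devicehostname))"
}
{code}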

> Metron Profiler for Spark - Stellar function STATS_ADD can't be used
> 
>
> Key: METRON-2285
> URL: https://issues.apache.org/jira/browse/METRON-2285
> Project: Metron
>  Issue Type: Bug
>Affects Versions: 0.7.1
>Reporter: Maxim Dashenko
>Priority: Major
>
> Used command:
> {code}
> /usr/hdp/current/spark2-client/bin/spark-submit --class 
> org.apache.metron.profiler.spark.cli.BatchProfilerCLI --properties-file 
> /usr/hcp/current/metron/config/batch-profiler.properties 
> ~/metron-profiler-spark-0.7.1.1.9.1.0-6.jar --config 
> /usr/hcp/current/metron/config/batch-profiler.properties --profiles 
> ~/profiler.json
> {code}
>  cat /usr/hcp/current/metron/config/batch-profiler.properties
> {code}
> profiler.batch.input.path=/tmp/test_data.logs
> profiler.batch.input.format=json
> profiler.period.duration=15
> profiler.period.duration.units=MINUTES
> {code}
>  
> cat ~/profiler.json
> {code}
> {
>"profiles":[
>   {
>  "profile":"batchteststat",
>  "onlyif":"source.type == 'testsource' and devicehostname == 
> 'windows9.something.com'",
>  "foreach":"devicehostname",
>  "update":{
> "s":"STATS_ADD(s, devicehostname)"
>  },
>  "result":{
> "profile":"s"
>  }
>   }
>],
>"timestampField":"timestamp"
> }
> {code}
> cat test_data.logs
> {code}
> {"devicehostname": "windows9.something.com", "timestamp": 1567241981000, 
> "source.type": "testsource"}
> {code}
> The command raises an exception:
> {code}
> Exception in thread "main" org.apache.spark.SparkException: Job aborted due 
> to stage failure: Task 68 in stage 8.0 failed 1 times, most recent failure: 
> Lost task 68.0 in stage 8.0 (TID 274, localhost, executor driver): 
> com.esotericsoftware.kryo.KryoException: Unable to find class: 
> org.apache.metron.statistics.OnlineStatisticsProvider
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
>   at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:781)
>   at 
> org.apache.metron.common.utils.SerDeUtils.fromBytes(SerDeUtils.java:262)
>   at 
> org.apache.metron.profiler.spark.ProfileMeasurementAdapter.toProfileMeasurement(ProfileMeasurementAdapter.java:85)
>   at 
> org.apache.metron.profiler.spark.function.HBaseWriterFunction.call(HBaseWriterFunction.java:124)
>   at org.apache.spark.sql.Dataset$$anonfun$48.apply(Dataset.scala:2266)
>   at org.apache.spark.sql.Dataset$$anonfun$48.apply(Dataset.scala:2266)
>   at 
> org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:196)
>   at 
> org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:193)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>   at org.apache.spark.scheduler.Task.run(Task.scala:108)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> 

[jira] [Comment Edited] (METRON-2285) Metron Profiler for Spark - Stellar function STATS_ADD can't be used

2019-11-07 Thread Nick Allen (Jira)


[ 
https://issues.apache.org/jira/browse/METRON-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969352#comment-16969352
 ] 

Nick Allen edited comment on METRON-2285 at 11/7/19 3:35 PM:
-

While you are able to use data sketches (like STATS_ADD, STATS_COUNT, etc.) in 
your profiles, you cannot currently persist those in HBase.  This is a bug that 
needs to be addressed.

As a workaround, can you store a numeric value in HBase instead of attempting 
to store the data sketch itself?  For example, if you want a mean, just store 
the mean.
{code:java}
{
  "profiles": [
{
  "profile": "batchteststat",
  "onlyif": "source.type == 'testsource' and devicehostname == 
'windows9.something.com'",
  "foreach": "devicehostname",
  "update": {
"s": "STATS_ADD(s, devicehostname)"
  },
  "result": {
"profile": "STATS_MEAN(s)"
  }
}
  ],
  "timestampField": "timestamp"
}
{code}
 


was (Author: nickwallen):
While you are able to use data sketches (like STATS_ADD, STATS_COUNT, etc.) in 
your profiles, you cannot currently persist those in HBase.  This is a bug that 
needs to be addressed.

As a workaround, can you store a numeric value in HBase instead of attempting 
to store the data sketch itself?  For example, if you want a mean, just store 
the mean.
{code}
{
  "profiles": [
    {
      "profile": "batchteststat",
      "onlyif": "source.type == 'testsource' and devicehostname == 'windows9.something.com'",
      "foreach": "devicehostname",
      "update": {
        "s": "STATS_ADD(s, devicehostname)"
      },
      "result": {
        "profile": "STATS_MEAN(s)"
      }
    }
  ],
  "timestampField": "timestamp"
}
{code}

> Metron Profiler for Spark - Stellar function STATS_ADD can't be used
> 
>
> Key: METRON-2285
> URL: https://issues.apache.org/jira/browse/METRON-2285
> Project: Metron
>  Issue Type: Bug
>Affects Versions: 0.7.1
>Reporter: Maxim Dashenko
>Priority: Major
>
> Used command:
> {code}
> /usr/hdp/current/spark2-client/bin/spark-submit --class 
> org.apache.metron.profiler.spark.cli.BatchProfilerCLI --properties-file 
> /usr/hcp/current/metron/config/batch-profiler.properties 
> ~/metron-profiler-spark-0.7.1.1.9.1.0-6.jar --config 
> /usr/hcp/current/metron/config/batch-profiler.properties --profiles 
> ~/profiler.json
> {code}
>  cat /usr/hcp/current/metron/config/batch-profiler.properties
> {code}
> profiler.batch.input.path=/tmp/test_data.logs
> profiler.batch.input.format=json
> profiler.period.duration=15
> profiler.period.duration.units=MINUTES
> {code}
>  
> cat ~/profiler.json
> {code}
> {
>"profiles":[
>   {
>  "profile":"batchteststat",
>  "onlyif":"source.type == 'testsource' and devicehostname == 
> 'windows9.something.com'",
>  "foreach":"devicehostname",
>  "update":{
> "s":"STATS_ADD(s, devicehostname)"
>  },
>  "result":{
> "profile":"s"
>  }
>   }
>],
>"timestampField":"timestamp"
> }
> {code}
> cat test_data.logs
> {code}
> {"devicehostname": "windows9.something.com", "timestamp": 1567241981000, 
> "source.type": "testsource"}
> {code}
> The command raises an exception:
> {code}
> Exception in thread "main" org.apache.spark.SparkException: Job aborted due 
> to stage failure: Task 68 in stage 8.0 failed 1 times, most recent failure: 
> Lost task 68.0 in stage 8.0 (TID 274, localhost, executor driver): 
> com.esotericsoftware.kryo.KryoException: Unable to find class: 
> org.apache.metron.statistics.OnlineStatisticsProvider
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
>   at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:781)
>   at 
> org.apache.metron.common.utils.SerDeUtils.fromBytes(SerDeUtils.java:262)
>   at 
> org.apache.metron.profiler.spark.ProfileMeasurementAdapter.toProfileMeasurement(ProfileMeasurementAdapter.java:85)
>   at 
> org.apache.metron.profiler.spark.function.HBaseWriterFunction.call(HBaseWriterFunction.java:124)
>   at org.apache.spark.sql.Dataset$$anonfun$48.apply(Dataset.scala:2266)
>   at org.apache.spark.sql.Dataset$$anonfun$48.apply(Dataset.scala:2266)
>   at 
> org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:196)
>   at 
> org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:193)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
>   at 
> 

[jira] [Commented] (METRON-2285) Metron Profiler for Spark - Stellar function STATS_ADD can't be used

2019-11-07 Thread Nick Allen (Jira)


[ 
https://issues.apache.org/jira/browse/METRON-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969352#comment-16969352
 ] 

Nick Allen commented on METRON-2285:


While you are able to use data sketches (like STATS_ADD, STATS_COUNT, etc.) in 
your profiles, you cannot currently persist those in HBase.  This is a bug that 
needs to be addressed.

As a workaround, can you store a numeric value in HBase instead of attempting 
to store the data sketch itself?  For example, if you want a mean, just store 
the mean.
{code}
{
  "profiles": [
    {
      "profile": "batchteststat",
      "onlyif": "source.type == 'testsource' and devicehostname == 'windows9.something.com'",
      "foreach": "devicehostname",
      "update": {
        "s": "STATS_ADD(s, devicehostname)"
      },
      "result": {
        "profile": "STATS_MEAN(s)"
      }
    }
  ],
  "timestampField": "timestamp"
}
{code}

> Metron Profiler for Spark - Stellar function STATS_ADD can't be used
> 
>
> Key: METRON-2285
> URL: https://issues.apache.org/jira/browse/METRON-2285
> Project: Metron
>  Issue Type: Bug
>Affects Versions: 0.7.1
>Reporter: Maxim Dashenko
>Priority: Major
>
> Used command:
> {code}
> /usr/hdp/current/spark2-client/bin/spark-submit --class 
> org.apache.metron.profiler.spark.cli.BatchProfilerCLI --properties-file 
> /usr/hcp/current/metron/config/batch-profiler.properties 
> ~/metron-profiler-spark-0.7.1.1.9.1.0-6.jar --config 
> /usr/hcp/current/metron/config/batch-profiler.properties --profiles 
> ~/profiler.json
> {code}
>  cat /usr/hcp/current/metron/config/batch-profiler.properties
> {code}
> profiler.batch.input.path=/tmp/test_data.logs
> profiler.batch.input.format=json
> profiler.period.duration=15
> profiler.period.duration.units=MINUTES
> {code}
>  
> cat ~/profiler.json
> {code}
> {
>"profiles":[
>   {
>  "profile":"batchteststat",
>  "onlyif":"source.type == 'testsource' and devicehostname == 
> 'windows9.something.com'",
>  "foreach":"devicehostname",
>  "update":{
> "s":"STATS_ADD(s, devicehostname)"
>  },
>  "result":{
> "profile":"s"
>  }
>   }
>],
>"timestampField":"timestamp"
> }
> {code}
> cat test_data.logs
> {code}
> {"devicehostname": "windows9.something.com", "timestamp": 1567241981000, 
> "source.type": "testsource"}
> {code}
> The command raises an exception:
> {code}
> Exception in thread "main" org.apache.spark.SparkException: Job aborted due 
> to stage failure: Task 68 in stage 8.0 failed 1 times, most recent failure: 
> Lost task 68.0 in stage 8.0 (TID 274, localhost, executor driver): 
> com.esotericsoftware.kryo.KryoException: Unable to find class: 
> org.apache.metron.statistics.OnlineStatisticsProvider
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
>   at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:781)
>   at 
> org.apache.metron.common.utils.SerDeUtils.fromBytes(SerDeUtils.java:262)
>   at 
> org.apache.metron.profiler.spark.ProfileMeasurementAdapter.toProfileMeasurement(ProfileMeasurementAdapter.java:85)
>   at 
> org.apache.metron.profiler.spark.function.HBaseWriterFunction.call(HBaseWriterFunction.java:124)
>   at org.apache.spark.sql.Dataset$$anonfun$48.apply(Dataset.scala:2266)
>   at org.apache.spark.sql.Dataset$$anonfun$48.apply(Dataset.scala:2266)
>   at 
> org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:196)
>   at 
> org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6.apply(objects.scala:193)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>  

[jira] [Created] (METRON-2308) Fix 'Degrees' Example Profile

2019-11-06 Thread Nick Allen (Jira)
Nick Allen created METRON-2308:
--

 Summary: Fix 'Degrees' Example Profile
 Key: METRON-2308
 URL: https://issues.apache.org/jira/browse/METRON-2308
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen
Assignee: Nick Allen


One of the example profiles uses "in" as a variable name.  "in" is now a 
reserved keyword and cannot be used as a variable name, so the example should 
be updated so that it works again.
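
As a sketch of the kind of rename needed (the actual expressions in the 
'Degrees' example may differ; STATS_INIT/STATS_ADD are shown for illustration 
only):
{code}
// before: 'in' is now a reserved keyword in Stellar
"init":   { "in": "STATS_INIT()" },
"update": { "in": "STATS_ADD(in, 1)" }

// after: any non-reserved name works
"init":   { "in_degree": "STATS_INIT()" },
"update": { "in_degree": "STATS_ADD(in_degree, 1)" }
{code}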



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2305) Unable to Add Enrichment Coprocessor with Kerberos

2019-10-30 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2305:
---
Summary: Unable to Add Enrichment Coprocessor with Kerberos  (was: provide 
hbase acl to metron user before altering enrichment table and adding 
coprocessor)

> Unable to Add Enrichment Coprocessor with Kerberos
> --
>
> Key: METRON-2305
> URL: https://issues.apache.org/jira/browse/METRON-2305
> Project: Metron
>  Issue Type: Bug
>Reporter: Mohan Venkateshaiah
>Assignee: Mohan Venkateshaiah
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The Metron enrichment topology fails to start on a pre-kerberized cluster 
> with insufficient permissions to load the HBase coprocessor:
> {code:java}
> 2019-10-30 12:18:29,579 - Loading HBase coprocessor for enrichments
> 2019-10-30 12:18:29,580 - See 
> https://hbase.apache.org/2.0/book.html#load_coprocessor_in_shell for more 
> detail
> 2019-10-30 12:18:29,580 - HBase coprocessor setup - first disabling the 
> enrichments HBase table.
> 2019-10-30 12:18:29,580 - Executing command echo "disable 'enrichment'" | 
> hbase shell -n
> 2019-10-30 12:18:29,580 - Execute['echo "disable 'enrichment'" | hbase shell 
> -n'] {'logoutput': True, 'tries': 1, 'user': 'metron'}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.4.0-315/phoenix/phoenix-5.0.0.3.1.4.0-315-server.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.4.0-315/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Took 1.4908 seconds
> ERROR RuntimeError: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions (user=met...@hwx.site, scope=default:enrichment, 
> params=[table=default:enrichment],action=CREATE)
> 2019-10-30 12:18:38,833 - Skipping stack-select on METRON because it does not 
> exist in the stack-select package structure.
> {code}
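
A minimal sketch of the grant implied by the original summary, run before 
disabling/altering the table (the 'RWXCA' permission set shown is an 
assumption; grant whatever your ACL policy requires):
{code}
echo "grant 'metron', 'RWXCA', 'enrichment'" | hbase shell -n
{code}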



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (METRON-2303) Change Default HDFS Port for Batch Profiler

2019-10-29 Thread Nick Allen (Jira)
Nick Allen created METRON-2303:
--

 Summary: Change Default HDFS Port for Batch Profiler
 Key: METRON-2303
 URL: https://issues.apache.org/jira/browse/METRON-2303
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen
Assignee: Nick Allen


The Batch Profiler properties currently default to port 9000 for HDFS.  This 
should be changed to port 8020 to match the default port used with HDP.
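
As a sketch, the stock property would change along these lines (the input path 
shown follows the shipped default and may differ per install):
{code}
profiler.batch.input.path=hdfs://localhost:8020/apps/metron/indexing/indexed/*/*
{code}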



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2223) Reconcile Versions in Root POM with HDP 3.1

2019-10-29 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2223:
---
Summary: Reconcile Versions in Root POM with HDP 3.1  (was: Reconcile Maven 
versions)

> Reconcile Versions in Root POM with HDP 3.1
> ---
>
> Key: METRON-2223
> URL: https://issues.apache.org/jira/browse/METRON-2223
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Ryan Merriman
>Assignee: Nick Allen
>Priority: Major
>
> It may be useful to upgrade a version independently of other components in 
> HDP 3.1.  We will need to reconcile these versions before the feature branch 
> is merged into master.  This task ensures all Maven profile dependency 
> versions match the correct HDP version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (METRON-2223) Reconcile Maven versions

2019-10-29 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen reassigned METRON-2223:
--

Assignee: Nick Allen

> Reconcile Maven versions
> 
>
> Key: METRON-2223
> URL: https://issues.apache.org/jira/browse/METRON-2223
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Ryan Merriman
>Assignee: Nick Allen
>Priority: Major
>
> It may be useful to upgrade a version independently of other components in 
> HDP 3.1.  We will need to reconcile these versions before the feature branch 
> is merged into master.  This task ensures all Maven profile dependency 
> versions match the correct HDP version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2222) Remove Overrides for Storm 1.0.x

2019-10-28 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-:
---
Description: The `metron-storm-kafka-overrides` project contains some 
overridden `storm-kafka` classes that were necessary to address performance 
issues in older versions of `storm-kafka`.  Now that we have upgraded to Storm 
1.2.1 these fixes are included in the code base and the overrides are no longer 
needed.  (was: There isn't much to this task other than calling out the major 
Storm version deprecation once this upgrade makes its way to master. We should 
double-check that Upgrading.md reflects the current state of changes from this 
feature branch.)

> Remove Overrides for Storm 1.0.x
> 
>
> Key: METRON-
> URL: https://issues.apache.org/jira/browse/METRON-
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Michael Miklavcic
>Assignee: Nick Allen
>Priority: Major
>
> The `metron-storm-kafka-overrides` project contains some overridden 
> `storm-kafka` classes that were necessary to address performance issues in 
> older versions of `storm-kafka`.  Now that we have upgraded to Storm 1.2.1 
> these fixes are included in the code base and the overrides are no longer 
> needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2222) Remove Overrides for Storm 1.0.x

2019-10-28 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-:
---
Summary: Remove Overrides for Storm 1.0.x  (was: Deprecate Storm 1.0.x)

> Remove Overrides for Storm 1.0.x
> 
>
> Key: METRON-
> URL: https://issues.apache.org/jira/browse/METRON-
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Michael Miklavcic
>Assignee: Nick Allen
>Priority: Major
>
> There isn't much to this task other than calling out the major Storm version 
> deprecation once this upgrade makes its way to master. We should double-check 
> that Upgrading.md reflects the current state of changes from this feature 
> branch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (METRON-2222) Deprecate Storm 1.0.x

2019-10-25 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen reassigned METRON-:
--

Assignee: Nick Allen

> Deprecate Storm 1.0.x
> -
>
> Key: METRON-
> URL: https://issues.apache.org/jira/browse/METRON-
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Michael Miklavcic
>Assignee: Nick Allen
>Priority: Major
>
> There isn't much to this task other than calling out the major Storm version 
> deprecation once this upgrade makes its way to master. We should double-check 
> that Upgrading.md reflects the current state of changes from this feature 
> branch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (METRON-2301) Building Against Wrong Storm Flux Version

2019-10-25 Thread Nick Allen (Jira)
Nick Allen created METRON-2301:
--

 Summary: Building Against Wrong Storm Flux Version
 Key: METRON-2301
 URL: https://issues.apache.org/jira/browse/METRON-2301
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


When we upgraded to Storm 1.2.1, we did not update the version of Storm's Flux 
library that we use.  We are still building against 
`org.apache.storm:flux-core:1.0.1`.
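
A sketch of the kind of dependency alignment needed in the POM (whether the 
version is sourced from a shared property is an assumption):
{code:xml}
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>flux-core</artifactId>
  <!-- align with the Storm version in use -->
  <version>1.2.1</version>
</dependency>
{code}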



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (METRON-2233) Deprecate CentOS 6 Development Environment

2019-10-24 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2233:
---
Summary: Deprecate CentOS 6 Development Environment  (was: Deprecate Centos 
6)

> Deprecate CentOS 6 Development Environment
> --
>
> Key: METRON-2233
> URL: https://issues.apache.org/jira/browse/METRON-2233
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Michael Miklavcic
>Assignee: Nick Allen
>Priority: Major
>
> CentOS 6 will no longer work with HDP 3.1. We have a CentOS 7 setup that 
> should replace it entirely. We will need to migrate and update any related 
> documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (METRON-2233) Deprecate Centos 6

2019-10-24 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen reassigned METRON-2233:
--

Assignee: Nick Allen

> Deprecate Centos 6
> --
>
> Key: METRON-2233
> URL: https://issues.apache.org/jira/browse/METRON-2233
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Michael Miklavcic
>Assignee: Nick Allen
>Priority: Major
>
> CentOS 6 will no longer work with HDP 3.1. We have a CentOS 7 setup that 
> should replace it entirely. We will need to migrate and update any related 
> documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (METRON-2298) "User Base DN" Missing from Security Configuration

2019-10-22 Thread Nick Allen (Jira)
Nick Allen created METRON-2298:
--

 Summary: "User Base DN" Missing from Security Configuration
 Key: METRON-2298
 URL: https://issues.apache.org/jira/browse/METRON-2298
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen
Assignee: Nick Allen
 Attachments: Screen Shot 2019-10-22 at 5.28.07 PM.png, Screen Shot 
2019-10-22 at 5.30.12 PM.png

The "User Base DN" is not visible on Metron's Security configuration tab in 
Ambari.  This property should be visible along with the other LDAP properties.

This property is currently only visible under "Advanced metron-security-env".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (METRON-2297) Enrichment Topology Unable to Load Geo IP Data from HDFS

2019-10-22 Thread Nick Allen (Jira)


[ 
https://issues.apache.org/jira/browse/METRON-2297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957360#comment-16957360
 ] 

Nick Allen commented on METRON-2297:


* When using Kerberos authentication, no topology is able to access HDFS, which 
primarily impacts Enrichment and Batch Indexing.
 * If the local cache on a Storm worker contains a ticket for the 'metron' 
user, the topology is able to access HDFS.
 * When using Kerberos authentication, all topologies are able to access Kafka.
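
A quick sanity check for the second point on any given Storm worker host (a 
sketch; assumes shell access to the node and the default credential cache 
location):
{code}
# does the local credential cache already hold a ticket for the 'metron' user?
sudo -u metron klist
{code}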

 

> Enrichment Topology Unable to Load Geo IP Data from HDFS
> 
>
> Key: METRON-2297
> URL: https://issues.apache.org/jira/browse/METRON-2297
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>
> On the `feature/METRON-2088-support-hdp-3.1` feature branch, the Enrichment 
> topology is unable to load the GeoIP data from HDFS when using Kerberos 
> authentication.  The Enrichment topology shows this error.
> {code:java}
> 2019-10-03 18:23:18.545 o.a.h.i.Client Curator-TreeCache-0 [WARN] Exception 
> encountered while connecting to the server : 
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]
> 2019-10-03 18:23:18.552 o.a.m.e.a.m.MaxMindDbUtilities Curator-TreeCache-0 
> [ERROR] Unable to open new database file 
> /apps/metron/geo/default/GeoLite2-City.tar.gz
> java.io.IOException: DestHost:destPort metrong-1.openstacklocal:8020 , 
> LocalHost:localPort metrong-7/172.22.74.121:0. Failed on local exception: 
> java.io.IOException: org.apache.hadoop.security.AccessControlException: 
> Client cannot authenticate via:[TOKEN, KERBEROS]
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method) ~[?:1.8.0_112]
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  ~[?:1.8.0_112]
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  ~[?:1.8.0_112]
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
> ~[?:1.8.0_112]
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) 
> ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806) 
> ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1502) 
> ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1444) 
> ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
>   at org.apache.hadoop.ipc.Client.call(Client.java:1354) 
> ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>  ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>  ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
>   at com.sun.proxy.$Proxy55.getBlockLocations(Unknown Source) ~[?:?]
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:317)
>  ~[hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>  ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>  ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>  ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>  ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>  ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
>   at com.sun.proxy.$Proxy56.getBlockLocations(Unknown Source) ~[?:?]
>   at 
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:862) 
> ~[hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:?]
>   at 
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:851) 
> ~[hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:?]
>   at 
> 

[jira] [Created] (METRON-2297) Enrichment Topology Unable to Load Geo IP Data from HDFS

2019-10-22 Thread Nick Allen (Jira)
Nick Allen created METRON-2297:
--

 Summary: Enrichment Topology Unable to Load Geo IP Data from HDFS
 Key: METRON-2297
 URL: https://issues.apache.org/jira/browse/METRON-2297
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


On the `feature/METRON-2088-support-hdp-3.1` feature branch, the Enrichment 
topology is unable to load the GeoIP data from HDFS when using Kerberos 
authentication.  The Enrichment topology shows this error.
{code:java}
2019-10-03 18:23:18.545 o.a.h.i.Client Curator-TreeCache-0 [WARN] Exception 
encountered while connecting to the server : 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS]
2019-10-03 18:23:18.552 o.a.m.e.a.m.MaxMindDbUtilities Curator-TreeCache-0 
[ERROR] Unable to open new database file 
/apps/metron/geo/default/GeoLite2-City.tar.gz
java.io.IOException: DestHost:destPort metrong-1.openstacklocal:8020 , 
LocalHost:localPort metrong-7/172.22.74.121:0. Failed on local exception: 
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client 
cannot authenticate via:[TOKEN, KERBEROS]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
Method) ~[?:1.8.0_112]
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 ~[?:1.8.0_112]
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
~[?:1.8.0_112]
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) 
~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806) 
~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1502) 
~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1444) 
~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at org.apache.hadoop.ipc.Client.call(Client.java:1354) 
~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
 ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
 ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at com.sun.proxy.$Proxy55.getBlockLocations(Unknown Source) ~[?:?]
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:317)
 ~[hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[?:1.8.0_112]
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[?:1.8.0_112]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_112]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
 ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
 ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
 ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
 ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
 ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at com.sun.proxy.$Proxy56.getBlockLocations(Unknown Source) ~[?:?]
at 
org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:862) 
~[hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:?]
at 
org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:851) 
~[hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:?]
at 
org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:840) 
~[hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:?]
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1004) 
~[hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:?]
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:320)
 ~[hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:?]
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
 ~[hadoop-hdfs-client-3.1.1.3.1.4.0-315.jar:?]
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 ~[hadoop-common-3.1.1.3.1.4.0-315.jar:?]
at 

[jira] [Assigned] (METRON-2295) [UI] Displaying "No Data" message in the Alerts UI screen

2019-10-18 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen reassigned METRON-2295:
--

Assignee: Subhash Jha

> [UI] Displaying "No Data" message in the Alerts UI screen
> -
>
> Key: METRON-2295
> URL: https://issues.apache.org/jira/browse/METRON-2295
> Project: Metron
>  Issue Type: Improvement
>Reporter: Subhash Jha
>Assignee: Subhash Jha
>Priority: Minor
>
> Currently, no data is displayed when the user visits the main page of the 
> Alerts UI without clicking the search button. It would be better to show the 
> user a message instead of nothing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (METRON-2279) Unable to Index to Solr with Kerberos

2019-10-09 Thread Nick Allen (Jira)
Nick Allen created METRON-2279:
--

 Summary: Unable to Index to Solr with Kerberos
 Key: METRON-2279
 URL: https://issues.apache.org/jira/browse/METRON-2279
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


 
On the HDP 3.1 feature branch the indexing topology is unable to index 
telemetry to Solr when running in a secure/kerberized environment.
{code:java}
2019-10-09 06:32:33.039 o.a.s.util Thread-7-indexingBolt-executor[3 3] [ERROR] 
Async loop died!
java.lang.NoSuchMethodError: 
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.setValidateAfterInactivity(I)V
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:277)
 ~[stormjar.jar:?]
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:328)
 ~[stormjar.jar:?]
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:266)
 ~[stormjar.jar:?]
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:253)
 ~[stormjar.jar:?]
at 
org.apache.metron.solr.writer.SolrClientFactory.create(SolrClientFactory.java:49)
 ~[stormjar.jar:?]
at org.apache.metron.solr.writer.SolrWriter.init(SolrWriter.java:171) 
~[stormjar.jar:?]
at 
org.apache.metron.writer.bolt.BulkMessageWriterBolt.prepare(BulkMessageWriterBolt.java:239)
 ~[stormjar.jar:?]
at 
org.apache.storm.daemon.executor$fn__10219$fn__10232.invoke(executor.clj:810) 
~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:482) 
[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
2019-10-09 06:32:33.166 o.a.s.d.executor Thread-7-indexingBolt-executor[3 3] 
[ERROR]
java.lang.NoSuchMethodError: 
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.setValidateAfterInactivity(I)V
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:277)
 ~[stormjar.jar:?]
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:328)
 ~[stormjar.jar:?]
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:266)
 ~[stormjar.jar:?]
at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:253)
 ~[stormjar.jar:?]
at 
org.apache.metron.solr.writer.SolrClientFactory.create(SolrClientFactory.java:49)
 ~[stormjar.jar:?]
at org.apache.metron.solr.writer.SolrWriter.init(SolrWriter.java:171) 
~[stormjar.jar:?]
at 
org.apache.metron.writer.bolt.BulkMessageWriterBolt.prepare(BulkMessageWriterBolt.java:239)
 ~[stormjar.jar:?]
at 
org.apache.storm.daemon.executor$fn__10219$fn__10232.invoke(executor.clj:810) 
~[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:482) 
[storm-core-1.2.1.3.1.4.0-315.jar:1.2.1.3.1.4.0-315]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (METRON-2275) Solr Indexing Topology Fails to Start on Secure Cluster with HDP 3.1

2019-10-03 Thread Nick Allen (Jira)
Nick Allen created METRON-2275:
--

 Summary: Solr Indexing Topology Fails to Start on Secure Cluster 
with HDP 3.1
 Key: METRON-2275
 URL: https://issues.apache.org/jira/browse/METRON-2275
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


Many thanks to [~anandsubbu] for finding this bug. 

The Solr indexing topology will not start on a secured/kerberized cluster 
running HDP 3.1.
{code:java}
2019-10-03 10:25:30,948 - 
Execute['/usr/hcp/1.9.2.0-94/metron/bin/start_solr_topology.sh'] {'logoutput': 
True, 'tries': 3, 'user': 'metron', 'try_sleep': 5}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/3.1.4.0-315/storm/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/3.1.4.0-315/storm/contrib/storm-autocreds/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/3.1.4.0-315/storm/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/3.1.4.0-315/storm/contrib/storm-autocreds/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/3.1.4.0-315/storm/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/3.1.4.0-315/storm/contrib/storm-autocreds/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Running: /usr/jdk64/jdk1.8.0_112/bin/java -server -Ddaemon.name= 
-Dstorm.options= -Dstorm.home=/grid/0/hdp/3.1.4.0-315/storm 
-Dstorm.log.dir=/var/log/storm 
-Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= 
-cp 
/grid/0/hdp/3.1.4.0-315/storm/*:/grid/0/hdp/3.1.4.0-315/storm/lib/*:/grid/0/hdp/3.1.4.0-315/storm/extlib/*:/usr/hdp/current/storm-supervisor/external/storm-autocreds/*
 org.apache.storm.daemon.ClientJarTransformerRunner 
org.apache.storm.hack.StormShadeTransformer 
/usr/hcp/1.9.2.0-94/metron/lib/metron-solr-storm-0.7.1.1.9.2.0-94-uber.jar 
/tmp/222f7eaae5c811e9937dfa163ea52b42.jar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/3.1.4.0-315/storm/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/3.1.4.0-315/storm/contrib/storm-autocreds/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/3.1.4.0-315/storm/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/3.1.4.0-315/storm/contrib/storm-autocreds/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/3.1.4.0-315/storm/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/grid/0/hdp/3.1.4.0-315/storm/contrib/storm-autocreds/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Running: /usr/jdk64/jdk1.8.0_112/bin/java -Ddaemon.name= -Dstorm.options= 
-Dstorm.home=/grid/0/hdp/3.1.4.0-315/storm -Dstorm.log.dir=/var/log/storm 
-Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= 
-cp 
/grid/0/hdp/3.1.4.0-315/storm/*:/grid/0/hdp/3.1.4.0-315/storm/lib/*:/grid/0/hdp/3.1.4.0-315/storm/extlib/*:/usr/hdp/current/storm-supervisor/external/storm-autocreds/*:/tmp/222f7eaae5c811e9937dfa163ea52b42.jar:/home/metron/.storm:/grid/0/hdp/3.1.4.0-315/storm/bin:/usr/hcp/1.9.2.0-94/metron/lib/stellar-common-0.7.1.1.9.2.0-94-uber.jar
 -Dstorm.jar=/tmp/222f7eaae5c811e9937dfa163ea52b42.jar 

[jira] [Updated] (METRON-2271) Reorganize Travis Build

2019-10-01 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2271:
---
Description: 
The way in which we organize the Travis builds ensures that each job will build 
all of Metron, each time.  Although we attempt to cache the local Maven 
repository to avoid this, in practice all jobs start roughly in parallel and so 
this cache is never used.

We are increasingly hitting the 50 minute time limit on our integration test 
job which causes the Travis build to fail.  By reorganizing the build, so that 
each job only builds the modules that it needs, we should be able to save some 
time and avoid breaching this time limit.
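
For example, a job that only needs the Solr modules could build just those plus 
their upstream dependencies (the module path shown is illustrative):
{code}
# -pl selects the module, -am ("also make") builds whatever it depends on
mvn -q install -DskipTests -pl metron-platform/metron-solr -am
{code}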

 

> Reorganize Travis Build
> ---
>
> Key: METRON-2271
> URL: https://issues.apache.org/jira/browse/METRON-2271
> Project: Metron
>  Issue Type: Improvement
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>
> The way in which we organize the Travis builds ensures that each job will 
> build all of Metron, each time.  Although we attempt to cache the local Maven 
> repository to avoid this, in practice all jobs start roughly in parallel and 
> so this cache is never used.
> We are increasingly hitting the 50 minute time limit on our integration test 
> job which causes the Travis build to fail.  By reorganizing the build, so 
> that each job only builds the modules that it needs, we should be able to 
> save some time and avoid breaching this time limit.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (METRON-2271) Reorganize Travis Build

2019-10-01 Thread Nick Allen (Jira)
Nick Allen created METRON-2271:
--

 Summary: Reorganize Travis Build
 Key: METRON-2271
 URL: https://issues.apache.org/jira/browse/METRON-2271
 Project: Metron
  Issue Type: Improvement
Reporter: Nick Allen
Assignee: Nick Allen






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (METRON-2264) Upgrade metron-hbase-client to HBase 2.0.2

2019-09-24 Thread Nick Allen (Jira)


[ 
https://issues.apache.org/jira/browse/METRON-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936840#comment-16936840
 ] 

Nick Allen commented on METRON-2264:


See 
https://github.com/apache/metron/blob/92220ced649b14d90fd7bc27b66a3d2eca651475/metron-platform/metron-hbase-client/pom.xml#L39-L49

> Upgrade metron-hbase-client to HBase 2.0.2
> --
>
> Key: METRON-2264
> URL: https://issues.apache.org/jira/browse/METRON-2264
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>
> The metron-hbase-client has some special handling for pulling in a shaded 
> HBase client library.  This pulls in a different version (1.1.2) than the 
> global HBase version to work around a known HBase bug.  After the upgrade to 
> 2.0.2, this workaround is no longer needed and can be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (METRON-2264) Upgrade metron-hbase-client to HBase 2.0.2

2019-09-24 Thread Nick Allen (Jira)
Nick Allen created METRON-2264:
--

 Summary: Upgrade metron-hbase-client to HBase 2.0.2
 Key: METRON-2264
 URL: https://issues.apache.org/jira/browse/METRON-2264
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


The metron-hbase-client has some special handling for pulling in a shaded HBase 
client library.  This pulls in a different version (1.1.2) than the global 
HBase version to work around a known HBase bug.  After the upgrade to 2.0.2, 
this workaround is no longer needed and can be removed.
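
For reference, a sketch of the kind of pinned shaded-client dependency being 
removed (the exact coordinates live in the metron-hbase-client pom linked from 
the comment earlier in this thread):
{code:xml}
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-shaded-client</artifactId>
  <version>1.1.2</version>
</dependency>
{code}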



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (METRON-2262) Upgrade to Curator 4.2.0

2019-09-19 Thread Nick Allen (Jira)
Nick Allen created METRON-2262:
--

 Summary: Upgrade to Curator 4.2.0
 Key: METRON-2262
 URL: https://issues.apache.org/jira/browse/METRON-2262
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen


Upgrade to Curator 4.2.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (METRON-2261) Isolate Curator Dependencies

2019-09-19 Thread Nick Allen (Jira)
Nick Allen created METRON-2261:
--

 Summary: Isolate Curator Dependencies
 Key: METRON-2261
 URL: https://issues.apache.org/jira/browse/METRON-2261
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


As part of the HDP 3.1 upgrade, we need to upgrade to Curator 4.x.  There was a 
[discuss 
thread|https://lists.apache.org/thread.html/265a76f6071229af662135b92df6db652d8ee2b17dbec8af7e8bf1ba@%3Cdev.metron.apache.org%3E] 
covering the need for this.

Currently, Curator is being pulled in as a transitive of Hadoop, HBase and 
Storm.  At the current versions this does not cause a problem.  But when 
upgrading Hadoop, different conflicting versions of Curator will get pulled in. 
 We need to ensure that only a single version of Curator is pulled in after 
upgrading Hadoop.  This change will maintain the current versions in use 
including Curator 2.7.1 and Zookeeper 3.4.6.
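
A sketch of how a single Curator version can be enforced from the root POM 
while Hadoop, HBase, and Storm pull it in transitively (the artifact list is 
abbreviated; curator-recipes is shown as one example):
{code:xml}
<dependencyManagement>
  <dependencies>
    <!-- pin every Curator artifact to one version, overriding transitives -->
    <dependency>
      <groupId>org.apache.curator</groupId>
      <artifactId>curator-framework</artifactId>
      <version>2.7.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.curator</groupId>
      <artifactId>curator-recipes</artifactId>
      <version>2.7.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}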

In upgrading to Curator 4.x, there is a breaking change that causes the 
MaasIntegrationTest to fail.  The fix for that, plus the actual upgrade to 
Curator 4.x will be performed under a separate Jira to aid the review process.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (METRON-2254) Intermittent Test Failure in RestFunctionsIntegrationTest

2019-09-12 Thread Nick Allen (Jira)
Nick Allen created METRON-2254:
--

 Summary: Intermittent Test Failure in RestFunctionsIntegrationTest
 Key: METRON-2254
 URL: https://issues.apache.org/jira/browse/METRON-2254
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen
Assignee: Nick Allen


{code:java}
Tests run: 21, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.368 sec <<< 
FAILURE! - in 
org.apache.metron.stellar.dsl.functions.RestFunctionsIntegrationTest
restGetShouldTimeoutWithSuppliedTimeout(org.apache.metron.stellar.dsl.functions.RestFunctionsIntegrationTest)
  Time elapsed: 0.349 sec  <<< FAILURE!
java.lang.AssertionError: expected null, but was:<{get=success}>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotNull(Assert.java:755)
at org.junit.Assert.assertNull(Assert.java:737)
at org.junit.Assert.assertNull(Assert.java:747)
at 
org.apache.metron.stellar.dsl.functions.RestFunctionsIntegrationTest.restGetShouldTimeoutWithSuppliedTimeout(RestFunctionsIntegrationTest.java:279)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.mockserver.junit.MockServerRule$1.evaluate(MockServerRule.java:107)
at org.mockserver.junit.ProxyRule$1.evaluate(ProxyRule.java:102)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


Results :

Failed tests: 
  RestFunctionsIntegrationTest.restGetShouldTimeoutWithSuppliedTimeout:279 
expected null, but was:<{get=success}>

{code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (METRON-2252) PcapTopologyIntegrationTest Intermittent Failures

2019-09-11 Thread Nick Allen (Jira)
Nick Allen created METRON-2252:
--

 Summary: PcapTopologyIntegrationTest Intermittent Failures
 Key: METRON-2252
 URL: https://issues.apache.org/jira/browse/METRON-2252
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


On the feature/METRON-2088-support-hdp-3.1 feature branch, the 
PcapTopologyIntegrationTest is failing intermittently.
{code:java}
---
 T E S T S
---
Running org.apache.metron.pcap.integration.PcapTopologyIntegrationTest
Setting up test components
Formatting using clusterid: testClusterID
Sent pcap data: 20
Tearing down test infrastructure
Stopping runner
Done stopping runner
Clearing output directories
Finished
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 20.894 sec <<< 
FAILURE! - in org.apache.metron.pcap.integration.PcapTopologyIntegrationTest
org.apache.metron.pcap.integration.PcapTopologyIntegrationTest  Time elapsed: 
20.893 sec  <<< FAILURE!
java.lang.AssertionError: expected:<20> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.metron.pcap.integration.PcapTopologyIntegrationTest$4.send(PcapTopologyIntegrationTest.java:165)
at 
org.apache.metron.pcap.integration.PcapTopologyIntegrationTest.setupTopology(PcapTopologyIntegrationTest.java:269)
at 
org.apache.metron.pcap.integration.PcapTopologyIntegrationTest.setupAll(PcapTopologyIntegrationTest.java:150)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)


Results :

Failed tests: 
  PcapTopologyIntegrationTest.setupAll:150->setupTopology:269 expected:<20> but 
was:<1>



Tests run: 1, Failures: 1, Errors: 0, Skipped: 0

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Metron . SUCCESS [  0.648 s]
[INFO] metron-stellar . SUCCESS [  0.198 s]
[INFO] stellar-common . SUCCESS [ 17.301 s]
[INFO] metron-analytics ... SUCCESS [  0.071 s]
[INFO] metron-maas-common . SUCCESS [  7.721 s]
[INFO] metron-platform  SUCCESS [  0.050 s]
[INFO] metron-zookeeper ... SUCCESS [  0.046 s]
[INFO] metron-test-utilities .. SUCCESS [  0.516 s]
[INFO] metron-integration-test  SUCCESS [  0.401 s]
[INFO] metron-maas-service  SUCCESS [01:01 min]
[INFO] metron-common .. SUCCESS [ 56.376 s]
[INFO] metron-statistics .. SUCCESS [  2.104 s]
[INFO] metron-common-streaming  SUCCESS [  0.013 s]
[INFO] metron-common-storm  SUCCESS [  0.049 s]
[INFO] metron-hbase ... SUCCESS [  0.014 s]
[INFO] metron-hbase-common  SUCCESS [  3.544 s]
[INFO] metron-enrichment .. SUCCESS [  0.079 s]
[INFO] 

[jira] [Updated] (METRON-2188) Upgrade to HBase 2.0.2

2019-09-10 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2188:
---
Description: Upgrade Metron to function with HBase 2.0.2.  (was: Upgrade 
the enrichment coprocessor to support HBase 2.0.2)

> Upgrade to HBase 2.0.2
> --
>
> Key: METRON-2188
> URL: https://issues.apache.org/jira/browse/METRON-2188
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>
> Upgrade Metron to function with HBase 2.0.2.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (METRON-2188) Upgrade to HBase 2.0.2

2019-09-10 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2188:
---
Summary: Upgrade to HBase 2.0.2  (was: Upgrade HBase Coprocessor for HBase 
2.0.2)

> Upgrade to HBase 2.0.2
> --
>
> Key: METRON-2188
> URL: https://issues.apache.org/jira/browse/METRON-2188
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>
> Upgrade the enrichment coprocessor to support HBase 2.0.2



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (METRON-2248) Merge Master into Feature Branch

2019-09-05 Thread Nick Allen (Jira)
Nick Allen created METRON-2248:
--

 Summary: Merge Master into Feature Branch
 Key: METRON-2248
 URL: https://issues.apache.org/jira/browse/METRON-2248
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


Merge the latest from master into the feature branch.  The most significant 
changes impacting the feature branch include METRON-2217 and METRON-2149.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (METRON-2232) Upgrade to Hadoop 3.1.1

2019-09-04 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2232:
---
Summary: Upgrade to Hadoop 3.1.1  (was: Upgrade Hadoop)

> Upgrade to Hadoop 3.1.1
> ---
>
> Key: METRON-2232
> URL: https://issues.apache.org/jira/browse/METRON-2232
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Michael Miklavcic
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (METRON-2241) Profiler Integration Test Fails After Storm 1.2.1 Upgrade

2019-08-29 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2241:
---
Description: 
The ProfilerIntegrationTest fails after the upgrade to Storm 1.2.1. This is the 
only integration test that starts/stops multiple topologies during testing.
{code:java}
---
 T E S T S
---
Running org.apache.metron.profiler.storm.integration.ProfilerIntegrationTest
2019-08-29 16:35:49 ERROR util:0 - Async loop died!
java.lang.RuntimeException: java.lang.IllegalStateException: This consumer has 
already been closed.
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:522)
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:487)
at 
org.apache.storm.utils.DisruptorQueue.consumeBatch(DisruptorQueue.java:477)
at org.apache.storm.disruptor$consume_batch.invoke(disruptor.clj:70)
at 
org.apache.storm.daemon.executor$fn__4975$fn__4990$fn__5021.invoke(executor.clj:634)
at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: This consumer has already been 
closed.
at 
org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2202)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1984)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1963)
at 
org.apache.storm.kafka.spout.metrics.KafkaOffsetMetric.getValueAndReset(KafkaOffsetMetric.java:79)
at 
org.apache.storm.daemon.executor$metrics_tick$fn__4899.invoke(executor.clj:345)
at clojure.core$map$fn__4553.invoke(core.clj:2622)
at clojure.lang.LazySeq.sval(LazySeq.java:40)
at clojure.lang.LazySeq.seq(LazySeq.java:49)
at clojure.lang.RT.seq(RT.java:507)
at clojure.core$seq__4128.invoke(core.clj:137)
at clojure.core$filter$fn__4580.invoke(core.clj:2679)
at clojure.lang.LazySeq.sval(LazySeq.java:40)
at clojure.lang.LazySeq.seq(LazySeq.java:49)
at clojure.lang.Cons.next(Cons.java:39)
at clojure.lang.RT.next(RT.java:674)
at clojure.core$next__4112.invoke(core.clj:64)
at clojure.core.protocols$fn__6523.invoke(protocols.clj:170)
at 
clojure.core.protocols$fn__6478$G__6473__6487.invoke(protocols.clj:19)
at clojure.core.protocols$seq_reduce.invoke(protocols.clj:31)
at clojure.core.protocols$fn__6506.invoke(protocols.clj:101)
at 
clojure.core.protocols$fn__6452$G__6447__6465.invoke(protocols.clj:13)
at clojure.core$reduce.invoke(core.clj:6519)
at clojure.core$into.invoke(core.clj:6600)
at 
org.apache.storm.daemon.executor$metrics_tick.invoke(executor.clj:349)
at 
org.apache.storm.daemon.executor$fn__4975$tuple_action_fn__4981.invoke(executor.clj:522)
at 
org.apache.storm.daemon.executor$mk_task_receiver$fn__4964.invoke(executor.clj:471)
at 
org.apache.storm.disruptor$clojure_handler$reify__4475.onEvent(disruptor.clj:41)
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:509)
... 7 more
2019-08-29 16:35:49 ERROR executor:0 - 
java.lang.RuntimeException: java.lang.IllegalStateException: This consumer has 
already been closed.
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:522)
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:487)
at 
org.apache.storm.utils.DisruptorQueue.consumeBatch(DisruptorQueue.java:477)
at org.apache.storm.disruptor$consume_batch.invoke(disruptor.clj:70)
at 
org.apache.storm.daemon.executor$fn__4975$fn__4990$fn__5021.invoke(executor.clj:634)
at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: This consumer has already been 
closed.
at 
org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2202)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1984)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1963)
at 
org.apache.storm.kafka.spout.metrics.KafkaOffsetMetric.getValueAndReset(KafkaOffsetMetric.java:79)
at 
org.apache.storm.daemon.executor$metrics_tick$fn__4899.invoke(executor.clj:345)
at clojure.core$map$fn__4553.invoke(core.clj:2622)
at clojure.lang.LazySeq.sval(LazySeq.java:40)
at 

[jira] [Created] (METRON-2241) Profiler Integration Test Fails After Storm 1.2.1 Upgrade

2019-08-29 Thread Nick Allen (Jira)
Nick Allen created METRON-2241:
--

 Summary: Profiler Integration Test Fails After Storm 1.2.1 Upgrade
 Key: METRON-2241
 URL: https://issues.apache.org/jira/browse/METRON-2241
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


The ProfilerIntegrationTest fails after the upgrade to Storm 1.2.1. This is the 
only integration test that starts/stops multiple topologies during testing.
{code:java}
---
 T E S T S
---
Running org.apache.metron.profiler.storm.integration.ProfilerIntegrationTest
2019-08-29 16:35:49 ERROR util:0 - Async loop died!
java.lang.RuntimeException: java.lang.IllegalStateException: This consumer has 
already been closed.
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:522)
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:487)
at 
org.apache.storm.utils.DisruptorQueue.consumeBatch(DisruptorQueue.java:477)
at org.apache.storm.disruptor$consume_batch.invoke(disruptor.clj:70)
at 
org.apache.storm.daemon.executor$fn__4975$fn__4990$fn__5021.invoke(executor.clj:634)
at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: This consumer has already been 
closed.
at 
org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2202)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1984)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1963)
at 
org.apache.storm.kafka.spout.metrics.KafkaOffsetMetric.getValueAndReset(KafkaOffsetMetric.java:79)
at 
org.apache.storm.daemon.executor$metrics_tick$fn__4899.invoke(executor.clj:345)
at clojure.core$map$fn__4553.invoke(core.clj:2622)
at clojure.lang.LazySeq.sval(LazySeq.java:40)
at clojure.lang.LazySeq.seq(LazySeq.java:49)
at clojure.lang.RT.seq(RT.java:507)
at clojure.core$seq__4128.invoke(core.clj:137)
at clojure.core$filter$fn__4580.invoke(core.clj:2679)
at clojure.lang.LazySeq.sval(LazySeq.java:40)
at clojure.lang.LazySeq.seq(LazySeq.java:49)
at clojure.lang.Cons.next(Cons.java:39)
at clojure.lang.RT.next(RT.java:674)
at clojure.core$next__4112.invoke(core.clj:64)
at clojure.core.protocols$fn__6523.invoke(protocols.clj:170)
at 
clojure.core.protocols$fn__6478$G__6473__6487.invoke(protocols.clj:19)
at clojure.core.protocols$seq_reduce.invoke(protocols.clj:31)
at clojure.core.protocols$fn__6506.invoke(protocols.clj:101)
at 
clojure.core.protocols$fn__6452$G__6447__6465.invoke(protocols.clj:13)
at clojure.core$reduce.invoke(core.clj:6519)
at clojure.core$into.invoke(core.clj:6600)
at 
org.apache.storm.daemon.executor$metrics_tick.invoke(executor.clj:349)
at 
org.apache.storm.daemon.executor$fn__4975$tuple_action_fn__4981.invoke(executor.clj:522)
at 
org.apache.storm.daemon.executor$mk_task_receiver$fn__4964.invoke(executor.clj:471)
at 
org.apache.storm.disruptor$clojure_handler$reify__4475.onEvent(disruptor.clj:41)
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:509)
... 7 more
2019-08-29 16:35:49 ERROR executor:0 - 
java.lang.RuntimeException: java.lang.IllegalStateException: This consumer has 
already been closed.
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:522)
at 
org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:487)
at 
org.apache.storm.utils.DisruptorQueue.consumeBatch(DisruptorQueue.java:477)
at org.apache.storm.disruptor$consume_batch.invoke(disruptor.clj:70)
at 
org.apache.storm.daemon.executor$fn__4975$fn__4990$fn__5021.invoke(executor.clj:634)
at org.apache.storm.util$async_loop$fn__557.invoke(util.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: This consumer has already been 
closed.
at 
org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2202)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1984)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1963)
at 
org.apache.storm.kafka.spout.metrics.KafkaOffsetMetric.getValueAndReset(KafkaOffsetMetric.java:79)
at 

[jira] [Assigned] (METRON-2231) Revert METRON-2175, METRON-2176, METRON-2177 in HDP 3.1 upgrade feature branch

2019-08-27 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen reassigned METRON-2231:
--

Assignee: Nick Allen

> Revert METRON-2175, METRON-2176, METRON-2177 in HDP 3.1 upgrade feature branch
> --
>
> Key: METRON-2231
> URL: https://issues.apache.org/jira/browse/METRON-2231
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Michael Miklavcic
>Assignee: Nick Allen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (METRON-2169) Upgrade to Kafka 2.0.0 and Storm 1.2.1

2019-08-23 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2169:
---
Summary: Upgrade to Kafka 2.0.0 and Storm 1.2.1  (was: Upgrade Kafka/Storm)

> Upgrade to Kafka 2.0.0 and Storm 1.2.1
> --
>
> Key: METRON-2169
> URL: https://issues.apache.org/jira/browse/METRON-2169
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Ryan Merriman
>Assignee: Ryan Merriman
>Priority: Major
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> To support HDP 3.1, we need to upgrade Kafka to version 2.0.0 and Storm to 
> version 1.2.1.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (METRON-2225) Upgrade to Solr 7.4.0

2019-08-23 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2225:
---
Summary: Upgrade to Solr 7.4.0  (was: Upgrade Solr)

> Upgrade to Solr 7.4.0
> -
>
> Key: METRON-2225
> URL: https://issues.apache.org/jira/browse/METRON-2225
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Ryan Merriman
>Assignee: Ryan Merriman
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We need to upgrade Solr from 6.6.2 to 7.4.0



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (METRON-2224) Upgrade to Zeppelin 0.8.0

2019-08-23 Thread Nick Allen (Jira)


 [ 
https://issues.apache.org/jira/browse/METRON-2224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2224:
---
Summary: Upgrade to Zeppelin 0.8.0  (was: Upgrade Zeppelin)

> Upgrade to Zeppelin 0.8.0
> -
>
> Key: METRON-2224
> URL: https://issues.apache.org/jira/browse/METRON-2224
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Ryan Merriman
>Assignee: Ryan Merriman
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We need to upgrade the Zeppelin dependency from 0.7.3 to 0.8.0.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Deleted] (METRON-2193) Upgrade Enrichments for HBase 2.0.2

2019-08-16 Thread Nick Allen (JIRA)


 [ 
https://issues.apache.org/jira/browse/METRON-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen deleted METRON-2193:
---


> Upgrade Enrichments for HBase 2.0.2
> ---
>
> Key: METRON-2193
> URL: https://issues.apache.org/jira/browse/METRON-2193
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Upgrade all Enrichment related components to work with HBase 2.0.2.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (METRON-2220) Upgrade Streaming Enrichments for HBase 2.0.2

2019-08-16 Thread Nick Allen (JIRA)
Nick Allen created METRON-2220:
--

 Summary: Upgrade Streaming Enrichments for HBase 2.0.2
 Key: METRON-2220
 URL: https://issues.apache.org/jira/browse/METRON-2220
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


Upgrade Streaming Enrichments to work with HBase 2.0.2



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (METRON-2219) Remove Legacy HBase Client

2019-08-16 Thread Nick Allen (JIRA)
Nick Allen created METRON-2219:
--

 Summary: Remove Legacy HBase Client
 Key: METRON-2219
 URL: https://issues.apache.org/jira/browse/METRON-2219
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


This change will remove the Legacy HBase Client, which was left in place to 
allow the code base to continue to function as components were upgraded to 
HBase 2.0.2.

This also removes the use of the Legacy HBase Client from the following two 
components, which are used by the Enrichment Coprocessor.
 * SensorEnrichmentConfigController
 * HBaseCacheWriter



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (METRON-2218) Upgrade Data Management for HBase 2.0.2

2019-08-16 Thread Nick Allen (JIRA)
Nick Allen created METRON-2218:
--

 Summary: Upgrade Data Management for HBase 2.0.2
 Key: METRON-2218
 URL: https://issues.apache.org/jira/browse/METRON-2218
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


Upgrade the data management tools for HBase 2.0.2



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (METRON-2216) Upgrade Core Enrichments for HBase 2.0.2

2019-08-14 Thread Nick Allen (JIRA)
Nick Allen created METRON-2216:
--

 Summary: Upgrade Core Enrichments for HBase 2.0.2
 Key: METRON-2216
 URL: https://issues.apache.org/jira/browse/METRON-2216
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


This change upgrades the core Enrichment components to work with HBase 2.0.2.  
This includes the legacy HBase enrichment adapters and the enrichment Stellar 
functions.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (METRON-2204) PROFILE_VERBOSE Should React to Global Config Changes

2019-08-06 Thread Nick Allen (JIRA)
Nick Allen created METRON-2204:
--

 Summary: PROFILE_VERBOSE Should React to Global Config Changes
 Key: METRON-2204
 URL: https://issues.apache.org/jira/browse/METRON-2204
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen


Within the same Stellar session, the PROFILE_GET function will alter its 
behavior based on changes made to the global configuration during that session.

For example, open a REPL session and execute the PROFILE_GET function.  Change a 
value in the global configuration, like the period duration or salt divisor.  
Executing PROFILE_GET again will use the altered value.

The PROFILE_VERBOSE function does not exhibit the same behavior.  The 
{{PROFILE_VERBOSE}} function has lazy initialization, but will not respond to 
changes in the global config like {{PROFILE_GET}} does.  The Stellar session 
needs to be restarted for PROFILE_VERBOSE to respond to these changes.

The PROFILE_VERBOSE function should exhibit the same behavior as PROFILE_GET in 
this regard.

 

See [this conversation for more 
details|https://github.com/apache/metron/pull/1458#discussion_r311098015].
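For illustration, a minimal, hypothetical Java sketch (not Metron's actual 
classes) of the behavioral difference: resolving a global config value on every 
call picks up later changes, while a value captured during lazy initialization 
stays stale until the session is restarted.  The 
{{profiler.client.period.duration}} property is used only as an example key.
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConfigResolutionSketch {
  static Map<String, Object> globalConfig = new ConcurrentHashMap<>();

  // PROFILE_GET-style: read the global config on every invocation.
  static long periodDurationPerCall() {
    return (Long) globalConfig.getOrDefault("profiler.client.period.duration", 15L);
  }

  // PROFILE_VERBOSE-style: value captured once at first use, never refreshed.
  static Long cachedPeriodDuration;
  static long periodDurationCached() {
    if (cachedPeriodDuration == null) {
      cachedPeriodDuration =
          (Long) globalConfig.getOrDefault("profiler.client.period.duration", 15L);
    }
    return cachedPeriodDuration;
  }

  public static void main(String[] args) {
    globalConfig.put("profiler.client.period.duration", 15L);
    System.out.println(periodDurationPerCall());  // 15
    System.out.println(periodDurationCached());   // 15
    globalConfig.put("profiler.client.period.duration", 30L);
    System.out.println(periodDurationPerCall());  // 30 - reacts to the change
    System.out.println(periodDurationCached());   // 15 - stale until restart
  }
}
{code}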



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (METRON-2193) Upgrade Enrichments for HBase 2.0.2

2019-07-25 Thread Nick Allen (JIRA)
Nick Allen created METRON-2193:
--

 Summary: Upgrade Enrichments for HBase 2.0.2
 Key: METRON-2193
 URL: https://issues.apache.org/jira/browse/METRON-2193
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


Upgrade all Enrichment related components to work with HBase 2.0.2.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (METRON-2188) Upgrade HBase Coprocessor for HBase 2.0.2

2019-07-22 Thread Nick Allen (JIRA)
Nick Allen created METRON-2188:
--

 Summary: Upgrade HBase Coprocessor for HBase 2.0.2
 Key: METRON-2188
 URL: https://issues.apache.org/jira/browse/METRON-2188
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


Upgrade the enrichment coprocessor to support HBase 2.0.2



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (METRON-2177) Upgrade Profiler for HBase 2.0.2

2019-07-04 Thread Nick Allen (JIRA)
Nick Allen created METRON-2177:
--

 Summary: Upgrade Profiler for HBase 2.0.2
 Key: METRON-2177
 URL: https://issues.apache.org/jira/browse/METRON-2177
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


Upgrade the Profiler for HBase 2.0.2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (METRON-2175) Introduce HBase Connection Abstractions for HBase 2.0.2

2019-07-03 Thread Nick Allen (JIRA)


 [ 
https://issues.apache.org/jira/browse/METRON-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen reassigned METRON-2175:
--

Assignee: Nick Allen

> Introduce HBase Connection Abstractions for HBase 2.0.2
> ---
>
> Key: METRON-2175
> URL: https://issues.apache.org/jira/browse/METRON-2175
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The major change in migrating Metron to HBase 2.0.2 is that connections must 
> now be managed by the client.  Previously this was managed internally within 
> the HBase client.
> More specifically, previously you could instantiate a Table whenever you 
> liked and not worry about closing it.  Now you must first establish a 
> Connection and from that Connection, you can create a Table.  Both of those 
> need to be closed after use.  A Connection can be shared by multiple threads, 
> but a Table cannot be shared.
> Most of our existing abstractions were not designed to handle connection 
> management, specifically closing the Connection and Table resources after 
> use.  This is why the HBase upgrade is quite involved.
> This Jira will introduce the new abstractions that allow Metron to interact 
> with HBase 2.0.2, but not remove the existing ones.  I will then introduce 
> additional issues to start using these new abstractions in the Metron code 
> base.  The existing abstractions that do not work with HBase 2.0.2 will be 
> removed as a final step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (METRON-2176) Upgrade REST for HBase 2.0.2

2019-07-03 Thread Nick Allen (JIRA)
Nick Allen created METRON-2176:
--

 Summary: Upgrade REST for HBase 2.0.2
 Key: METRON-2176
 URL: https://issues.apache.org/jira/browse/METRON-2176
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


Upgrade `metron-interface/metron-rest` to work with HBase 2.0.2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (METRON-2175) Introduce HBase Connection Abstractions for HBase 2.0.2

2019-07-03 Thread Nick Allen (JIRA)
Nick Allen created METRON-2175:
--

 Summary: Introduce HBase Connection Abstractions for HBase 2.0.2
 Key: METRON-2175
 URL: https://issues.apache.org/jira/browse/METRON-2175
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen


The major change in migrating Metron to HBase 2.0.2 is that connections must 
now be managed by the client.  Previously this was managed internally within 
the HBase client.

More specifically, previously you could instantiate a Table whenever you liked 
and not worry about closing it.  Now you must first establish a Connection and 
from that Connection, you can create a Table.  Both of those need to be closed 
after use.  A Connection can be shared by multiple threads, but a Table cannot 
be shared.

Most of our existing abstractions were not designed to handle connection 
management, specifically closing the Connection and Table resources after use.  
This is why the HBase upgrade is quite involved.

This Jira will introduce the new abstractions that allow Metron to interact 
with HBase 2.0.2, but not remove the existing ones.  I will then introduce 
additional issues to start using these new abstractions in the Metron code 
base.  The existing abstractions that do not work with HBase 2.0.2 will be 
removed as a final step.
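For illustration, a minimal sketch of the HBase 2.x client pattern described 
above, using try-with-resources to close both resources (the table name and row 
key are placeholders):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBase2ClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration config = HBaseConfiguration.create();
    // The Connection is heavyweight and thread-safe; create it once and share it.
    try (Connection connection = ConnectionFactory.createConnection(config);
         // A Table is lightweight but NOT thread-safe; create it per use.
         Table table = connection.getTable(TableName.valueOf("some_table"))) {
      Result result = table.get(new Get(Bytes.toBytes("some_row_key")));
      System.out.println(result);
    } // both the Table and the Connection are closed here
  }
}
{code}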



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (METRON-2172) Solr Updates Not Tested in Integration Test

2019-07-02 Thread Nick Allen (JIRA)
Nick Allen created METRON-2172:
--

 Summary: Solr Updates Not Tested in Integration Test
 Key: METRON-2172
 URL: https://issues.apache.org/jira/browse/METRON-2172
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen
Assignee: Nick Allen


The `SolrUpdateIntegrationTest` is not testing that the `SolrDao` can update 
and retrieve values in a manner similar to what would occur in production.
h3. What?

This gap in the integration test is hiding a few existing bugs.
 # The timestamp is not being populated in the returned documents.
 # A NullPointerException occurs if a document does not contain a sensor type.
 # The comments are serialized and deserialized in multiple places and may be 
stored as either a Map or JSON string.

h3. Proof?

 If you alter the test to run against just a `SolrDao`, the integration test 
will fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (METRON-2168) Elasticsearch Updates Not Tested in Integration Test

2019-06-28 Thread Nick Allen (JIRA)


 [ 
https://issues.apache.org/jira/browse/METRON-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2168:
---
Description: 
The `ElasticsearchUpdateIntegrationTest` is not testing that the 
`ElasticsearchDao` can update and retrieve values in a manner similar to what 
would occur in production.
h3. What?

Within the Elasticsearch index, the test fails to define the 'guid' field as 
type 'keyword'; instead the type defaults to 'text'.  In a production setting 
this mistake would prevent any documents from being found by guid.  
Unfortunately, the test passes despite this.  The test needs to match the 
behavior of what a user would experience in production.
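For illustration, a minimal sketch (hypothetical class name; the Elasticsearch 
XContent builder API is assumed) of the mapping fragment the test setup should 
apply:
{code:java}
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class GuidMappingSketch {
  public static void main(String[] args) throws Exception {
    // Map 'guid' as an exact-match 'keyword' rather than the analyzed
    // 'text' default, so lookups by the full guid will match.
    XContentBuilder mapping = XContentFactory.jsonBuilder()
        .startObject()
          .startObject("properties")
            .startObject("guid").field("type", "keyword").endObject()
          .endObject()
        .endObject();
    // This builder would then be supplied to the index creation request
    // performed by the test setup.
  }
}
{code}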
h3. Why? 

These problems arise because of the way the test is set up.  Instead of 
directly testing an `ElasticsearchDao` as you might expect, this test runs 
against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
`HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
whichever index responds first.

With the current test setup, the underlying `ElasticsearchDao` will never 
retrieve the document that the test case expects.  In all cases where the test 
passes, the document is actually being returned from the `HBaseDao`, which is 
just interacting with a mock backend.  The test needs to actually test that we 
can update and retrieve documents from Elasticsearch.
h3. Proof?

If you alter the test to run against just an `ElasticsearchDao`, the test will 
fail as shown in the attached log file.

 

  was:
The `ElasticsearchUpdateIntegrationTest` is not testing that the 
`ElasticsearchDao` can update and retrieve values in a manner similar to what 
would occur in production.
h3. What?

Within the Elasticsearch index, the test fails to define the 'guid' field as 
type 'keyword'; instead the type defaults to 'text'.  In a production setting 
this mistake would prevent any documents from being found by guid.  
Unfortunately, the test passes despite this.  The test needs to match the 
behavior of what a user would experience in production.
h3. Why? 

These problems arise because of the way the test is set up.  Instead of 
directly testing an `ElasticsearchDao` as you might expect, this test runs 
against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
`HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
whichever index responds first.

With the current test setup, the underlying `ElasticsearchDao` will never 
retrieve the document that the test case expects.  In all cases where the test 
passes, the document is actually being returned from the `HBaseDao`, which is 
just interacting with a mock backend.  The test needs to actually test that we 
can update and retrieve documents from Elasticsearch.

Proof?

If you alter the test to run against just an `ElasticsearchDao`, the test will 
fail as shown in the attached log file.

 


> Elasticsearch Updates Not Tested in Integration Test
> 
>
> Key: METRON-2168
> URL: https://issues.apache.org/jira/browse/METRON-2168
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Attachments: test-failure.log
>
>
> The `ElasticsearchUpdateIntegrationTest` is not testing that the 
> `ElasticsearchDao` can update and retrieve values in a manner similar to what 
> would occur in production.
> h3. What?
> Within the Elasticsearch index, the test fails to define the 'guid' field as 
> type 'keyword'; instead the type defaults to 'text'.  In a production 
> setting this mistake would prevent any documents from being found by guid.  
> Unfortunately, the test passes despite this.  The test needs to match the 
> behavior of what a user would experience in production.
> h3. Why? 
> These problems arise because of the way the test is set up.  Instead of 
> directly testing an `ElasticsearchDao` as you might expect, this test runs 
> against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
> `HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
> whichever index responds first.
> With the current test setup, the underlying `ElasticsearchDao` will never 
> retrieve the document that the test case expects.  In all cases where the 
> test passes, the document is actually being returned from the `HBaseDao`, 
> which is just interacting with a mock backend.  The test needs to actually 
> test that we can update and retrieve documents from Elasticsearch.
> h3. Proof?
>  If you alter the test to run against just an `ElasticsearchDao`, the test 
> will fail as shown in the attached log file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (METRON-2168) Elasticsearch Updates Not Tested in Integration Test

2019-06-28 Thread Nick Allen (JIRA)


 [ 
https://issues.apache.org/jira/browse/METRON-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2168:
---
Description: 
The `ElasticsearchUpdateIntegrationTest` is not testing that the 
`ElasticsearchDao` can update and retrieve values in a manner similar to what 
would occur in production.

Within the Elasticsearch index, the test fails to define the 'guid' field as 
type 'keyword'; instead the type defaults to 'text'.  In a production setting 
this mistake would prevent any documents from being found by guid.  
Unfortunately, the test passes despite this.  The test needs to match the 
behavior of what a user would experience in production.
h3. Why? 

These problems arise because of the way the test is set up.  Instead of 
directly testing an `ElasticsearchDao` as you might expect, this test runs 
against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
`HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
whichever index responds first.

With the current test setup, the underlying `ElasticsearchDao` will never 
retrieve the document that the test case expects.  In all cases where the test 
passes, the document is actually being returned from the `HBaseDao`, which is 
just interacting with a mock backend.  The test needs to actually test that we 
can update and retrieve documents from Elasticsearch.

If you alter the test to run against just an `ElasticsearchDao`, the test will 
fail as shown in the attached log file.

 

  was:
The `ElasticsearchUpdateIntegrationTest` is not testing that the 
`ElasticsearchDao` can update and retrieve values in a manner similar to what 
would occur in production.

Within the Elasticsearch index, the test fails to define the 'guid' field as 
type 'keyword'; instead the type defaults to 'text'.  In a production setting 
this mistake would prevent any documents from being found by guid.  
Unfortunately, the test passes despite this.  The test needs to match the 
behavior of what a user would experience in production.
h3. Why? 

These problems arise because of the way the test is set up.  Instead of 
directly testing an `ElasticsearchDao` as you might expect, this test runs 
against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
`HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
whichever index responds first.

With the current test setup, the underlying `ElasticsearchDao` will never 
retrieve the document that the test case expects.  In all cases where the test 
passes, the document is actually being returned from the `HBaseDao`, which is 
just interacting with a mock backend.

The test needs to actually test that we can update and retrieve documents from 
Elasticsearch.

If you alter the test to run against just an `ElasticsearchDao`, the test will 
fail as shown in the attached log file.

 


> Elasticsearch Updates Not Tested in Integration Test
> 
>
> Key: METRON-2168
> URL: https://issues.apache.org/jira/browse/METRON-2168
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Attachments: test-failure.log
>
>
> The `ElasticsearchUpdateIntegrationTest` is not testing that the 
> `ElasticsearchDao` can update and retrieve values in a manner similar to what 
> would occur in production.
> Within the Elasticsearch index, the test fails to define the 'guid' field as 
> type 'keyword'; instead the type defaults to 'text'.  In a production 
> setting this mistake would prevent any documents from being found by guid.  
> Unfortunately, the test passes despite this.  The test needs to match the 
> behavior of what a user would experience in production.
> h3. Why? 
> These problems arise because of the way the test is set up.  Instead of 
> directly testing an `ElasticsearchDao` as you might expect, this test runs 
> against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
> `HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
> whichever index responds first.
> With the current test setup, the underlying `ElasticsearchDao` will never 
> retrieve the document that the test case expects.  In all cases where the 
> test passes, the document is actually being returned from the `HBaseDao`, 
> which is just interacting with a mock backend.  The test needs to actually 
> test that we can update and retrieve documents from Elasticsearch.
>  If you alter the test to run against just an `ElasticsearchDao`, the test 
> will fail as shown in the attached log file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (METRON-2168) Elasticsearch Updates Not Tested in Integration Test

2019-06-28 Thread Nick Allen (JIRA)


 [ 
https://issues.apache.org/jira/browse/METRON-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2168:
---
Description: 
The `ElasticsearchUpdateIntegrationTest` is not testing that the 
`ElasticsearchDao` can update and retrieve values in a manner similar to what 
would occur in production.
h3. What?

Within the Elasticsearch index, the test fails to define the 'guid' field as 
type 'keyword'; instead the type defaults to 'text'.  In a production setting 
this mistake would prevent any documents from being found by guid.  
Unfortunately, the test passes despite this.  The test needs to match the 
behavior of what a user would experience in production.
h3. Why? 

These problems arise because of the way the test is set up.  Instead of 
directly testing an `ElasticsearchDao` as you might expect, this test runs 
against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
`HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
whichever index responds first.

With the current test setup, the underlying `ElasticsearchDao` will never 
retrieve the document that the test case expects.  In all cases where the test 
passes, the document is actually being returned from the `HBaseDao`, which is 
just interacting with a mock backend.  The test needs to actually test that we 
can update and retrieve documents from Elasticsearch.

Proof?

If you alter the test to run against just an `ElasticsearchDao`, the test will 
fail as shown in the attached log file.

 

  was:
The `ElasticsearchUpdateIntegrationTest` is not testing that the 
`ElasticsearchDao` can update and retrieve values in a manner similar to what 
would occur in production.

Within the Elasticsearch index, the test fails to define the 'guid' field as 
type 'keyword'; instead the type defaults to 'text'.  In a production setting 
this mistake would prevent any documents from being found by guid.  
Unfortunately, the test passes despite this.  The test needs to match the 
behavior of what a user would experience in production.
h3. Why? 

These problems arise because of the way the test is set up.  Instead of 
directly testing an `ElasticsearchDao` as you might expect, this test runs 
against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
`HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
whichever index responds first.

With the current test setup, the underlying `ElasticsearchDao` will never 
retrieve the document that the test case expects.  In all cases where the test 
passes, the document is actually being returned from the `HBaseDao`, which is 
just interacting with a mock backend.  The test needs to actually test that we 
can update and retrieve documents from Elasticsearch.

If you alter the test to run against just an `ElasticsearchDao`, the test will 
fail as shown in the attached log file.

 


> Elasticsearch Updates Not Tested in Integration Test
> 
>
> Key: METRON-2168
> URL: https://issues.apache.org/jira/browse/METRON-2168
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Attachments: test-failure.log
>
>
> The `ElasticsearchUpdateIntegrationTest` is not testing that the 
> `ElasticsearchDao` can update and retrieve values in a manner similar to what 
> would occur in production.
> h3. What?
> Within the Elasticsearch index, the test fails to define the 'guid' field as 
> type 'keyword'; instead the type defaults to 'text'.  In a production 
> setting this mistake would prevent any documents from being found by guid.  
> Unfortunately, the test passes despite this.  The test needs to match the 
> behavior of what a user would experience in production.
> h3. Why? 
> These problems arise because of the way the test is set up.  Instead of 
> directly testing an `ElasticsearchDao` as you might expect, this test runs 
> against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
> `HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
> whichever index responds first.
> With the current test setup, the underlying `ElasticsearchDao` will never 
> retrieve the document that the test case expects.  In all cases where the 
> test passes, the document is actually being returned from the `HBaseDao`, 
> which is just interacting with a mock backend.  The test needs to actually 
> test that we can update and retrieve documents from Elasticsearch.
>  
> Proof?
>  If you alter the test to run against just an `ElasticsearchDao`, the test 
> will fail as shown in the attached log file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (METRON-2168) Elasticsearch Updates Not Tested in Integration Test

2019-06-28 Thread Nick Allen (JIRA)


 [ 
https://issues.apache.org/jira/browse/METRON-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2168:
---
Description: 
The `ElasticsearchUpdateIntegrationTest` is not testing that the 
`ElasticsearchDao` can update and retrieve values in a manner similar to what 
would occur in production.

Within the Elasticsearch index, the test fails to define the 'guid' field as 
type 'keyword'; instead the type defaults to 'text'.  In a production setting 
this mistake would prevent any documents from being found by guid.  
Unfortunately, the test passes despite this.  The test needs to match the 
behavior of what a user would experience in production.
h3. Why? 

These problems arise because of the way the test is set up.  Instead of 
directly testing an `ElasticsearchDao` as you might expect, this test runs 
against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
`HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
whichever index responds first.

With the current test setup, the underlying `ElasticsearchDao` will never 
retrieve the document that the test case expects.  In all cases where the test 
passes, the document is actually being returned from the `HBaseDao`, which is 
just interacting with a mock backend.

The test needs to actually test that we can update and retrieve documents from 
Elasticsearch.

If you alter the test to run against just an `ElasticsearchDao`, the test will 
fail as shown in the attached log file.

 

  was:
The `ElasticsearchUpdateIntegrationTest` is not testing that the 
`ElasticsearchDao` can update and retrieve values in a manner similar to what 
would occur in production.

Within the Elasticsearch index, the test fails to define the 'guid' field as 
type 'keyword'; instead the type defaults to 'text'.  In a production setting 
this mistake would prevent any documents from being found by guid.  
Unfortunately, the test passes despite this.  The test needs to match the 
behavior of what a user would experience in production.
h3. Why?

These problems arise because of the way the test is set up.  Instead of 
directly testing an `ElasticsearchDao` as you might expect, this test runs 
against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
`HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
whichever index responds first.

With the current test setup, the underlying ElasticsearchDao will never 
retrieve the document that the test case expects.  In all cases where the test 
passes, the document is actually being returned from the HBaseDao, which is 
just interacting with a mock backend.

The test needs to actually test that we can update and retrieve documents from 
Elasticsearch.

If you alter the test to run against just an `ElasticsearchDao`, the test will 
fail as shown in the attached log file.

 


> Elasticsearch Updates Not Tested in Integration Test
> 
>
> Key: METRON-2168
> URL: https://issues.apache.org/jira/browse/METRON-2168
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
> Attachments: test-failure.log
>
>
> The `ElasticsearchUpdateIntegrationTest` is not testing that the 
> `ElasticsearchDao` can update and retrieve values in a manner similar to what 
> would occur in production.
> Within the Elasticsearch index, the test fails to define the 'guid' field as 
> type 'keyword'; instead the type defaults to 'text'.  In a production 
> setting this mistake would prevent any documents from being found by guid.  
> Unfortunately, the test passes despite this.  The test needs to match the 
> behavior of what a user would experience in production.
> h3. Why? 
> These problems arise because of the way the test is set up.  Instead of 
> directly testing an `ElasticsearchDao` as you might expect, this test runs 
> against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
> `HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
> whichever index responds first.
> With the current test setup, the underlying `ElasticsearchDao` will never 
> retrieve the document that the test case expects.  In all cases where the 
> test passes, the document is actually being returned from the `HBaseDao`, 
> which is just interacting with a mock backend.
> The test needs to actually test that we can update and retrieve documents 
> from Elasticsearch.
>  If you alter the test to run against just an `ElasticsearchDao`, the test 
> will fail as shown in the attached log file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (METRON-2168) Elasticsearch Updates Not Tested in Integration Test

2019-06-28 Thread Nick Allen (JIRA)
Nick Allen created METRON-2168:
--

 Summary: Elasticsearch Updates Not Tested in Integration Test
 Key: METRON-2168
 URL: https://issues.apache.org/jira/browse/METRON-2168
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen
Assignee: Nick Allen
 Attachments: test-failure.log

The `ElasticsearchUpdateIntegrationTest` is not testing that the 
`ElasticsearchDao` can update and retrieve values in a manner similar to what 
would occur in production.

Within the Elasticsearch index, the test fails to define the 'guid' field as 
type 'keyword'; instead the type defaults to 'text'.  In a production setting 
this mistake would prevent any documents from being found by guid.  
Unfortunately, the test passes despite this.  The test needs to match the 
behavior of what a user would experience in production.
h3. Why?

These problems arise because of the way the test is set up.  Instead of 
directly testing an `ElasticsearchDao` as you might expect, this test runs 
against a `MultiIndexDao` initialized with both an `ElasticsearchDao` and an 
`HBaseDao`.  On retrievals the `MultiIndexDao` will return the document from 
whichever index responds first.

With the current test setup, the underlying ElasticsearchDao will never 
retrieve the document that the test case expects.  In all cases where the test 
passes, the document is actually being returned from the HBaseDao, which is 
just interacting with a mock backend.

The test needs to actually test that we can update and retrieve documents from 
Elasticsearch.

If you alter the test to run against just an `ElasticsearchDao`, the test will 
fail as shown in the attached log file.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (METRON-2166) FileFilterUtilTest.test_getPaths_leftEdge:116 expected:<1> but was:<2>

2019-06-27 Thread Nick Allen (JIRA)
Nick Allen created METRON-2166:
--

 Summary: FileFilterUtilTest.test_getPaths_leftEdge:116 
expected:<1> but was:<2>
 Key: METRON-2166
 URL: https://issues.apache.org/jira/browse/METRON-2166
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen


This seems to be an intermittent test failure.  For example, see 
[https://api.travis-ci.org/v3/job/551324929/log.txt]
{code:java}
---
 T E S T S
---
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.298 sec - in 
org.apache.metron.enrichment.converter.EnrichmentConverterTest
Running org.apache.metron.enrichment.stellar.ObjectGetTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.95 sec - in 
org.apache.metron.pcap.PcapHelperTest
Running org.apache.metron.pcap.mr.PcapJobTest
Running org.apache.metron.profiler.client.HBaseProfilerClientTest
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.33 sec - in 
org.apache.metron.pcap.mr.PcapJobTest
Running org.apache.metron.pcap.mr.OutputDirFormatterTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec - in 
org.apache.metron.pcap.mr.OutputDirFormatterTest
Running org.apache.metron.pcap.mr.FileFilterUtilTest
Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.114 sec <<< 
FAILURE! - in org.apache.metron.pcap.mr.FileFilterUtilTest
test_getPaths_leftEdge(org.apache.metron.pcap.mr.FileFilterUtilTest)  Time 
elapsed: 0.026 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.metron.pcap.mr.FileFilterUtilTest.test_getPaths_leftEdge(FileFilterUtilTest.java:116)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

Running org.apache.metron.pcap.PcapPackerComparatorTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.112 sec - in 
org.apache.metron.pcap.PcapPackerComparatorTest
Running org.apache.metron.pcap.pattern.ByteArrayMatchingUtilTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.819 sec - in 
org.apache.metron.profiler.client.HBaseProfilerClientTest
Running org.apache.metron.profiler.client.stellar.IntervalPredicateTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.024 sec - in 
org.apache.metron.profiler.client.stellar.IntervalPredicateTest
Running org.apache.metron.profiler.client.stellar.WindowLookbackTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.367 sec - in 

[jira] [Created] (METRON-2164) Remove the Split-Join Enrichment Topology

2019-06-25 Thread Nick Allen (JIRA)
Nick Allen created METRON-2164:
--

 Summary: Remove the Split-Join Enrichment Topology
 Key: METRON-2164
 URL: https://issues.apache.org/jira/browse/METRON-2164
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen
Assignee: Nick Allen


The Split-Join Enrichment topology has been deprecated since November 2018. 
Metron defaults to using the Unified Enrichment topology. 

Here is the original discuss thread on deprecation.

[https://lists.apache.org/thread.html/6cfc883de28a5cb41f26d0523522d4b93272ac954e5713c80a35675e@%3Cdev.metron.apache.org%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (METRON-2162) Ambari client exception occurred: No JSON object could be decoded

2019-06-25 Thread Nick Allen (JIRA)


 [ 
https://issues.apache.org/jira/browse/METRON-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2162:
---
Description: 
When attempting to deploy Metron in Ambari 2.7.3 with HDP 3.1 the following 
error occurs when deploying Metron on CentOS 7.
{code:java}
$ cd metron-deployment/development/centos7/
$ vagrant up

...

TASK [ambari_config : Deploy cluster with Ambari; http://node1:8080] ***

The full traceback is:
File "/tmp/ansible_LTBkrV/ansible_module_ambari_cluster_state.py", line 211, in 
main
if not blueprint_exists(ambari_url, username, password, blueprint_name):
File "/tmp/ansible_LTBkrV/ansible_module_ambari_cluster_state.py", line 334, in 
blueprint_exists
blueprints = get_blueprints(ambari_url, user, password)
File "/tmp/ansible_LTBkrV/ansible_module_ambari_cluster_state.py", line 315, in 
get_blueprints
services = json.loads(r.content)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
raise ValueError("No JSON object could be decoded")

fatal: [node1]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"blueprint_name": "metron_blueprint",
"blueprint_var": {
"groups": [
{
"cardinality": 1,
"components": [
{
"name": "NAMENODE"
},
{
"name": "SECONDARY_NAMENODE"
},
{
"name": "RESOURCEMANAGER"
},
{
"name": "HISTORYSERVER"
},
{
"name": "ZOOKEEPER_SERVER"
},
{
"name": "NIMBUS"
},
{
"name": "STORM_UI_SERVER"
},
{
"name": "DRPC_SERVER"
},
{
"name": "HBASE_MASTER"
},
{
"name": "HBASE_CLIENT"
},
{
"name": "APP_TIMELINE_SERVER"
},
{
"name": "DATANODE"
},
{
"name": "HDFS_CLIENT"
},
{
"name": "NODEMANAGER"
},
{
"name": "YARN_CLIENT"
},
{
"name": "MAPREDUCE2_CLIENT"
},
{
"name": "ZOOKEEPER_CLIENT"
},
{
"name": "SUPERVISOR"
},
{
"name": "KAFKA_BROKER"
},
{
"name": "HBASE_REGIONSERVER"
},
{
"name": "KIBANA_MASTER"
},
{
"name": "METRON_INDEXING"
},
{
"name": "METRON_PROFILER"
},
{
"name": "METRON_PCAP"
},
{
"name": "METRON_ENRICHMENT_MASTER"
},
{
"name": "METRON_PARSERS"
},
{
"name": "METRON_REST"
},
{
"name": "METRON_MANAGEMENT_UI"
},
{
"name": "METRON_ALERTS_UI"
},
{
"name": "ES_MASTER"
}
],
"configurations": [],
"name": "host_group_1"
}
],
"required_configurations": [
{
"metron-env": {
"es_hosts": "node1",
"solr_zookeeper_url": "node1:9983",
"storm_rest_addr": "http://node1:8744;,
"zeppelin_server_url": "node1:9995"
}
},
{
"metron-rest-env": {
"metron_jdbc_driver": "org.h2.Driver",
"metron_jdbc_password": "root",
"metron_jdbc_platform": "h2",
"metron_jdbc_url": "jdbc:h2:file:~/metrondb",
"metron_jdbc_username": "root"
}
},
{
"kibana-env": {
"kibana_default_application": "dashboard/AV-YpDmwdXwc6Ua9Muh9",
"kibana_es_url": "http://node1:9200;,
"kibana_log_dir": "/var/log/kibana",
"kibana_pid_dir": "/var/run/kibana",
"kibana_server_host": "0.0.0.0",
"kibana_server_port": 5000
}
}
],
"stack_name": "HDP",
"stack_version": "3.1"
},
"cluster_name": "metron_cluster",
"cluster_state": "present",
"configurations": [
{
"zoo.cfg": {
"dataDir": "/hadoop/zookeeper"
}
},
{
"hadoop-env": {
"dtnode_heapsize": 512,
"hadoop_heapsize": 1024,
"namenode_heapsize": 2048,
"namenode_opt_permsize": "128m"
}
},
{
"hbase-env": {
"hbase_master_heapsize": 512,
"hbase_regionserver_heapsize": 512,
"hbase_regionserver_xmn_max": 512
}
},
{
"hdfs-site": {
"dfs.datanode.data.dir": "/hadoop/hdfs/data",
"dfs.journalnode.edits.dir": "/hadoop/hdfs/journalnode",
"dfs.namenode.checkpoint.dir": "/hadoop/hdfs/namesecondary",
"dfs.namenode.name.dir": "/hadoop/hdfs/namenode",
"dfs.replication": 1
}
},
{
"yarn-env": {
"apptimelineserver_heapsize": 512,
"min_user_id": 500,
"nodemanager_heapsize": 512,
"resourcemanager_heapsize": 1024,
"yarn_heapsize": 512
}
},
{
"mapred-env": {
"jobhistory_heapsize": 256
}
},
{
"mapred-site": {
"mapreduce.jobhistory.recovery.store.leveldb.path": "/hadoop/mapreduce/jhs",
"mapreduce.map.java.opts": "-Xmx1024m",
"mapreduce.map.memory.mb": 1229,
"mapreduce.reduce.java.opts": "-Xmx1024m",
"mapreduce.reduce.memory.mb": 1229
}
},
{
"yarn-site": {
"yarn.nodemanager.local-dirs": "/hadoop/yarn/local",
"yarn.nodemanager.log-dirs": "/hadoop/yarn/log",
"yarn.nodemanager.resource.memory-mb": 4096,
"yarn.timeline-service.leveldb-state-store.path": "/hadoop/yarn/timeline",
"yarn.timeline-service.leveldb-timeline-store.path": "/hadoop/yarn/timeline"
}
},
{
"storm-site": {
"nimbus.childopts": "-Xmx1024m _JAAS_PLACEHOLDER",
"storm.cluster.metrics.consumer.register": "[{\"class\": 
\"org.apache.storm.metric.LoggingMetricsConsumer\"}]",
"storm.local.dir": "/hadoop/storm",
"supervisor.childopts": "-Xmx256m _JAAS_PLACEHOLDER",
"supervisor.slots.ports": "[6700, 6701, 6702, 6703, 6704, 6705]",
"topology.classpath": "/etc/hbase/conf:/etc/hadoop/conf",
"topology.metrics.consumer.register": "[{\"class\": 

[jira] [Comment Edited] (METRON-2162) Ambari client exception occurred: No JSON object could be decoded

2019-06-25 Thread Nick Allen (JIRA)


[ 
https://issues.apache.org/jira/browse/METRON-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872390#comment-16872390
 ] 

Nick Allen edited comment on METRON-2162 at 6/25/19 2:17 PM:
-

I have seen this issue previously and it was caused by the python-requests 
library.  The CentOS 7 node is running Python Requests 2.6.1, which is the last 
known working version of the library.

This was also not a problem when the original PR for METRON-2097 
([https://github.com/apache/metron/pull/1397]) went in.  I am not sure why it is 
a problem now; something might have changed in the underlying CentOS 7 
development VM setup.

Querying Ambari with curl shows that Ambari responds correctly.
{code:java}
[root@node1 ~]# curl -u admin:admin -H "X-Requested-By: ambari" -i -X GET -k 
http://node1:8080/api/v1/blueprints
HTTP/1.1 200 OK
Date: Tue, 25 Jun 2019 13:56:19 GMT
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Cache-Control: no-store
Pragma: no-cache
Set-Cookie: AMBARISESSIONID=node0jsb8b6skdylu71zzy5l1qfv2.node0;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Content-Type: text/plain;charset=utf-8
X-Content-Type-Options: nosniff
Vary: Accept-Encoding, User-Agent
Transfer-Encoding: chunked

{
"href" : "http://node1:8080/api/v1/blueprints;,
"items" : [ ]
}{code}
Querying Ambari with Python requests reproduces the problem.
{code:java}
>>> import requests
>>> print requests.__version__
2.6.1
>>> r = requests.get("http://node1:8080/api/v1/blueprints;, 
>>> auth=("admin","admin"))
>>> r

>>> r.encoding
'utf-8'
>>> r.content
'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\xab\xe6RPP\xca(JMSR\xb0\x022JJ\n\xac\xf4\xf5\xf3\xf2SR\r\xad,\x0c,\x0c\xf4\x13\x0b2\xf5\xcb\x0c\xf5\x93rJS\x0b\x8a2\xf3J\x8a\x95t@:2KRs\x8bAZ\xa2\x15b\xb9j\x01OR\xd0\x9bE\x00\x00\x00'
>>> r.text
u'\x1f\ufffd\x08\x00\x00\x00\x00\x00\x00\x00\ufffd\ufffdRPP\ufffd(JMSR\ufffd\x022JJ\n\ufffd\ufffd\ufffd\ufffd\ufffdSR\r\ufffd,\x0c,\x0c\ufffd\x13\x0b2\ufffd\ufffd\x0c\ufffd\ufffdrJS\x0b\ufffd2\ufffdJ\ufffd\ufffdt@:2KRs\ufffdAZ\ufffd\x15b\ufffdj\x01OR\u041bE\x00\x00\x00'
>>> r.json()
Traceback (most recent call last):
File "", line 1, in 
File "/usr/lib/python2.7/site-packages/requests/models.py", line 819, in json
return json.loads(self.text, **kwargs)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded{code}
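The `\x1f\x8b` prefix on r.content is the gzip magic number, so the body appears 
to come back compressed without ever being decoded by this build of the library. 
As a rough workaround sketch only (the get_json helper below is hypothetical, 
not the Ansible module's actual code), the payload can be gunzipped by hand 
before parsing:
{code:java}
# Workaround sketch, not the committed fix: gunzip the payload by hand when
# the response still starts with the gzip magic bytes (0x1f 0x8b).
# Python 2.7, to match the interpreter session above.
import gzip
import json
from StringIO import StringIO

import requests

def get_json(url, auth):
    r = requests.get(url, auth=auth)
    content = r.content
    # requests normally decodes Content-Encoding: gzip transparently; this
    # guards against a build of the library that hands back the raw bytes.
    if content[:2] == '\x1f\x8b':
        content = gzip.GzipFile(fileobj=StringIO(content)).read()
    return json.loads(content)

print get_json("http://node1:8080/api/v1/blueprints", ("admin", "admin"))
{code}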
 

 


was (Author: nickwallen):
I have seen this issue previously and it was caused by the python-requests 
library.  The CentOS 7 node is running Python Requests 2.6.1, which is the last 
known working version of the library.

This was also not a problem when the original PR for METRON-2097 
([https://github.com/apache/metron/pull/1397]) went in.  I am not sure why it is 
a problem now; something might have changed in the underlying CentOS 7 
development VM setup.

Querying Ambari with curl shows that Ambari responds correctly.
{code:java}
[root@node1 ~]# curl -u admin:admin -H "X-Requested-By: ambari" -i -X GET -k 
http://node1:8080/api/v1/blueprints
HTTP/1.1 200 OK
Date: Tue, 25 Jun 2019 13:56:19 GMT
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Cache-Control: no-store
Pragma: no-cache
Set-Cookie: AMBARISESSIONID=node0jsb8b6skdylu71zzy5l1qfv2.node0;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Content-Type: text/plain;charset=utf-8
X-Content-Type-Options: nosniff
Vary: Accept-Encoding, User-Agent
Transfer-Encoding: chunked

{
"href" : "http://node1:8080/api/v1/blueprints;,
"items" : [ ]
}{code}
Querying Ambari with Python requests reproduces the problem.
{code:java}
>>> import requests
>>> print requests.__version__
2.6.1

>>> r = requests.get("http://node1:8080/api/v1/blueprints;, 
>>> auth=("admin","admin"))
>>> r


>>> r.encoding
'utf-8'

>>> r.content
'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\xab\xe6RPP\xca(JMSR\xb0\x022JJ\n\xac\xf4\xf5\xf3\xf2SR\r\xad,\x0c,\x0c\xf4\x13\x0b2\xf5\xcb\x0c\xf5\x93rJS\x0b\x8a2\xf3J\x8a\x95t@:2KRs\x8bAZ\xa2\x15b\xb9j\x01OR\xd0\x9bE\x00\x00\x00'

>>> r.text
u'\x1f\ufffd\x08\x00\x00\x00\x00\x00\x00\x00\ufffd\ufffdRPP\ufffd(JMSR\ufffd\x022JJ\n\ufffd\ufffd\ufffd\ufffd\ufffdSR\r\ufffd,\x0c,\x0c\ufffd\x13\x0b2\ufffd\ufffd\x0c\ufffd\ufffdrJS\x0b\ufffd2\ufffdJ\ufffd\ufffdt@:2KRs\ufffdAZ\ufffd\x15b\ufffdj\x01OR\u041bE\x00\x00\x00'

>>> r.json()
Traceback (most recent call last):
File "", line 1, in 
File "/usr/lib/python2.7/site-packages/requests/models.py", line 819, in json
return json.loads(self.text, **kwargs)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return 

[jira] [Commented] (METRON-2162) Ambari client exception occurred: No JSON object could be decoded

2019-06-25 Thread Nick Allen (JIRA)


[ 
https://issues.apache.org/jira/browse/METRON-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872390#comment-16872390
 ] 

Nick Allen commented on METRON-2162:


I have seen this issue previously and it was caused by the python-requests 
library.  The CentOS 7 node is running Python Requests 2.6.1, which is the last 
known working version of the library.

This was also not a problem when the original PR for METRON-2097 
([https://github.com/apache/metron/pull/1397]) went in.  I am not sure why it is 
a problem now; something might have changed in the underlying CentOS 7 
development VM setup.

Querying Ambari with curl shows that Ambari responds correctly.
{code:java}
[root@node1 ~]# curl -u admin:admin -H "X-Requested-By: ambari" -i -X GET -k 
http://node1:8080/api/v1/blueprints
HTTP/1.1 200 OK
Date: Tue, 25 Jun 2019 13:56:19 GMT
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Cache-Control: no-store
Pragma: no-cache
Set-Cookie: AMBARISESSIONID=node0jsb8b6skdylu71zzy5l1qfv2.node0;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Content-Type: text/plain;charset=utf-8
X-Content-Type-Options: nosniff
Vary: Accept-Encoding, User-Agent
Transfer-Encoding: chunked

{
"href" : "http://node1:8080/api/v1/blueprints;,
"items" : [ ]
}{code}
Querying Ambari with Python requests reproduces the problem.
{code:java}
>>> import requests
>>> print requests.__version__
2.6.1

>>> r = requests.get("http://node1:8080/api/v1/blueprints;, 
>>> auth=("admin","admin"))
>>> r


>>> r.encoding
'utf-8'

>>> r.content
'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00\xab\xe6RPP\xca(JMSR\xb0\x022JJ\n\xac\xf4\xf5\xf3\xf2SR\r\xad,\x0c,\x0c\xf4\x13\x0b2\xf5\xcb\x0c\xf5\x93rJS\x0b\x8a2\xf3J\x8a\x95t@:2KRs\x8bAZ\xa2\x15b\xb9j\x01OR\xd0\x9bE\x00\x00\x00'

>>> r.text
u'\x1f\ufffd\x08\x00\x00\x00\x00\x00\x00\x00\ufffd\ufffdRPP\ufffd(JMSR\ufffd\x022JJ\n\ufffd\ufffd\ufffd\ufffd\ufffdSR\r\ufffd,\x0c,\x0c\ufffd\x13\x0b2\ufffd\ufffd\x0c\ufffd\ufffdrJS\x0b\ufffd2\ufffdJ\ufffd\ufffdt@:2KRs\ufffdAZ\ufffd\x15b\ufffdj\x01OR\u041bE\x00\x00\x00'

>>> r.json()
Traceback (most recent call last):
File "", line 1, in 
File "/usr/lib/python2.7/site-packages/requests/models.py", line 819, in json
return json.loads(self.text, **kwargs)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded{code}
 

 

> Ambari client exception occurred: No JSON object could be decoded
> -
>
> Key: METRON-2162
> URL: https://issues.apache.org/jira/browse/METRON-2162
> Project: Metron
>  Issue Type: Sub-task
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>
> When attempting to deploy Metron in Ambari 2.7.3 with HDP 3.1 the following 
> error occurs when deploying Metron on CentOS 7.
> {code:java}
> $ cd metron-deployment/development/centos7/
> $ vagrant up
> ...
> TASK [ambari_config : Deploy cluster with Ambari; http://node1:8080] 
> ***
> The full traceback is:
> File "/tmp/ansible_LTBkrV/ansible_module_ambari_cluster_state.py", line 211, 
> in main
> if not blueprint_exists(ambari_url, username, password, blueprint_name):
> File "/tmp/ansible_LTBkrV/ansible_module_ambari_cluster_state.py", line 334, 
> in blueprint_exists
> blueprints = get_blueprints(ambari_url, user, password)
> File "/tmp/ansible_LTBkrV/ansible_module_ambari_cluster_state.py", line 315, 
> in get_blueprints
> services = json.loads(r.content)
> File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
> return _default_decoder.decode(s)
> File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
> obj, end = self.raw_decode(s, idx=_w(s, 0).end())
> File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
> raise ValueError("No JSON object could be decoded")
> fatal: [node1]: FAILED! => {
> "changed": false,
> "invocation": {
> "module_args": {
> "blueprint_name": "metron_blueprint",
> "blueprint_var": {
> "groups": [
> {
> "cardinality": 1,
> "components": [
> {
> "name": "NAMENODE"
> },
> {
> "name": "SECONDARY_NAMENODE"
> },
> {
> "name": "RESOURCEMANAGER"
> },
> {
> "name": "HISTORYSERVER"
> },
> {
> "name": "ZOOKEEPER_SERVER"
> },
> {
> "name": "NIMBUS"
> },
> {
> "name": "STORM_UI_SERVER"
> },
> {
> "name": "DRPC_SERVER"
> },
> {
> "name": "HBASE_MASTER"
> },
> {
> "name": "HBASE_CLIENT"
> },
> {
> "name": "APP_TIMELINE_SERVER"
> },
> {
> "name": "DATANODE"
> },
> {
> "name": "HDFS_CLIENT"
> },
> {
> "name": "NODEMANAGER"
> },
> {
> "name": 

[jira] [Updated] (METRON-2162) Ambari client exception occurred: No JSON object could be decoded

2019-06-25 Thread Nick Allen (JIRA)


 [ 
https://issues.apache.org/jira/browse/METRON-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2162:
---
Description: 
When attempting to deploy Metron in Ambari 2.7.3 with HDP 3.1 the following 
error occurs when deploying Metron on CentOS 7.
{code:java}
$ cd metron-deployment/development/centos7/
$ vagrant up

...

TASK [ambari_config : Deploy cluster with Ambari; http://node1:8080] ***

The full traceback is:
File "/tmp/ansible_LTBkrV/ansible_module_ambari_cluster_state.py", line 211, in 
main
if not blueprint_exists(ambari_url, username, password, blueprint_name):
File "/tmp/ansible_LTBkrV/ansible_module_ambari_cluster_state.py", line 334, in 
blueprint_exists
blueprints = get_blueprints(ambari_url, user, password)
File "/tmp/ansible_LTBkrV/ansible_module_ambari_cluster_state.py", line 315, in 
get_blueprints
services = json.loads(r.content)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
raise ValueError("No JSON object could be decoded")

fatal: [node1]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"blueprint_name": "metron_blueprint",
"blueprint_var": {
"groups": [
{
"cardinality": 1,
"components": [
{
"name": "NAMENODE"
},
{
"name": "SECONDARY_NAMENODE"
},
{
"name": "RESOURCEMANAGER"
},
{
"name": "HISTORYSERVER"
},
{
"name": "ZOOKEEPER_SERVER"
},
{
"name": "NIMBUS"
},
{
"name": "STORM_UI_SERVER"
},
{
"name": "DRPC_SERVER"
},
{
"name": "HBASE_MASTER"
},
{
"name": "HBASE_CLIENT"
},
{
"name": "APP_TIMELINE_SERVER"
},
{
"name": "DATANODE"
},
{
"name": "HDFS_CLIENT"
},
{
"name": "NODEMANAGER"
},
{
"name": "YARN_CLIENT"
},
{
"name": "MAPREDUCE2_CLIENT"
},
{
"name": "ZOOKEEPER_CLIENT"
},
{
"name": "SUPERVISOR"
},
{
"name": "KAFKA_BROKER"
},
{
"name": "HBASE_REGIONSERVER"
},
{
"name": "KIBANA_MASTER"
},
{
"name": "METRON_INDEXING"
},
{
"name": "METRON_PROFILER"
},
{
"name": "METRON_PCAP"
},
{
"name": "METRON_ENRICHMENT_MASTER"
},
{
"name": "METRON_PARSERS"
},
{
"name": "METRON_REST"
},
{
"name": "METRON_MANAGEMENT_UI"
},
{
"name": "METRON_ALERTS_UI"
},
{
"name": "ES_MASTER"
}
],
"configurations": [],
"name": "host_group_1"
}
],
"required_configurations": [
{
"metron-env": {
"es_hosts": "node1",
"solr_zookeeper_url": "node1:9983",
"storm_rest_addr": "http://node1:8744;,
"zeppelin_server_url": "node1:9995"
}
},
{
"metron-rest-env": {
"metron_jdbc_driver": "org.h2.Driver",
"metron_jdbc_password": "root",
"metron_jdbc_platform": "h2",
"metron_jdbc_url": "jdbc:h2:file:~/metrondb",
"metron_jdbc_username": "root"
}
},
{
"kibana-env": {
"kibana_default_application": "dashboard/AV-YpDmwdXwc6Ua9Muh9",
"kibana_es_url": "http://node1:9200;,
"kibana_log_dir": "/var/log/kibana",
"kibana_pid_dir": "/var/run/kibana",
"kibana_server_host": "0.0.0.0",
"kibana_server_port": 5000
}
}
],
"stack_name": "HDP",
"stack_version": "3.1"
},
"cluster_name": "metron_cluster",
"cluster_state": "present",
"configurations": [
{
"zoo.cfg": {
"dataDir": "/hadoop/zookeeper"
}
},
{
"hadoop-env": {
"dtnode_heapsize": 512,
"hadoop_heapsize": 1024,
"namenode_heapsize": 2048,
"namenode_opt_permsize": "128m"
}
},
{
"hbase-env": {
"hbase_master_heapsize": 512,
"hbase_regionserver_heapsize": 512,
"hbase_regionserver_xmn_max": 512
}
},
{
"hdfs-site": {
"dfs.datanode.data.dir": "/hadoop/hdfs/data",
"dfs.journalnode.edits.dir": "/hadoop/hdfs/journalnode",
"dfs.namenode.checkpoint.dir": "/hadoop/hdfs/namesecondary",
"dfs.namenode.name.dir": "/hadoop/hdfs/namenode",
"dfs.replication": 1
}
},
{
"yarn-env": {
"apptimelineserver_heapsize": 512,
"min_user_id": 500,
"nodemanager_heapsize": 512,
"resourcemanager_heapsize": 1024,
"yarn_heapsize": 512
}
},
{
"mapred-env": {
"jobhistory_heapsize": 256
}
},
{
"mapred-site": {
"mapreduce.jobhistory.recovery.store.leveldb.path": "/hadoop/mapreduce/jhs",
"mapreduce.map.java.opts": "-Xmx1024m",
"mapreduce.map.memory.mb": 1229,
"mapreduce.reduce.java.opts": "-Xmx1024m",
"mapreduce.reduce.memory.mb": 1229
}
},
{
"yarn-site": {
"yarn.nodemanager.local-dirs": "/hadoop/yarn/local",
"yarn.nodemanager.log-dirs": "/hadoop/yarn/log",
"yarn.nodemanager.resource.memory-mb": 4096,
"yarn.timeline-service.leveldb-state-store.path": "/hadoop/yarn/timeline",
"yarn.timeline-service.leveldb-timeline-store.path": "/hadoop/yarn/timeline"
}
},
{
"storm-site": {
"nimbus.childopts": "-Xmx1024m _JAAS_PLACEHOLDER",
"storm.cluster.metrics.consumer.register": "[{\"class\": 
\"org.apache.storm.metric.LoggingMetricsConsumer\"}]",
"storm.local.dir": "/hadoop/storm",
"supervisor.childopts": "-Xmx256m _JAAS_PLACEHOLDER",
"supervisor.slots.ports": "[6700, 6701, 6702, 6703, 6704, 6705]",
"topology.classpath": "/etc/hbase/conf:/etc/hadoop/conf",
"topology.metrics.consumer.register": 

[jira] [Created] (METRON-2162) Ambari client exception occurred: No JSON object could be decoded

2019-06-25 Thread Nick Allen (JIRA)
Nick Allen created METRON-2162:
--

 Summary: Ambari client exception occurred: No JSON object could be 
decoded
 Key: METRON-2162
 URL: https://issues.apache.org/jira/browse/METRON-2162
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


{code:java}
TASK [ambari_config : Deploy cluster with Ambari; http://node1:8080] ***

The full traceback is:
File "/tmp/ansible_LTBkrV/ansible_module_ambari_cluster_state.py", line 211, in 
main
if not blueprint_exists(ambari_url, username, password, blueprint_name):
File "/tmp/ansible_LTBkrV/ansible_module_ambari_cluster_state.py", line 334, in 
blueprint_exists
blueprints = get_blueprints(ambari_url, user, password)
File "/tmp/ansible_LTBkrV/ansible_module_ambari_cluster_state.py", line 315, in 
get_blueprints
services = json.loads(r.content)
File "/usr/lib64/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
raise ValueError("No JSON object could be decoded")

fatal: [node1]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"blueprint_name": "metron_blueprint",
"blueprint_var": {
"groups": [
{
"cardinality": 1,
"components": [
{
"name": "NAMENODE"
},
{
"name": "SECONDARY_NAMENODE"
},
{
"name": "RESOURCEMANAGER"
},
{
"name": "HISTORYSERVER"
},
{
"name": "ZOOKEEPER_SERVER"
},
{
"name": "NIMBUS"
},
{
"name": "STORM_UI_SERVER"
},
{
"name": "DRPC_SERVER"
},
{
"name": "HBASE_MASTER"
},
{
"name": "HBASE_CLIENT"
},
{
"name": "APP_TIMELINE_SERVER"
},
{
"name": "DATANODE"
},
{
"name": "HDFS_CLIENT"
},
{
"name": "NODEMANAGER"
},
{
"name": "YARN_CLIENT"
},
{
"name": "MAPREDUCE2_CLIENT"
},
{
"name": "ZOOKEEPER_CLIENT"
},
{
"name": "SUPERVISOR"
},
{
"name": "KAFKA_BROKER"
},
{
"name": "HBASE_REGIONSERVER"
},
{
"name": "KIBANA_MASTER"
},
{
"name": "METRON_INDEXING"
},
{
"name": "METRON_PROFILER"
},
{
"name": "METRON_PCAP"
},
{
"name": "METRON_ENRICHMENT_MASTER"
},
{
"name": "METRON_PARSERS"
},
{
"name": "METRON_REST"
},
{
"name": "METRON_MANAGEMENT_UI"
},
{
"name": "METRON_ALERTS_UI"
},
{
"name": "ES_MASTER"
}
],
"configurations": [],
"name": "host_group_1"
}
],
"required_configurations": [
{
"metron-env": {
"es_hosts": "node1",
"solr_zookeeper_url": "node1:9983",
"storm_rest_addr": "http://node1:8744;,
"zeppelin_server_url": "node1:9995"
}
},
{
"metron-rest-env": {
"metron_jdbc_driver": "org.h2.Driver",
"metron_jdbc_password": "root",
"metron_jdbc_platform": "h2",
"metron_jdbc_url": "jdbc:h2:file:~/metrondb",
"metron_jdbc_username": "root"
}
},
{
"kibana-env": {
"kibana_default_application": "dashboard/AV-YpDmwdXwc6Ua9Muh9",
"kibana_es_url": "http://node1:9200;,
"kibana_log_dir": "/var/log/kibana",
"kibana_pid_dir": "/var/run/kibana",
"kibana_server_host": "0.0.0.0",
"kibana_server_port": 5000
}
}
],
"stack_name": "HDP",
"stack_version": "3.1"
},
"cluster_name": "metron_cluster",
"cluster_state": "present",
"configurations": [
{
"zoo.cfg": {
"dataDir": "/hadoop/zookeeper"
}
},
{
"hadoop-env": {
"dtnode_heapsize": 512,
"hadoop_heapsize": 1024,
"namenode_heapsize": 2048,
"namenode_opt_permsize": "128m"
}
},
{
"hbase-env": {
"hbase_master_heapsize": 512,
"hbase_regionserver_heapsize": 512,
"hbase_regionserver_xmn_max": 512
}
},
{
"hdfs-site": {
"dfs.datanode.data.dir": "/hadoop/hdfs/data",
"dfs.journalnode.edits.dir": "/hadoop/hdfs/journalnode",
"dfs.namenode.checkpoint.dir": "/hadoop/hdfs/namesecondary",
"dfs.namenode.name.dir": "/hadoop/hdfs/namenode",
"dfs.replication": 1
}
},
{
"yarn-env": {
"apptimelineserver_heapsize": 512,
"min_user_id": 500,
"nodemanager_heapsize": 512,
"resourcemanager_heapsize": 1024,
"yarn_heapsize": 512
}
},
{
"mapred-env": {
"jobhistory_heapsize": 256
}
},
{
"mapred-site": {
"mapreduce.jobhistory.recovery.store.leveldb.path": "/hadoop/mapreduce/jhs",
"mapreduce.map.java.opts": "-Xmx1024m",
"mapreduce.map.memory.mb": 1229,
"mapreduce.reduce.java.opts": "-Xmx1024m",
"mapreduce.reduce.memory.mb": 1229
}
},
{
"yarn-site": {
"yarn.nodemanager.local-dirs": "/hadoop/yarn/local",
"yarn.nodemanager.log-dirs": "/hadoop/yarn/log",
"yarn.nodemanager.resource.memory-mb": 4096,
"yarn.timeline-service.leveldb-state-store.path": "/hadoop/yarn/timeline",
"yarn.timeline-service.leveldb-timeline-store.path": "/hadoop/yarn/timeline"
}
},
{
"storm-site": {
"nimbus.childopts": "-Xmx1024m _JAAS_PLACEHOLDER",
"storm.cluster.metrics.consumer.register": "[{\"class\": 
\"org.apache.storm.metric.LoggingMetricsConsumer\"}]",
"storm.local.dir": "/hadoop/storm",
"supervisor.childopts": "-Xmx256m _JAAS_PLACEHOLDER",
"supervisor.slots.ports": "[6700, 6701, 6702, 6703, 6704, 6705]",
"topology.classpath": "/etc/hbase/conf:/etc/hadoop/conf",
"topology.metrics.consumer.register": "[{\"class\": 

[jira] [Commented] (METRON-2155) Cache Maven in Travis CI Builds

2019-06-04 Thread Nick Allen (JIRA)


[ 
https://issues.apache.org/jira/browse/METRON-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855793#comment-16855793
 ] 

Nick Allen commented on METRON-2155:


(1) One way would be to cache the Maven zip file.  Travis says this won't help 
performance, but at least it might help with the build failures.

[https://docs.travis-ci.com/user/caching/#things-not-to-cache]

(2) Another way would be to cache the directory containing the unzipped Maven 
binaries.  But how would Travis know not to re-download the Maven binary in that 
case?  (One way to handle that is sketched below.)

(3) Would using something like mvnw help? 
[https://github.com/takari/maven-wrapper]  This would make Maven installation 
simpler for both users and Travis.  I am not sure if this would help the 
current build failures that we are seeing though.
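
For what it is worth, here is a sketch of option (2) using Travis' built-in 
cache.  The $HOME/mvn path, the Maven version, and the guard in before_install 
are assumptions for illustration, not what our .travis.yml does today.
{code:java}
# Hypothetical .travis.yml fragment for option (2).
cache:
  directories:
    - $HOME/.m2   # the local repository
    - $HOME/mvn   # the unpacked Maven distribution
before_install:
  # Only hit the Apache mirror when the cached directory is missing; this is
  # how Travis "knows" not to re-download on a warm cache.
  - test -d $HOME/mvn/apache-maven-3.3.9 || (wget -q https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip && unzip -q apache-maven-3.3.9-bin.zip -d $HOME/mvn)
  - export PATH=$HOME/mvn/apache-maven-3.3.9/bin:$PATH
{code}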

 

 

> Cache Maven in Travis CI Builds
> ---
>
> Key: METRON-2155
> URL: https://issues.apache.org/jira/browse/METRON-2155
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>
> In the Travis CI builds, we download Maven and even retry up to 10 times 
> should this download fail.  We continue to see some failures when downloading 
> Maven.
> [https://api.travis-ci.org/v3/job/540955869/log.txt]
>  
> This could be a problem within Travis' internal networks or this could be an 
> intermittent issue with the Apache mirrors.  We have no way of knowing and no 
> good way to resolve these sorts of problems.
> We could cache the Maven dependency to mitigate this problem.  It would not 
> truly resolve the problem, but should help mitigate it.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (METRON-2155) Cache Maven in Travis CI Builds

2019-06-04 Thread Nick Allen (JIRA)


[ 
https://issues.apache.org/jira/browse/METRON-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855739#comment-16855739
 ] 

Nick Allen commented on METRON-2155:


This suggestion originally came from [~justinleet], I believe.

> Cache Maven in Travis CI Builds
> ---
>
> Key: METRON-2155
> URL: https://issues.apache.org/jira/browse/METRON-2155
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>
> In the Travis CI builds, we download Maven and even retry up to 10 times 
> should this download fail.  We continue to see some failures when downloading 
> Maven.
> [https://api.travis-ci.org/v3/job/540955869/log.txt]
>  
> This could be a problem within Travis' internal networks or this could be an 
> intermittent issue with the Apache mirrors.  We have no way of knowing and no 
> good way to resolve these sorts of problems.
> We could cache the Maven dependency to mitigate this problem.  It would not 
> truly resolve the problem, but should help mitigate it.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (METRON-2155) Cache Maven in Travis CI Builds

2019-06-04 Thread Nick Allen (JIRA)
Nick Allen created METRON-2155:
--

 Summary: Cache Maven in Travis CI Builds
 Key: METRON-2155
 URL: https://issues.apache.org/jira/browse/METRON-2155
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen
Assignee: Nick Allen


In the Travis CI builds, we download Maven and even retry up to 10 times should 
this download fail.  We continue to see some failures when downloading Maven.

[https://api.travis-ci.org/v3/job/540955869/log.txt]

 

This could be a problem within Travis' internal networks or this could be an 
intermittent issue with the Apache mirrors.  We have no way of knowing and no 
good way to resolve these sorts of problems.

We could cache the Maven dependency to mitigate this problem.  It would not 
truly resolve the problem, but should help mitigate it.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (METRON-2143) Travis Build Fails to Download Maven

2019-05-24 Thread Nick Allen (JIRA)


[ 
https://issues.apache.org/jira/browse/METRON-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847672#comment-16847672
 ] 

Nick Allen commented on METRON-2143:


Wget will retry, by default, 20 times for many error conditions.  It will not 
retry for some error conditions like a 503.  There are options to do so though.
{code:java}
--retry-connrefused
Consider "connection refused" a transient error and try again. Normally Wget 
gives up on a URL when it is unable to connect to the site because failure to
connect is taken as a sign that the server is not running at all and that 
retries would not help. This option is for mirroring unreliable sites whose 
servers tend to disappear for short periods of time.{code}
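For the 503 itself, newer wget releases (1.19.2 and later) also added 
--retry-on-http-error.  Whether the Travis image ships a new enough wget is an 
open question, but the download line might then look like this:
{code:java}
# Illustrative only: --retry-on-http-error requires wget >= 1.19.2, which the
# Travis image may not have; --waitretry adds a backoff between attempts.
wget --tries=10 --retry-connrefused --retry-on-http-error=503 --waitretry=10 \
  https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip
{code}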
 

> Travis Build Fails to Download Maven
> 
>
> Key: METRON-2143
> URL: https://issues.apache.org/jira/browse/METRON-2143
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>
> {code:java}
> 0.50s$ wget 
> https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip
> --2019-05-22 21:24:16--  
> https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip
> Resolving archive.apache.org (archive.apache.org)... 163.172.17.199
> Connecting to archive.apache.org (archive.apache.org)|163.172.17.199|:443... 
> connected.
> HTTP request sent, awaiting response... 503 Service Unavailable
> 2019-05-22 21:24:17 ERROR 503: Service Unavailable.
> The command "wget 
> https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip"
>  failed and exited with 8 during .
> Your build has been stopped.{code}
> See [https://travis-ci.org/apache/metron/jobs/535942660] for an example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (METRON-2143) Travis Build Fails to Download Maven

2019-05-24 Thread Nick Allen (JIRA)


[ 
https://issues.apache.org/jira/browse/METRON-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16847672#comment-16847672
 ] 

Nick Allen edited comment on METRON-2143 at 5/24/19 4:05 PM:
-

Wget will retry, by default, 20 times for many error conditions.  It will not 
retry for some error conditions like a 503.  There are options to do so though.
{code:java}
--retry-connrefused
Consider "connection refused" a transient error and try again. Normally Wget 
gives up on a URL when it is unable to connect to the site because failure to 
connect is taken as a sign that the server is not running at all and that 
retries would not help. This option is for mirroring unreliable sites whose 
servers tend to disappear for short periods of time.{code}
 


was (Author: nickwallen):
Wget will retry, by default, 20 times for many error conditions.  It will not 
retry for some error conditions like a 503.  There are options to do so though.
{code:java}
--retry-connrefused
Consider "connection refused" a transient error and try again. Normally Wget 
gives up on a URL when it is unable to connect to the site because failure to
connect is taken as a sign that the server is not running at all and that 
retries would not help. This option is for mirroring unreliable sites whose 
servers tend to disappear for short periods of time.{code}
 

> Travis Build Fails to Download Maven
> 
>
> Key: METRON-2143
> URL: https://issues.apache.org/jira/browse/METRON-2143
> Project: Metron
>  Issue Type: Bug
>Reporter: Nick Allen
>Assignee: Nick Allen
>Priority: Major
>
> {code:java}
> 0.50s$ wget 
> https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip
> --2019-05-22 21:24:16--  
> https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip
> Resolving archive.apache.org (archive.apache.org)... 163.172.17.199
> Connecting to archive.apache.org (archive.apache.org)|163.172.17.199|:443... 
> connected.
> HTTP request sent, awaiting response... 503 Service Unavailable
> 2019-05-22 21:24:17 ERROR 503: Service Unavailable.
> The command "wget 
> https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip"
>  failed and exited with 8 during .
> Your build has been stopped.{code}
> See [https://travis-ci.org/apache/metron/jobs/535942660] for an example.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (METRON-2143) Travis Build Fails to Download Maven

2019-05-24 Thread Nick Allen (JIRA)


 [ 
https://issues.apache.org/jira/browse/METRON-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Allen updated METRON-2143:
---
Description: 
{code:java}
0.50s$ wget 
https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip
--2019-05-22 21:24:16--  
https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip
Resolving archive.apache.org (archive.apache.org)... 163.172.17.199
Connecting to archive.apache.org (archive.apache.org)|163.172.17.199|:443... 
connected.
HTTP request sent, awaiting response... 503 Service Unavailable
2019-05-22 21:24:17 ERROR 503: Service Unavailable.
The command "wget 
https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip"
 failed and exited with 8 during .
Your build has been stopped.{code}
See [https://travis-ci.org/apache/metron/jobs/535942660] for an example.

  was:
{code:java}
Worker information
hostname: 
6723f0a5-d84f-4851-92bf-6d22d3630257@1.production-2-worker-org-gce-zp0l
version: v6.2.0 
https://github.com/travis-ci/worker/tree/5e5476e01646095f48eec13196fdb3faf8f5cbf7
instance: travis-job-46feb077-ec35-4506-b1c4-f91a3d58e859 
travis-ci-garnet-trusty-1512502259-986baf0 (via amqp)
startup: 7.065725114s
system_info
Build system information
Build language: java
Build group: stable
Build dist: trusty
Build id: 535942655
Job id: 535942660
Runtime kernel version: 4.4.0-101-generic
travis-build version: 0696e6115
Build image provisioning date and time
Tue Dec  5 19:58:13 UTC 2017
Operating System Details
Distributor ID: Ubuntu
Description:Ubuntu 14.04.5 LTS
Release:14.04
Codename:   trusty
Cookbooks Version
7c2c6a6 https://github.com/travis-ci/travis-cookbooks/tree/7c2c6a6
git version
git version 2.15.1
bash version
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
gcc version
gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
docker version
Client:
 Version:  17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:Tue Sep 26 22:42:38 2017
 OS/Arch:  linux/amd64
Server:
 Version:  17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:Tue Sep 26 22:41:20 2017
 OS/Arch:  linux/amd64
 Experimental: false
clang version
clang version 5.0.0 (tags/RELEASE_500/final)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /usr/local/clang-5.0.0/bin
jq version
jq-1.5
bats version
Bats 0.4.0
shellcheck version
0.4.6
shfmt version
v2.0.0
ccache version
ccache version 3.1.9
Copyright (C) 2002-2007 Andrew Tridgell
Copyright (C) 2009-2011 Joel Rosdahl
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 3 of the License, or (at your option) any later
version.
cmake version
cmake version 3.9.2
CMake suite maintained and supported by Kitware (kitware.com/cmake).
heroku version
heroku-cli/6.14.39-addc925 (linux-x64) node-v9.2.0
imagemagick version
Version: ImageMagick 6.7.7-10 2017-07-31 Q16 http://www.imagemagick.org
md5deep version
4.2
mercurial version
Mercurial Distributed SCM (version 4.2.2)
(see https://mercurial-scm.org for more information)
Copyright (C) 2005-2017 Matt Mackall and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
mysql version
mysql  Ver 14.14 Distrib 5.6.33, for debian-linux-gnu (x86_64) using  EditLine 
wrapper
openssl version
OpenSSL 1.0.1f 6 Jan 2014
packer version
Packer v1.0.2
Your version of Packer is out of date! The latest version
is 1.1.2. You can update by downloading from www.packer.io
postgresql client version
psql (PostgreSQL) 9.6.6
ragel version
Ragel State Machine Compiler version 6.8 Feb 2013
Copyright (c) 2001-2009 by Adrian Thurston
subversion version
svn, version 1.8.8 (r1568071)
   compiled Aug 10 2017, 17:20:39 on x86_64-pc-linux-gnu
Copyright (C) 2013 The Apache Software Foundation.
This software consists of contributions made by many people;
see the NOTICE file for more information.
Subversion is open source software, see http://subversion.apache.org/
The following repository access (RA) modules are available:
* ra_svn : Module for accessing a repository using the svn network protocol.
  - with Cyrus SASL authentication
  - handles 'svn' scheme
* ra_local : Module for accessing a repository on local disk.
  - handles 'file' scheme
* ra_serf : Module for accessing a repository via WebDAV protocol using serf.
  - using serf 1.3.3
  - handles 'http' scheme
  - handles 'https' scheme
sudo version
Sudo version 1.8.9p5

[jira] [Created] (METRON-2143) Travis Build Fails to Download Maven

2019-05-24 Thread Nick Allen (JIRA)
Nick Allen created METRON-2143:
--

 Summary: Travis Build Fails to Download Maven
 Key: METRON-2143
 URL: https://issues.apache.org/jira/browse/METRON-2143
 Project: Metron
  Issue Type: Bug
Reporter: Nick Allen
Assignee: Nick Allen


{code:java}
Worker information
hostname: 
6723f0a5-d84f-4851-92bf-6d22d3630257@1.production-2-worker-org-gce-zp0l
version: v6.2.0 
https://github.com/travis-ci/worker/tree/5e5476e01646095f48eec13196fdb3faf8f5cbf7
instance: travis-job-46feb077-ec35-4506-b1c4-f91a3d58e859 
travis-ci-garnet-trusty-1512502259-986baf0 (via amqp)
startup: 7.065725114s
system_info
Build system information
Build language: java
Build group: stable
Build dist: trusty
Build id: 535942655
Job id: 535942660
Runtime kernel version: 4.4.0-101-generic
travis-build version: 0696e6115
Build image provisioning date and time
Tue Dec  5 19:58:13 UTC 2017
Operating System Details
Distributor ID: Ubuntu
Description:Ubuntu 14.04.5 LTS
Release:14.04
Codename:   trusty
Cookbooks Version
7c2c6a6 https://github.com/travis-ci/travis-cookbooks/tree/7c2c6a6
git version
git version 2.15.1
bash version
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
gcc version
gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
docker version
Client:
 Version:  17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:Tue Sep 26 22:42:38 2017
 OS/Arch:  linux/amd64
Server:
 Version:  17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:Tue Sep 26 22:41:20 2017
 OS/Arch:  linux/amd64
 Experimental: false
clang version
clang version 5.0.0 (tags/RELEASE_500/final)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /usr/local/clang-5.0.0/bin
jq version
jq-1.5
bats version
Bats 0.4.0
shellcheck version
0.4.6
shfmt version
v2.0.0
ccache version
ccache version 3.1.9
Copyright (C) 2002-2007 Andrew Tridgell
Copyright (C) 2009-2011 Joel Rosdahl
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 3 of the License, or (at your option) any later
version.
cmake version
cmake version 3.9.2
CMake suite maintained and supported by Kitware (kitware.com/cmake).
heroku version
heroku-cli/6.14.39-addc925 (linux-x64) node-v9.2.0
imagemagick version
Version: ImageMagick 6.7.7-10 2017-07-31 Q16 http://www.imagemagick.org
md5deep version
4.2
mercurial version
Mercurial Distributed SCM (version 4.2.2)
(see https://mercurial-scm.org for more information)
Copyright (C) 2005-2017 Matt Mackall and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
mysql version
mysql  Ver 14.14 Distrib 5.6.33, for debian-linux-gnu (x86_64) using  EditLine 
wrapper
openssl version
OpenSSL 1.0.1f 6 Jan 2014
packer version
Packer v1.0.2
Your version of Packer is out of date! The latest version
is 1.1.2. You can update by downloading from www.packer.io
postgresql client version
psql (PostgreSQL) 9.6.6
ragel version
Ragel State Machine Compiler version 6.8 Feb 2013
Copyright (c) 2001-2009 by Adrian Thurston
subversion version
svn, version 1.8.8 (r1568071)
   compiled Aug 10 2017, 17:20:39 on x86_64-pc-linux-gnu
Copyright (C) 2013 The Apache Software Foundation.
This software consists of contributions made by many people;
see the NOTICE file for more information.
Subversion is open source software, see http://subversion.apache.org/
The following repository access (RA) modules are available:
* ra_svn : Module for accessing a repository using the svn network protocol.
  - with Cyrus SASL authentication
  - handles 'svn' scheme
* ra_local : Module for accessing a repository on local disk.
  - handles 'file' scheme
* ra_serf : Module for accessing a repository via WebDAV protocol using serf.
  - using serf 1.3.3
  - handles 'http' scheme
  - handles 'https' scheme
sudo version
Sudo version 1.8.9p5
Configure options: --prefix=/usr -v --with-all-insults --with-pam --with-fqdn 
--with-logging=syslog --with-logfac=authpriv --with-env-editor 
--with-editor=/usr/bin/editor --with-timeout=15 --with-password-timeout=0 
--with-passprompt=[sudo] password for %p:  --without-lecture --with-tty-tickets 
--disable-root-mailer --enable-admin-flag --with-sendmail=/usr/sbin/sendmail 
--with-timedir=/var/lib/sudo --mandir=/usr/share/man --libexecdir=/usr/lib/sudo 
--with-sssd --with-sssd-lib=/usr/lib/x86_64-linux-gnu --with-selinux
Sudoers policy plugin version 1.8.9p5
Sudoers file grammar version 43
Sudoers path: 

[jira] [Commented] (METRON-885) Metron Travis builds do not build RPMs

2019-05-22 Thread Nick Allen (JIRA)


[ 
https://issues.apache.org/jira/browse/METRON-885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16846012#comment-16846012
 ] 

Nick Allen commented on METRON-885:
---

I am in favor of doing this.  I am really hoping we can leverage Travis for 
this rather than bringing in an alternate CI mechanism.  See recent dev list 
thread...

[https://lists.apache.org/thread.html/5a8e810804321162d06b7a5d38020b48dd47822751495234527e9338@%3Cdev.metron.apache.org%3E]

> Metron Travis builds do not build RPMs
> --
>
> Key: METRON-885
> URL: https://issues.apache.org/jira/browse/METRON-885
> Project: Metron
>  Issue Type: Bug
>Reporter: Otto Fowler
>Priority: Major
>
> Our Travis builds do not build RPMs, but they should, although I am not sure 
> how to get that working, as our Travis infrastructure is Ubuntu-based.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (METRON-2097) Install Metron MPack in Ambari 2.7.3.0

2019-04-29 Thread Nick Allen (JIRA)
Nick Allen created METRON-2097:
--

 Summary: Install Metron MPack in Ambari 2.7.3.0
 Key: METRON-2097
 URL: https://issues.apache.org/jira/browse/METRON-2097
 Project: Metron
  Issue Type: Sub-task
Reporter: Nick Allen
Assignee: Nick Allen


We need the Metron MPack to work with Ambari 2.7.3.0 so that our users can 
install Metron via the Mpack on HDP 3.1.0.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

