That's great, Nick, thank you!
You were right: by default it is set to
"es.ip": "{{ es_url }}", which expands to
http://<elasticsearch_master_hostname>:9300
I have changed it to:
"es.ip": "<elasticsearch_master_hostname>",
"es.port": "9300",
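For anyone else hitting the same UnknownHostException: a minimal Python sketch of the check I effectively did by hand (`to_es_ip` is a hypothetical helper name, not part of Metron), stripping an accidental protocol prefix so only the bare hostname is left for es.ip:

```python
from urllib.parse import urlparse

def to_es_ip(value):
    # Accept "http://node1:9300", "node1:9300", or "node1";
    # return just the hostname, which is what "es.ip" expects.
    parsed = urlparse(value if "//" in value else "//" + value)
    return parsed.hostname

print(to_es_ip("http://node1:9300"))  # node1
```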
I have briefly tested end-to-end, and Metron is working with HDP 2.5 on
bare metal.
However, I encountered several issues during installation; they are
listed below. Do any of them look worth fixing, and what can I do to
help fix them?
a) For CentOS 6 I had to increase the process limits for the storm user
in /etc/security/limits.conf:
storm soft nproc 257597
storm hard nproc 257597
Otherwise topologies would just crash.
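For reference, those entries follow the simple "domain type item value" layout of limits.conf; a quick Python sketch (`parse_limits_line` is a hypothetical helper, just for illustration) of how such a line breaks down:

```python
def parse_limits_line(line):
    # /etc/security/limits.conf entry format: <domain> <type> <item> <value>
    domain, ltype, item, value = line.split()
    return {"domain": domain, "type": ltype, "item": item, "value": int(value)}

print(parse_limits_line("storm soft nproc 257597"))
```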
b) I had to disable IPv6 on my node, because by default ZooKeeper binds
to IPv6 in preference to IPv4 and the rest of the services fail to
connect to it:
echo -e '\n# Disabling IPv6\nnet.ipv6.conf.all.disable_ipv6 = 1\nnet.ipv6.conf.default.disable_ipv6 = 1' >> /etc/sysctl.conf
echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
echo 1 > /proc/sys/net/ipv6/conf/default/disable_ipv6
c) Kibana requires a fix in
/var/lib/ambari-server/resources/mpacks/metron-ambari.mpack-1.0.0.0/common-services/KIBANA/4.5.1/package/scripts/kibana_master.py
from
File("{}/kibana.yml".format(params.conf_dir),
to
File("{0}/kibana.yml".format(params.conf_dir),
As it was, Kibana failed during start with the error below (the format
string was probably written and tested against Python 2.7+, where
auto-numbered {} fields are supported, but they are not supported by
Python 2.6):
ValueError: zero length field name in format
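For context, Python 2.6's str.format requires explicit field indices; bare {} auto-numbering only arrived in 2.7. A quick illustration (the conf_dir value is a placeholder, not the real Kibana path on every install):

```python
conf_dir = "/usr/share/kibana/config"  # placeholder path for illustration

# "{0}" names the argument explicitly, so it works on Python 2.6 as well;
# bare "{}" raises "ValueError: zero length field name in format" on 2.6.
path = "{0}/kibana.yml".format(conf_dir)
print(path)  # /usr/share/kibana/config/kibana.yml
```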
d) Corrected the GC log path in /etc/sysconfig/elasticsearch, as it was:
/var/log/elasticsearchelasticsearch_gc.log
should be:
/var/log/elasticsearch/elasticsearch_gc.log
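That missing separator looks like a plain string-concatenation bug; joining path components instead (sketched below in Python, just to show the pattern) avoids that whole class of error:

```python
import os

log_dir = "/var/log/elasticsearch"
# os.path.join inserts the "/" that plain concatenation dropped
gc_log = os.path.join(log_dir, "elasticsearch_gc.log")
print(gc_log)  # /var/log/elasticsearch/elasticsearch_gc.log
```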
And again, thank you for the prompt help!
- Dima
On 11/09/2016 04:27 PM, Nick Allen wrote:
> I would guess that your global configuration value for 'es.ip' looks
> something like "http://..." which is incorrect. It should just be the
> hostname or IP address with no protocol specifier.
>
> For example, by default the global properties look like the following for
> the Quick Dev environment.
>
> {
> "es.clustername": "metron",
> "es.ip": "node1",
> "es.port": "9300",
> "es.date.format": "yyyy.MM.dd.HH"
> }
>
>
> On Wed, Nov 9, 2016 at 8:18 AM, Dima Kovalyov <[email protected]>
> wrote:
>
>> Thank you Jon,
>>
>> I have resolved it by increasing "max user processes" for user storm using:
>> # su - storm
>> $ ulimit -u 257597
>>
>> Topologies are working without crashes now, however in Indexing topology
>> indexingBolt now gives me this error:
>> [ERROR] Async loop died!
>> java.lang.RuntimeException: java.lang.RuntimeException:
>> java.net.UnknownHostException: http: unknown error
>> ...
>> [ERROR] Halting process: ("Worker died")
>> java.lang.RuntimeException: ("Worker died")
>>
>> And this one is a graveyard because there is nothing about it in google.
>> I have attached worker.log.
>>
>> Data is not appearing in ElasticSearch. I wonder, maybe it is caused by
>> ElasticSearch poorly configured?
>>
>> Please assist.
>> Thank you.
>>
>> - Dima
>>
>> On 11/08/2016 11:20 PM, [email protected] wrote:
>>> Hi Dima,
>>>
>>> You probably want to increase the -Xmx setting in "worker.childopts",
>> which
>>> is available in ambari under $Server:8080/#/main/services/STORM/configs.
>>>
>>> Jon
>>>
>>> On Tue, Nov 8, 2016 at 2:47 PM DimaKovalyov <[email protected]> wrote:
>>>
>>>> Github user DimaKovalyov commented on the issue:
>>>>
>>>> https://github.com/apache/incubator-metron/pull/318
>>>>
>>>> Thank you James,
>>>>
>>>> > Once you have data in your kafka queue this should go away.
>>>> That is true! Once I create a topic and stream data through it the
>>>> error is gone.
>>>>
>>>> My data is now going to enrichment and both bolts and spouts (all of
>>>> them) are having this weird error:
>>>> `java.lang.OutOfMemoryError: unable to create new native thread at
>>>> java.lang.Thread.start0(Native Method) at
>>>> java.lang.Thread.start(Thread.java:714) at
>>>> org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:417) at
>>>> org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450) at
>>>> ...
>>>> java.lang.Thread.run(Thread.java:745)`
>>>>
>>>> And supervisor crashes also after 5-10 minutes with:
>>>> ```
>>>> 2016-11-08 14:25:56.125 o.a.s.event [ERROR] Error when processing event
>>>> java.lang.OutOfMemoryError: unable to create new native thread
>>>>         at java.lang.Thread.start0(Native Method)
>>>>         at java.lang.Thread.start(Thread.java:714)
>>>>         at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
>>>>         at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1368)
>>>>         at java.lang.UNIXProcess.initStreams(UNIXProcess.java:289)
>>>>         at java.lang.UNIXProcess.lambda$new$2(UNIXProcess.java:259)
>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:258)
>>>>         at java.lang.ProcessImpl.start(ProcessImpl.java:134)
>>>>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
>>>>         at java.lang.Runtime.exec(Runtime.java:620)
>>>>         at org.apache.storm.shade.org.apache.commons.exec.launcher.Java13CommandLauncher.exec(Java13CommandLauncher.java:58)
>>>>         at org.apache.storm.shade.org.apache.commons.exec.DefaultExecutor.launch(DefaultExecutor.java:254)
>>>>         at org.apache.storm.shade.org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:319)
>>>>         at org.apache.storm.shade.org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:160)
>>>>         at org.apache.storm.shade.org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:147)
>>>>         at org.apache.storm.util$exec_command_BANG_.invoke(util.clj:402)
>>>>         at org.apache.storm.util$send_signal_to_process.invoke(util.clj:429)
>>>>         at org.apache.storm.util$kill_process_with_sig_term.invoke(util.clj:454)
>>>>         at org.apache.storm.daemon.supervisor$shutdown_worker.invoke(supervisor.clj:290)
>>>>         at org.apache.storm.daemon.supervisor$sync_processes.invoke(supervisor.clj:435)
>>>>         at clojure.core$partial$fn__4527.invoke(core.clj:2492)
>>>>         at org.apache.storm.event$event_manager$fn__7248.invoke(event.clj:40)
>>>>         at clojure.lang.AFn.run(AFn.java:22)
>>>>         at java.lang.Thread.run(Thread.java:745)
>>>>
>>>> ```
>>>> Even though I have more than 30 GB RAM available. Do I need to tune
>>>> Storm for better memory usage?
>>>> Please advise.
>>>>
>>>> - Dima
>>>>
>>>>
>>>> ---
>>>> If your project is set up for it, you can reply to this email and have
>> your
>>>> reply appear on GitHub as well. If your project does not have this
>> feature
>>>> enabled and wishes so, or if the feature is enabled but not working,
>> please
>>>> contact infrastructure at [email protected] or file a JIRA
>> ticket
>>>> with INFRA.
>>>> ---
>>>>
>>
>