Fresh start. I decided it made sense to start from scratch, so I downloaded the latest OVA and imported it into my VMware environment.
I modified the networking and followed the instructions here: https://tom-henderson.github.io/2015/04/15/graylog.html. It's not rocket science, right? The moment I create the first input I get this error:

Input 58261f5f6dd54106130022db has failed to start on node 5a7bd7b6-e75d-4894-ab91-85ef94e9108d for this reason: Address already in use.

I tried both 0.0.0.0 and 127.0.0.1, same difference. What am I missing? I'm at my wits' end here.

_________________________________________________________

On Friday, November 11, 2016 at 11:03:59 AM UTC-5, Jochen Schalanda wrote:
>
> Hi Ed,
>
> If it's one of the official OVAs, you might want to read
> http://docs.graylog.org/en/2.1/pages/configuration/graylog_ctl.html and
> run graylog-ctl reconfigure (after you've checked all settings).
>
> Cheers,
> Jochen
>
> On Friday, 11 November 2016 15:09:47 UTC+1, Ed Berlot wrote:
>>
>> Someone installed it before I got here; I was just handed the project,
>> but to the best of my knowledge it's a prebuilt VM.
>>
>> On Friday, November 11, 2016 at 5:51:52 AM UTC-5, Jochen Schalanda wrote:
>>>
>>> Hi Ed,
>>>
>>> As you might have already seen in your Elasticsearch logs, it's unable
>>> to bind to the given IP address and port. Fix those in the Elasticsearch
>>> configuration.
>>>
>>> You should also consider using the official Graylog virtual machine
>>> appliances, which free you from the burden of setting everything up yourself:
>>> http://docs.graylog.org/en/2.1/pages/installation/virtual_machine_appliances.html
>>>
>>> Cheers,
>>> Jochen
>>>
>>> On Thursday, 10 November 2016 19:02:57 UTC+1, Ed Berlot wrote:
>>>>
>>>> Busy morning. I changed the IP to the server's address; that also failed.
>>>> Then I changed it to 0.0.0.0, same issue.
>>>> I finally got to the logs; they aren't available without making some
>>>> permission changes.
>>>>
>>>> I figured the easiest way to go about this is to reboot and let
>>>> everything start "fresh".
>>>>
>>>> From the Elasticsearch log:
>>>>
>>>> BindTransportException[Failed to bind to [9300-9400]]; nested:
>>>> ChannelException[Failed to bind to: /10.60.10.158:9400]; nested:
>>>> BindException[Cannot assign requested address];
>>>>     at org.elasticsearch.transport.netty.NettyTransport.bindToPort(NettyTransport.java:478)
>>>>     at org.elasticsearch.transport.netty.NettyTransport.bindServerBootstrap(NettyTransport.java:440)
>>>>     at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:321)
>>>>     at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68)
>>>>     at org.elasticsearch.transport.TransportService.doStart(TransportService.java:182)
>>>>     at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68)
>>>>     at org.elasticsearch.node.Node.start(Node.java:278)
>>>>     at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:206)
>>>>     at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:272)
>>>>     at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
>>>> Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /10.60.10.158:9400
>>>>     at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
>>>>     at org.elasticsearch.transport.netty.NettyTransport$1.onPortNumber(NettyTransport.java:460)
>>>>     at org.elasticsearch.common.transport.PortsRange.iterate(PortsRange.java:69)
>>>>     at org.elasticsearch.transport.netty.NettyTransport.bindToPort(NettyTransport.java:456)
>>>>     ... 9 more
>>>> Caused by: java.net.BindException: Cannot assign requested address
>>>>     at sun.nio.ch.Net.bind0(Native Method)
>>>>     at sun.nio.ch.Net.bind(Net.java:433)
>>>>     at sun.nio.ch.Net.bind(Net.java:425)
>>>>     at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>>>>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>>>>     at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
>>>>     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
>>>>     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
>>>>     at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
>>>>     at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
>>>>     at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
>>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>>     at java.lang.Thread.run(Thread.java:745)
>>>> [2016-11-10 17:25:33,717][INFO ][node ] [Ritchie Gilmore] stopping ...
>>>> [2016-11-10 17:25:33,720][INFO ][node ] [Ritchie Gilmore] stopped
>>>> [2016-11-10 17:25:33,721][INFO ][node ] [Ritchie Gilmore] closing ...
>>>> [2016-11-10 17:25:33,728][INFO ][node ] [Ritchie Gilmore] closed
>>>> [2016-11-10 17:25:35,386][INFO ][node ] [Magilla] version[2.3.1], pid[12769], build[bd98092/2016-04-04T12:25:05Z]
>>>> [2016-11-10 17:25:35,387][INFO ][node ] [Magilla] initializing ....
>>>>
>>>> _____________________________________________________________________________
>>>>
>>>> From server\current:
>>>>
>>>> 2016-11-10_17:55:24.25567 2016-11-10 17:55:24,255 WARN : org.graylog2.outputs.BlockingBatchedESOutput - Error while waiting for healthy Elasticsearch cluster. Not flushing.
>>>> 2016-11-10_17:55:24.25719 java.util.concurrent.TimeoutException: Elasticsearch cluster didn't get healthy within timeout
>>>> 2016-11-10_17:55:24.25925     at org.graylog2.indexer.cluster.Cluster.waitForConnectedAndHealthy(Cluster.java:179) ~[graylog.jar:?]
>>>> 2016-11-10_17:55:24.26310     at org.graylog2.indexer.cluster.Cluster.waitForConnectedAndHealthy(Cluster.java:184) ~[graylog.jar:?]
>>>> 2016-11-10_17:55:24.26504     at org.graylog2.outputs.BlockingBatchedESOutput.flush(BlockingBatchedESOutput.java:112) [graylog.jar:?]
>>>> 2016-11-10_17:55:24.26626     at org.graylog2.outputs.BlockingBatchedESOutput.write(BlockingBatchedESOutput.java:105) [graylog.jar:?]
>>>> 2016-11-10_17:55:24.26828     at org.graylog2.buffers.processors.OutputBufferProcessor$1.run(OutputBufferProcessor.java:189) [graylog.jar:?]
>>>> 2016-11-10_17:55:24.26946     at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) [graylog.jar:?]
>>>> 2016-11-10_17:55:24.27157     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_77]
>>>> 2016-11-10_17:55:24.27270     at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_77]
>>>> 2016-11-10_17:55:24.27453     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_77]
>>>> 2016-11-10_17:55:24.27605     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_77]
>>>> 2016-11-10_17:55:24.27766     at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
>>>> 2016-11-10_17:55:43.10244 2016-11-10 17:55:43,102 INFO : org.graylog2.periodical.IndexerClusterCheckerThread - Indexer not fully initialized yet. Skipping periodic cluster check.
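Following Jochen's pointer, the bind failure usually traces back to a network.host (or transport port) setting pinned to the old IP in elasticsearch.yml. A sketch of the relevant settings, with the caveat that the values are illustrative and that on the OVA graylog-ctl reconfigure regenerates this file from its own configuration, so it is better to adjust the graylog-ctl settings and rerun reconfigure than to edit the file directly:

```yaml
# elasticsearch.yml (illustrative; the appliance manages this file itself)

# Bind to all interfaces, or set this to the VM's *current* address:
network.host: 0.0.0.0

# The transport port range the log shows Elasticsearch failing to bind:
transport.tcp.port: 9300-9400
```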
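The two errors in this thread look similar but have different causes. Elasticsearch's BindException[Cannot assign requested address] means the configured bind IP (here 10.60.10.158, presumably left over from before the networking change) is not assigned to any interface on the VM. The input's "Address already in use" means the port is held by another process, often a half-started earlier instance of the same service. A minimal Python sketch reproducing both, assuming 192.0.2.1 (a reserved TEST-NET address) is not local to the machine:

```python
import errno
import socket

# 1. "Cannot assign requested address" (EADDRNOTAVAIL): the bind address
#    is not configured on this host. 192.0.2.1 is assumed not to be a
#    local address here, mirroring a network.host pinned to an IP the
#    VM no longer owns.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("192.0.2.1", 0))
    not_local_error = None
except OSError as e:
    not_local_error = e.errno
finally:
    s.close()

# 2. "Address already in use" (EADDRINUSE): some other process (here,
#    our own first socket) already holds the exact address and port.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))          # kernel picks a free port
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # bind the same port again
    in_use_error = None
except OSError as e:
    in_use_error = e.errno
finally:
    second.close()
    first.close()

print(errno.errorcode[not_local_error], errno.errorcode[in_use_error])
```

For the input error, `sudo netstat -tlnp` (or `ss -tlnp`) on the appliance shows which process currently holds the contested port.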
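The later Graylog trace ("Elasticsearch cluster didn't get healthy within timeout") is a downstream symptom: BlockingBatchedESOutput will not flush until the cluster reports a usable status, which never happens while Elasticsearch cannot bind its transport port. Once Elasticsearch starts, you can verify with curl http://127.0.0.1:9200/_cluster/health (stock HTTP port assumed). A small sketch of the check Graylog is effectively waiting on, run against an illustrative sample response rather than a live cluster:

```python
import json

# Illustrative sample of what GET /_cluster/health returns once
# Elasticsearch is actually up; a live check would fetch this from
# http://127.0.0.1:9200/_cluster/health (stock port assumed).
sample = ('{"cluster_name":"graylog","status":"green",'
          '"number_of_nodes":1,"active_primary_shards":4}')

health = json.loads(sample)

def cluster_healthy(health):
    # Graylog treats the cluster as usable once status is green or
    # yellow; "red" (or no response at all) keeps the output blocked.
    return health.get("status") in ("green", "yellow")

print(health["status"], "healthy:", cluster_healthy(health))
```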
