Yes, the plan is to decommission all VMs in DigitalOcean. Before doing so, 
all data will be re-populated during clustering into AWS, then the 
DigitalOcean VMs will be retired one by one. In AWS I have private & public 
IPs, while in DigitalOcean all IPs are public. Thanks.
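
For reference, here is roughly what I think each AWS node's elasticsearch.yml 
should look like during the migration. This is only a sketch: the addresses 
are taken from the examples earlier in this thread, and the key assumption is 
that on EC2 the public IP is NATed and not present on the interface, so it can 
be published but not bound.

```yaml
# elasticsearch.yml sketch for an AWS node during the migration (example values).
# Bind to the private interface; publish the public IP so the DigitalOcean
# nodes, which only have public IPs, can reach this node.
network.bind_host: 10.0.0.111        # private IP actually on eth0 (bindable)
network.publish_host: 52.74.94.198   # public IP, NATed by EC2 (not bindable)

# Unicast discovery: private IPs for AWS peers, public IPs for the
# DigitalOcean peers (x.x.x.x are placeholders, as above).
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.187:9300", "x.x.x.x:9300"]
discovery.zen.ping.timeout: 180s
discovery.zen.minimum_master_nodes: 2
```

The same logic would explain the BindException below: http.host set to the 
public IP fails because that address is not assigned to any local interface.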

On Sunday, March 29, 2015 at 3:48:05 PM UTC+8, Mark Walkom wrote:
>
> You're building a cluster between your AWS instances and your DigitalOcean 
> ones, am I understanding this correctly?
>
> On 29 March 2015 at 11:56, Azman Ahmad <lesdesp...@gmail.com> wrote:
>
>> unicast. 
>>
>> here 
>> discovery.zen.ping.multicast.enabled: false 
>> discovery.zen.ping.unicast.hosts: ["10.0.0.187:9300","x.x.x.x:9300","x.x.x.x:9300"] 
>> # x.x.x.x = VMs in DigitalOcean 
>> #discovery.zen.ping.unicast.hosts: ["10.0.0.187"] 
>> discovery.zen.ping.timeout: 180s 
>> discovery.zen.minimum_master_nodes: 2 
>>
>> Just to share the errors I got when I played with the yml: 
>>
>> network.bind_host: 10.0.0.111 
>> network.publish_host: 52.74.94.198 
>> network.host: 127.0.0.1 
>>
>> log : 
>> detected_master [I love S & DataStoreVM2]  [node1]failed to connect to 
>> node [node1][.... 
>> [transport] [DataStoreVM2] bound_address {inet[/10.0.0.111:9300]}, 
>> publish_address {inet[/52.74.94.198:9300]} 
>> [http] [DataStoreVM2] bound_address {inet[/10.0.0.111:9200]}, 
>> publish_address {inet[/127.0.0.1:9200]} 
>> ,,,,DataStoreVM2] failed to reconnect to node [DataSto.... 
>>
>>
>> network.bind_host: 127.0.0.1 
>> network.publish_host: 52.74.94.198 
>> network.host: 10.0.0.111 
>> Log= 
>> failed to send join request to master [[I love Siti.... 
>> [transport] [DataStoreVM2] bound_address {inet[/127.0.0.1:9300]}, 
>> publish_address {inet[/52.74.94.198:9300] 
>> [http] [DataStoreVM2] bound_address {inet[/127.0.0.1:9200]}, 
>> publish_address {inet[/10.0.0.111:9200] 
>>
>>
>> network.bind_host: 127.0.0.1 
>> network.publish_host: 10.0.0.111 
>> network.host: 10.0.0.111 
>> Log= 
>> DataStoreVM2] failed to reconnect to node [DataSto.... 
>> bound_address {inet[/127.0.0.1:9300]}, publish_address {inet[/10.0.0.111:9300]} 
>> bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[/10.0.0.111:9200]} 
>>
>>
>> network.bind_host: 127.0.0.1 
>> network.publish_host: 52.74.94.198 
>> network.host: 10.0.0.111 
>> http.host: 52.74.94.198 
>> Log = 
>> - BindHttpException[Failed to bind to [9200]] 
>>         ChannelException[Failed to bind to: /52.74.94.198:9200] 
>>                 BindException[Cannot assign requested address] 
>>
>> On Sunday, March 29, 2015 at 5:09:26 AM UTC+8, Mark Walkom wrote:
>>>
>>> Are you using multicast or unicast?
>>>
>>> On 28 March 2015 at 21:21, Azman Ahmad <lesdesp...@gmail.com> wrote:
>>>
>>>> Both nodes are running on AWS. Both sides are pingable and curlable. I 
>>>> checked resolved threads in this forum but the cases are not similar at 
>>>> all. Badly need help as the project meeting deadline is approaching. 
>>>> Please.... 
>>>>
>>>> yml settings: 
>>>> network.publish_host: 52.74.94.198 
>>>> network.host: 10.0.0.111 
>>>>
>>>> /etc/hosts: 
>>>> 127.0.0.1 localhost 
>>>> 10.0.0.246 DataStoreVM1 
>>>> 10.0.0.111 DataStoreVM2 
>>>>
>>>>
>>>> [2015-03-27 13:12:21,239][WARN ][cluster.service          ] 
>>>> [DataStoreVM2] failed to connect to node [[DataStoreVM1][
>>>> X6Teb0UpQ1uCV31tVRCQ6g][ip-10-0-0-246][inet[/52.74.93.147:
>>>> 9300]]{master=false}] 
>>>> org.elasticsearch.transport.ConnectTransportException: 
>>>> [DataStoreVM1][inet[/52.74.93.147:9300]] connect_timeout[30s] 
>>>>         at org.elasticsearch.transport.netty.NettyTransport.
>>>> connectToChannels(NettyTransport.java:807) 
>>>>         at org.elasticsearch.transport.netty.NettyTransport.
>>>> connectToNode(NettyTransport.java:741) 
>>>>         at org.elasticsearch.transport.netty.NettyTransport.
>>>> connectToNode(NettyTransport.java:714) 
>>>>         at org.elasticsearch.transport.TransportService.connectToNode(
>>>> TransportService.java:150) 
>>>>         at org.elasticsearch.cluster.service.InternalClusterService$
>>>> UpdateTask.run(InternalClusterService.java:411) 
>>>>         at org.elasticsearch.common.util.concurrent.
>>>> PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(
>>>> PrioritizedEsThreadPoolExecutor.java:153) 
>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>>>> ThreadPoolExecutor.java:1145) 
>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>>>> ThreadPoolExecutor.java:615) 
>>>>         at java.lang.Thread.run(Thread.java:745) 
>>>> Caused by: org.elasticsearch.common.netty.channel.ConnectTimeoutException: 
>>>> connection timed out: /52.74.93.147:9300 
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.
>>>> NioClientBoss.processConnectTimeout(NioClientBoss.java:139) 
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.
>>>> NioClientBoss.process(NioClientBoss.java:83) 
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.
>>>> AbstractNioSelector.run(AbstractNioSelector.java:318) 
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.
>>>> NioClientBoss.run(NioClientBoss.java:42) 
>>>>         at org.elasticsearch.common.netty.util.
>>>> ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) 
>>>>         at org.elasticsearch.common.netty.util.internal.
>>>> DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) 
>>>>         ... 3 more 
>>>> [2015-03-27 13:12:51,037][WARN ][cluster.service          ] 
>>>> [DataStoreVM2] failed to reconnect to node [DataStoreVM2][dAjDjc-
>>>> 0SxGG5JdDundhzw][ip-10-0-0-111][inet[/52.74.94.198:9300]]{data=false, 
>>>> master=false} 
>>>> org.elasticsearch.transport.ConnectTransportException: 
>>>> [DataStoreVM2][inet[/52.74.94.198:9300]] connect_timeout[30s] 
>>>>         at org.elasticsearch.transport.netty.NettyTransport.
>>>> connectToChannels(NettyTransport.java:807) 
>>>>         at org.elasticsearch.transport.netty.NettyTransport.
>>>> connectToNode(NettyTransport.java:741) 
>>>>         at org.elasticsearch.transport.netty.NettyTransport.
>>>> connectToNode(NettyTransport.java:714) 
>>>>         at org.elasticsearch.transport.TransportService.connectToNode(
>>>> TransportService.java:150) 
>>>>         at org.elasticsearch.cluster.service.InternalClusterService$
>>>> ReconnectToNodes.run(InternalClusterService.java:521) 
>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
>>>> ThreadPoolExecutor.java:1145) 
>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>>>> ThreadPoolExecutor.java:615) 
>>>>         at java.lang.Thread.run(Thread.java:745) 
>>>> Caused by: org.elasticsearch.common.netty.channel.ConnectTimeoutException: 
>>>> connection timed out: /52.74.94.198:9300 
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.
>>>> NioClientBoss.processConnectTimeout(NioClientBoss.java:139) 
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.
>>>> NioClientBoss.process(NioClientBoss.java:83) 
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.
>>>> AbstractNioSelector.run(AbstractNioSelector.java:318) 
>>>>         at org.elasticsearch.common.netty.channel.socket.nio.
>>>> NioClientBoss.run(NioClientBoss.java:42) 
>>>>         at org.elasticsearch.common.netty.util.
>>>> ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) 
>>>>         at org.elasticsearch.common.netty.util.internal.
>>>> DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) 
>>>>
>>>> ubuntu@ip-10-0-0-111:~$ ifconfig 
>>>> eth0      Link encap:Ethernet  HWaddr 02:db:17:9f:9a:b4   
>>>>           inet addr:10.0.0.111  Bcast:10.0.0.255  Mask:255.255.255.0 
>>>>           inet6 addr: fe80::db:17ff:fe9f:9ab4/64 Scope:Link 
>>>>           UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1 
>>>>           RX packets:161001 errors:0 dropped:0 overruns:0 frame:0 
>>>>           TX packets:135696 errors:0 dropped:0 overruns:0 carrier:0 
>>>>           collisions:0 txqueuelen:1000 
>>>>           RX bytes:146146281 (146.1 MB)  TX bytes:12125096 (12.1 MB) 
>>>>
>>>> lo        Link encap:Local Loopback   
>>>>           inet addr:127.0.0.1  Mask:255.0.0.0 
>>>>           inet6 addr: ::1/128 Scope:Host 
>>>>           UP LOOPBACK RUNNING  MTU:65536  Metric:1 
>>>>           RX packets:2304 errors:0 dropped:0 overruns:0 frame:0 
>>>>           TX packets:2304 errors:0 dropped:0 overruns:0 carrier:0 
>>>>           collisions:0 txqueuelen:0 
>>>>           RX bytes:171445 (171.4 KB)  TX bytes:171445 (171.4 KB) 
>>>>
>>>> -- 
>>>> You received this message because you are subscribed to the Google 
>>>> Groups "elasticsearch" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>> an email to elasticsearc...@googlegroups.com.
>>>> To view this discussion on the web visit 
>>>> https://groups.google.com/d/msgid/elasticsearch/f13cef71-b3b5-4f17-893f-5edf5ddfd3fa%40googlegroups.com. 
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>
>
>
