What we prototyped was configuring, via Spring, the list of IPs to ignore.
A given installation seemed to have a constant address for the bridge
network, so this approach was reliable once you know the bridge IPs.  It is
also a more general solution.
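
As a rough illustration only, something like the sketch below is what I have
in mind: a TcpCommunicationSpi subclass that exposes the ignore list as a
bean property, so it can be set from a Spring XML file like any other SPI
setting.  The class name, the property name and the wiring here are
hypothetical, not the actual prototype, and the internal hook that drops the
ignored addresses before connecting is omitted.

    import java.util.Collection;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

    public class IgnoringTcpCommunicationSpi extends TcpCommunicationSpi {
        /** IP addresses that must never be used to reach remote nodes. */
        private Set<String> ignoredAddresses = Collections.emptySet();

        /** Spring-style setter, so the list can be configured declaratively. */
        public void setIgnoredAddresses(Collection<String> addrs) {
            ignoredAddresses = new HashSet<>(addrs);
        }

        /** True if the given host address was configured as unusable. */
        protected boolean isIgnored(String hostAddr) {
            return ignoredAddresses.contains(hostAddr);
        }

        // The filtering hook that consults isIgnored() before the SPI dials
        // a remote address is omitted here; it depends on SPI internals.

        public static void main(String[] args) {
            IgnoringTcpCommunicationSpi commSpi = new IgnoringTcpCommunicationSpi();
            commSpi.setIgnoredAddresses(Collections.singleton("172.17.0.1"));

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setCommunicationSpi(commSpi);

            try (Ignite ignite = Ignition.start(cfg)) {
                // Node starts with the docker bridge address excluded.
            }
        }
    }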

When the container starts, you get a list of IP addresses from the kernel.
At that point it is impossible to know from inside the container which of
those addresses can be used by other Ignite nodes, at least without
external information.  For example, if I have Ignite running on an AWS
instance that has an internal and an external address, it is impossible to
know which address will be able to reach the other nodes unless you are
told.  So perhaps we should have used a list of ranges rather than a list
of individual addresses in our prototype.
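
If we went with ranges, the matching itself is simple.  Something like the
following would do; the class name and the ranges are only placeholders for
whatever would actually be configured:

    import java.net.InetAddress;
    import java.net.UnknownHostException;
    import java.util.Arrays;
    import java.util.List;

    public class CidrFilter {
        /** True if addr falls inside the given IPv4 CIDR block, e.g. "172.17.0.0/16". */
        static boolean inRange(InetAddress addr, String cidr) throws UnknownHostException {
            String[] parts = cidr.split("/");
            int prefix = Integer.parseInt(parts[1]);
            int net = toInt(InetAddress.getByName(parts[0]));
            int ip = toInt(addr);
            int mask = prefix == 0 ? 0 : -1 << (32 - prefix);
            return (ip & mask) == (net & mask);
        }

        /** Packs an IPv4 address into a single int for mask arithmetic. */
        static int toInt(InetAddress addr) {
            byte[] b = addr.getAddress();
            return ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16)
                 | ((b[2] & 0xFF) << 8) | (b[3] & 0xFF);
        }

        public static void main(String[] args) throws UnknownHostException {
            List<String> ignoredRanges = Arrays.asList("172.17.0.0/16", "10.0.0.0/8");
            InetAddress candidate = InetAddress.getByName("172.17.0.1");

            boolean skip = false;
            for (String range : ignoredRanges)
                skip |= inRange(candidate, range);

            System.out.println(candidate + (skip ? " would be ignored" : " would be kept"));
        }
    }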

For the docker sub-case where all the nodes seem to get the same useless
address, I would think we can ignore IP address/port pairs that the
current node is also advertising.  That does not generalize to other
cases where the kernel provides unusable addresses.  I didn't quite
understand why a connection attempt to a port we are advertising would
need to time out rather than being rejected immediately, unless Ignite
has explicit code to detect and ignore a self message.  But if there is
an IP:port pair that the current node is claiming as an endpoint, it
should not try to use that IP:port to connect to other nodes.
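
To make the idea concrete, a sketch of the self-endpoint check is below.
The endpoints are hard-coded purely for illustration; in reality they would
come from whatever the local node actually advertises to discovery:

    import java.net.InetSocketAddress;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    public class SelfEndpointFilter {
        public static void main(String[] args) {
            // Endpoints this node advertises (illustrative values only).
            Set<InetSocketAddress> selfEndpoints = new HashSet<>(Arrays.asList(
                new InetSocketAddress("172.17.0.1", 47100),
                new InetSocketAddress("10.0.1.12", 47100)));

            // Candidate endpoints for some remote node, as read from discovery.
            List<InetSocketAddress> remoteCandidates = Arrays.asList(
                new InetSocketAddress("172.17.0.1", 47100),  // useless bridge address
                new InetSocketAddress("10.0.1.37", 47100));  // actually reachable

            // Keep only candidates that are not IP:port pairs we claim ourselves.
            List<InetSocketAddress> usable = remoteCandidates.stream()
                .filter(ep -> !selfEndpoints.contains(ep))
                .collect(Collectors.toList());

            System.out.println("Will try: " + usable);
        }
    }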

On Tue, Nov 20, 2018 at 2:27 PM David Harvey <syssoft...@gmail.com> wrote:

> What we prototyped was configuring via spring the list of IPs to ignore,
> because a given installation seemed to have a constant address for the
> bridge network, and this approach was reliable, once you know the bridge
> IPs.
>
> When the container starts, you get a list of IP addresses from the
> kernel.   At that point it is impossible to know from inside the container
> which of those addresses can be used by other ignite nodes, at least
> without external information.   Similarly, if I have an AWS instance
>
> I am wondering
>
>
>
> On Tue, Nov 20, 2018 at 2:08 PM Alexey Goncharuk <
> alexey.goncha...@gmail.com> wrote:
>
>> Hi David,
>>
>> This is something we have also encountered recently and I was wondering
>> how this can be mitigated in a general case. Do you know if an
>> application can detect that it is being run in a docker container and
>> add the corresponding list of bridge IPs automatically on start? If so,
>> I think we can add this to Ignite so that it works out of the box.
>>
>> --AG
>>
>>
>> вт, 20 нояб. 2018 г. в 19:58, David Harvey <syssoft...@gmail.com>:
>>
>> > We see some annoying behavior with S3 discovery because Ignite will
>> > push to the discovery S3 bucket the IP address of the local docker
>> > bridge network (172.17.0.1, in our case).   Basically, each node when
>> > coming online tries that address first, and has to go through a
>> > network timeout to recover.
>> >
>> > To address this, we have prototyped a simple extension to
>> > TcpCommunicationSpi to allow configuration of a list of IP addresses
>> > that should be completely ignored, and will create a ticket and
>> > generate a pull request for it.
>> >
>> > If there is a better approach, please let us know.
>> >
>> > Thanks
>> > Dave Harvey
>> >
>>
>
