On Tue, Mar 14, 2017 at 3:42 PM, Yrineu Rodrigues <
[email protected]> wrote:

> Hi ODL team,
>
> I would like to open a discussion about two issues regarding the ODL cluster:
>
> 1 - I know that the minimum number of nodes in an ODL cluster is 3, but
> what is the maximum? Is it possible to connect 10+ nodes in a single
> cluster? Has anyone tested this before?
>

There is no hard limit. I'm not aware of anyone deploying or testing 10+
nodes, but it should work.
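For reference, cluster membership is driven by the Akka configuration in
configuration/initial/akka.conf, and nothing there caps the member count — you
just list the seed nodes and give each node its own role. A minimal sketch
(hostnames and member names are hypothetical, and the exact address scheme
varies by ODL release):

```hocon
# configuration/initial/akka.conf (fragment) -- hypothetical addresses.
# Each node in the cluster gets its own hostname and a unique member role.
odl-cluster-data {
  akka {
    remote.netty.tcp {
      hostname = "10.0.0.1"   # this node's own address
      port = 2550
    }
    cluster {
      # Every member points at the same seed-node list; nodes beyond the
      # seeds simply join through them.
      seed-nodes = [
        "akka.tcp://[email protected]:2550",
        "akka.tcp://[email protected]:2550",
        "akka.tcp://[email protected]:2550"
      ]
      roles = ["member-1"]    # unique per node: member-2, member-3, ...
    }
  }
}
```

Additional nodes (member-4, member-5, ...) only need their own hostname and
role. Keeping an odd member count is advisable, since the datastore shards use
Raft and need a majority of replicas to elect a leader.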


> 2 - Suppose we have 4 ODL instances (M1, M2, M3 and M4). Is it possible
> to configure the ODL cluster as follows?
>      M1-M2 // M1 and M2 are located on a server in Brazil
>      M2-M3
>      M3-M4 // M3 and M4 are located on a server in the USA
>
> * I want only M2 to share info with M3, so M1 and M4 do not share
> anything directly.
>
> Is this possible?
>
>
There is a federation project for sharing a subset of data between separate
clusters. Perhaps that is what you're looking for.
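One note on the topology sketched above: within a single cluster, Akka assumes
full connectivity between members, so a partial mesh where M1 and M4 never
talk isn't something the clustered datastore supports. What you can control is
where each shard's data is replicated, via
configuration/initial/module-shards.conf. A sketch, assuming you want a shard
kept only on M2 and M3 (the module name and replica choice here are
hypothetical):

```hocon
# configuration/initial/module-shards.conf (fragment) -- hypothetical
# placement: this shard is replicated only on member-2 and member-3.
module-shards = [
    {
        name = "inventory"
        shards = [
            {
                name = "inventory"
                replicas = [
                    "member-2",
                    "member-3"
                ]
            }
        ]
    }
]
```

This controls data placement, not which nodes communicate — all four members
would still gossip with each other.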


> Thanks in advance,
> --
> *Yrineu Rodrigues*
> Software Engineer
>
> *SERRO*
> www.serro.com
>
_______________________________________________
Discuss mailing list
[email protected]
https://lists.opendaylight.org/mailman/listinfo/discuss
