Hi,

On Monday, March 25, 2019 at 7:25:05 PM UTC+5:30, Federico Capoano wrote:
>
> Yes, but we also want to allow to scale horizontally more easily, so the 
> traffic can be dispatched to more instances of the same containers if 
> needed (imagine a kubernetes cluster spread on multiple nodes).
>

Great. I made a Docker Swarm stack and tested horizontal scaling of my 
docker-compose prototype, so now I understand the requirement better.
(The docker-compose stack mentioned here is just an OpenWISP container and a 
Redis container.)
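
For reference, a minimal sketch of what that swarm stack could look like 
(the image name and replica counts are placeholders, not the actual 
prototype):

```yaml
# Hypothetical docker-compose sketch for a swarm stack: one scalable
# OpenWISP web service plus a single Redis instance.
version: "3.7"

services:
  openwisp:
    image: openwisp/openwisp   # placeholder image name
    ports:
      - "8000:8000"
    deploy:
      replicas: 3              # horizontal scaling: identical containers
  redis:
    image: redis:alpine
    deploy:
      replicas: 1              # broker/cache, a single instance here
```

Deployed with `docker stack deploy -c docker-compose.yml openwisp` and scaled 
with e.g. `docker service scale openwisp_openwisp=5`.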
 

> The Admin interface is used only by administrators, not very often, so we 
> likely won't need more instances of the same containers to scale.
> The load on the other services increases with the size of the network 
> (number of devices or also number of users in the case of the radius 
> module), so we will need to scale up with those.  
>

Thanks for clearing that up.
  

> each module has its own URLs and APIs that are not admin related which are 
> used to provide configurations, update the network topology, communicate 
> with freeradius and so on, each one of these group of features should run 
> in isolation.
>
> A more realistic scenario is a user who wants to use only admin + radius 
> module, or admin + controller module.
> But we should not add restrictions on which containers the users want to 
> use because we should consider this to be a base on which users and 
> developers can build a solution tailored to their needs with custom modules.
>
> django-channels is the framework with which you build the websocket server, 
> but the actual logic that allows you to do anything with the websocket is 
> in OpenWISP.
> At the moment only OpenWISP Controller has some websocket logic (inherited 
> from django-loci) but in the future we will have more.
>

> The duty of this container is to serve the websocket server and process 
> data coming from websocket clients.
>
> Imagine the same installation we have today, but instead of having it on 
> a single VM, we have it spread on different containers, each container 
> dedicated to a single service: the containers which receive more traffic 
> can be scaled up, either vertically with more resources (RAM, CPU) or 
> horizontally with more containers if possible (if a load balancer in front 
> of the containers is available to distribute traffic, we can do this with 
> nginx for all the containers which serve HTTP or WebSocket requests, with 
> the celery containers we don't need a load balancer because they read from 
> the broker service which in our case is redis by default).
>

That cleared up some crucial points for me! :)
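
For the nginx load-balancing of HTTP and WebSocket traffic described above, I 
imagine a minimal sketch like the following (upstream names and ports are my 
assumptions, not taken from an actual deployment):

```nginx
# Hypothetical nginx config: round-robin over identical web containers.
upstream openwisp_web {
    server web1:8000;
    server web2:8000;
}

server {
    listen 80;

    # WebSocket endpoints need the Upgrade/Connection headers forwarded
    location /ws/ {
        proxy_pass http://openwisp_web;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_pass http://openwisp_web;
        proxy_set_header Host $host;
    }
}
```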
 

> I hope it is clearer now!
>

Yes, thank you. I am currently reading more about these topics. :)
I am building a prototype with some of the features to deepen my 
understanding, working on them in whatever order seems most important.
Please let me know if there is a specific part of the prototype you'd like 
to see before the deadline, so that I can focus on it first.
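
For the websocket part, to check my understanding of where the consumer logic 
lives (in OpenWISP code, with channels only as the framework), I'm sketching 
something along these lines. The class, route, and payload names are made up, 
not the actual django-loci code:

```python
# Hypothetical django-channels consumer sketch; the real websocket logic
# lives in OpenWISP Controller (inherited from django-loci), this only
# illustrates where such code would sit.
from channels.generic.websocket import JsonWebsocketConsumer


class DeviceLocationConsumer(JsonWebsocketConsumer):
    def connect(self):
        # accept the websocket handshake
        self.accept()

    def receive_json(self, content):
        # process data coming from the websocket client
        self.send_json({"received": content})
```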

Ajay 

-- 
You received this message because you are subscribed to the Google Groups 
"OpenWISP" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
