Errata, I meant: *I don't think we are breaking down OpenWISP into
microservices.*

On Sun, Apr 7, 2019 at 5:12 PM Federico Capoano <[email protected]>
wrote:

> *Premise: I don't think we are breaking down OpenWISP into
> microservices.*
>
> My experience with microservices has not been a happy one: they make
> things a lot more complicated.
> OpenWISP 2 is already modular, which means we don't need to force people
> to write modular code by separating features into different services; that
> keeps things simple and eases maintenance.
>
> The modular nature of OpenWISP allows us to break a monolithic OpenWISP
> instance into different containers running different parts of it, but I
> would not call that a microservice architecture.
> I want to break the monolithic OpenWISP instance down into services
> because it will make it easier to assign resources to the services that
> need to be scaled up.
>
> Let's not mention microservices anymore in this project because I believe
> it will fuel confusion.
>
> On Sat, Apr 6, 2019 at 8:30 AM Ajay Tripathi <[email protected]> wrote:
>
>> Hi,
>>
>> *Update:* I've added documentation for testing and building the
>> containers.
>>
>> *Questions/Discussion:*
>> 1. About the database:
>> It looks like openwisp-radius and openwisp-controller can't migrate into
>> the same database.
>> Unlike the network-topology module, which migrated into the same database
>> as openwisp-controller, I had to use a different database name for the
>> radius module.
>> Is this expected behaviour, or do we need to make the openwisp-controller
>> database compatible with the openwisp-radius database in the final
>> version?
>>
>
> It's not the expected behaviour and it should not happen.
>
> I have some openwisp instances running fine with both openwisp-radius and
> openwisp-controller (development version of all openwisp modules).
>
> What issue are you having? Can you paste the error you're getting?
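>
> For reference, when both modules run in one Django project they would
> normally share a single `DATABASES` entry, so both sets of migrations land
> in the same schema. A minimal sketch of such a settings.py fragment (all
> names, credentials, and the `db` host are placeholders, not the project's
> actual configuration):
>
> ```python
> # settings.py fragment: openwisp-controller and openwisp-radius apps
> # both listed in INSTALLED_APPS share this single database connection,
> # so "manage.py migrate" applies every module's migrations to one schema.
> DATABASES = {
>     "default": {
>         "ENGINE": "django.db.backends.postgresql",
>         "NAME": "openwisp",       # placeholder database name
>         "USER": "openwisp",       # placeholder credentials
>         "PASSWORD": "changeme",
>         "HOST": "db",             # e.g. the database container's hostname
>         "PORT": "5432",
>     }
> }
> ```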
>
>
>> 2. Migration problem with docker-compose:
>> When we run docker-compose for the first time, all the containers start
>> migrating in parallel, which causes some of them to fail. If we re-run
>> the containers, everything works fine because the remaining containers
>> get a chance to migrate.
>> I think we need some kind of flag for the containers to coordinate
>> migrations on the first run. Please advise.
>>
>
> We should find a way to start the general admin dashboard container first:
> it should have all the django apps in INSTALLED_APPS, so all migrations
> are run there.
> The other services can then be started in parallel afterwards.
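>
> One possible way to express that ordering, sketched below as a
> docker-compose fragment. The service and image names are placeholders, the
> dashboard image is assumed to run `manage.py migrate` in its entrypoint,
> and the long `depends_on` syntax with `condition` requires a Compose
> implementation that supports it:
>
> ```yaml
> services:
>   postgres:
>     image: postgres:alpine
>     healthcheck:
>       test: ["CMD-SHELL", "pg_isready -U postgres"]
>       interval: 5s
>       retries: 10
>
>   # hypothetical image that lists all django apps in INSTALLED_APPS
>   # and runs "manage.py migrate" before starting, so all migrations
>   # happen here, once
>   dashboard:
>     image: openwisp-dashboard
>     depends_on:
>       postgres:
>         condition: service_healthy
>
>   # the remaining services only start after the dashboard has started,
>   # avoiding the parallel-migration race
>   controller:
>     image: openwisp-controller
>     depends_on:
>       dashboard:
>         condition: service_started
> ```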
>
>
>> 3. Terraform creation order:
>> When the database server starts, it takes a while before it accepts
>> connections on port 5432.
>> Since terraform starts all the pods in parallel, the pods that try to
>> connect before postgresql is ready fail. Re-running the pods solves the
>> problem, but I could not find a way to tell terraform that the database
>> server is accepting connections and that dependent pods can be created.
>> Please advise how this is usually done. :)
>>
>
> The suggestion given by 2stacks seems good to me: if we need some
> dependencies to be up, we should use all the tools at our disposal to wait
> until those services become ready (with a configurable timeout).
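>
> As one illustration of that pattern, a small wait loop can run in each
> pod's entrypoint (or an init container) before the main process starts,
> retrying a TCP connection to the database with a configurable timeout. A
> sketch (the `db`/`5432` values in the usage comment are illustrative, not
> the project's actual configuration):
>
> ```python
> import socket
> import time
>
> def wait_for_port(host, port, timeout=60):
>     """Retry a TCP connection to host:port once per second until it
>     succeeds or `timeout` seconds have elapsed; returns True on success."""
>     deadline = time.monotonic() + timeout
>     while True:
>         try:
>             # a successful connect means the server accepts connections
>             with socket.create_connection((host, port), timeout=2):
>                 return True
>         except OSError:
>             if time.monotonic() >= deadline:
>                 return False
>             time.sleep(1)
>
> # example entrypoint usage (host and port are illustrative):
> #   if not wait_for_port("db", 5432, timeout=60):
> #       raise SystemExit("timed out waiting for the database")
> ```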
>
> I'll reply to other parts of the thread in my next email.
>

-- 
You received this message because you are subscribed to the Google Groups 
"OpenWISP" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.