Hi all, Starting a new thread here so it's a bit more manageable to open up and reply to.
@Aleksandar Vidakovic <[email protected]> can you give us an update and more guidance on next steps in case someone is able to assist? We're getting close!

Ed

On Fri, Dec 14, 2018 at 7:33 AM Ed Cable <[email protected]> wrote:

Hi gentlemen,

I just wanted to follow up on these two items once again.

Thanks,

Ed

On Tue, Dec 4, 2018 at 2:53 PM Ed Cable <[email protected]> wrote:

@Avik Ganguly <[email protected]> were you able to take up any of the tasks Myrle suggested?

@Aleksandar Vidakovic <[email protected]> Can you provide an update on the status of deploying Fineract CN on the Apache VMs?

Thanks,

Ed

On Thu, Oct 25, 2018 at 3:39 AM Myrle Krantz <[email protected]> wrote:

Hi Avik,

You've asked several questions here, which is why it took me a while to get to answering.

Before I go on, allow me to clarify my use of certain terms in the context of Fineract CN:

* unit test -- a test for just one or two classes. Doesn't require spinning up the service. Doesn't require resources like a DB or user interaction. Is fast enough to be made part of the build. These tests are included in the same module and package as the class they are intended to test. (This is a definition which is generally agreed upon in the industry.)
* component test -- a test for a service. Requires spinning up just that service. Requires the use of an embedded DB. Individual tests might be fast, but the test suite is rather expensive to run because startup times are impacted by running two DBs and spinning up the service. These tests are included in the same repository as the service they are intended to test. External services are mocked. (The industry has no widely agreed upon definition for component tests; this is my definition.)
* integration test -- a test for multiple services. These tests have their own repositories.

demo-server serves three purposes:

* It illustrates the steps needed to start and provision the services.
* It serves as an integration test for one possible provisioning workflow.
* It makes it possible to start up the services locally so that a UI developer can program against the latest changes.

demo-server is not intended to be used in production, for multiple reasons:

* It's slow. It starts everything, even stuff you may not need, and it provisions everything. It then waits for everything to register itself with Eureka. It lowers the Eureka update intervals to speed up startup, but this results in too many cycles being used for Eureka updates.
* It's not secure. For example: identity.token.refresh.secureCookie is set to false to make HTTP transport possible.

If you're creating a new service, I don't expect that you'll need to run demo-server very often. Your emphasis should be on your component tests. These can run in just a minute or two, and you don't have to worry (much) about JWT tokens, headers, and the like inside of these tests.

If demo-server doesn't serve your purposes, I see a few possibilities:

* Create a private fork of demo-server, then create the setup you prefer.
* Submit a PR with another "test" in demo-server which starts the services you need. I'd be happy to help you with this.
* Submit a PR which makes the starting of various services in demo-server optional. I'd be happy to help you with this as well.

One idea for speeding up local deployments (which is intended only to enable developers, not to be a production deployment method) is to start all the services in one process. This would reduce service startup times by folding them all into one. It would also replace inter-process communication during provisioning with in-process communication. I expect that we could get startup under 5 minutes. It would also make it possible to do component testing on services which are dependent on other services without extensive, error-prone mocking. Writing tests for portfolio probably cost me more time than writing the code, and debugging mocks isn't fun. But those benefits would only apply to development environments. That's what FINCN-25 is about.

Rhythm isn't resource-hungry. Not sure where that came from...

I don't believe Cassandra is the problem here, but I haven't examined that closely. Please prove me wrong. I'd love to see a fix like that, if it really is that easy. :o)

Best Regards,
Myrle
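To make the component-test idea above concrete, here is a minimal sketch, assuming JUnit 4, Spring Boot test support, and an embedded H2 database on the classpath. The nested application class and the property values are illustrative stand-ins, not Fineract CN's actual test harness:

    // A minimal sketch of a component test in the sense described above:
    // boot just one service with an embedded DB and no Eureka registration.
    // "DemoServiceApplication" is a placeholder for a real service class.
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.core.env.Environment;
    import org.springframework.test.context.junit4.SpringRunner;

    import static org.junit.Assert.assertEquals;

    @RunWith(SpringRunner.class)
    @SpringBootTest(
        classes = ComponentTestSketch.DemoServiceApplication.class,
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
        properties = {
            // embedded DB instead of the full MariaDB/Cassandra stack
            "spring.datasource.url=jdbc:h2:mem:component-test",
            // don't try to register with a Eureka server during the test
            "eureka.client.enabled=false"
        })
    public class ComponentTestSketch {

      @SpringBootApplication
      static class DemoServiceApplication { }

      @Autowired
      private Environment environment;

      @Test
      public void bootsWithEmbeddedDatabase() {
        assertEquals("jdbc:h2:mem:component-test",
            environment.getProperty("spring.datasource.url"));
      }
    }

Such a test spins up only the one service context, which is why a component-test suite can finish in a minute or two while the full demo-server run takes far longer.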
On Wed, Oct 10, 2018 at 6:24 PM Avik Ganguly <[email protected]> wrote:

Hi guys,

I am assuming that the suggested form of development for Fineract CN is to write unit tests locally for the code you just wrote, then push the code to a remote cluster or a massive VM and wait 40+ minutes to run the API / integration tests there. A lot of the confusion regarding the stack traces in identity, rhythm, timeouts and auto-shutdown might or might not have anything to do with the resource crunch on my local machine (i7, 8GB, SSD).

Maybe a FAQ section in "How to build Fineract CN" to help devs with local setup would go a long way toward bringing in contributions to the core modules. For example:

1. Change the demo server orchestration logic from running the Identity, Organization, Customer, Accounting, Portfolio, Deposit, Teller, Reporting, Cheque, Payroll, Group and Notification services to running only Identity, Organization, Customer and a working set of your choice (Portfolio and Accounting, OR Deposit and Accounting, OR Group and Portfolio).

2. Leave Rhythm out of the orchestration logic for now. A JIRA ticket tracking the ability of Rhythm to run with a low resource footprint would probably help.

@Myrle, can you elaborate on FINCN-25? Is the motivation of this ticket only to reduce the resource footprint, or also to reduce the startup time from 40 minutes to a reasonable couple of minutes? Do you have any tips for debugging whether the slow startup is caused by Eureka registration, by provisioning, or by sequential startup?

Maybe we should also follow up on the Cassandra mailing list about configuring Cassandra to run within 8GB of RAM, and include that config as part of the FAQ.

Regards,
Avik.
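For reference, the single-process idea behind FINCN-25 as Myrle describes it above could look roughly like the sketch below: several Spring Boot contexts sharing one JVM, each on its own port. The nested service classes are placeholders, a web starter on the classpath is assumed, and spring.jmx.enabled=false is an assumption to avoid MBean name clashes between the contexts:

    // Sketch of the FINCN-25 idea: fold several services into one JVM so
    // class loading and JIT warm-up are paid once, and provisioning traffic
    // can stay in-process. The nested "services" are placeholders only.
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.boot.builder.SpringApplicationBuilder;

    public class SingleProcessDemo {

      @SpringBootApplication
      static class IdentityLikeService { }

      @SpringBootApplication
      static class OfficeLikeService { }

      public static void main(String[] args) {
        // Each service keeps its own application context and port,
        // but they share one JVM instead of a dozen separate ones.
        new SpringApplicationBuilder(IdentityLikeService.class)
            .properties("server.port=2021", "spring.jmx.enabled=false")
            .run(args);
        new SpringApplicationBuilder(OfficeLikeService.class)
            .properties("server.port=2022", "spring.jmx.enabled=false")
            .run(args);
      }
    }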
On Wed, Sep 19, 2018 at 2:23 AM Courage Angeh <[email protected]> wrote:

Thanks for the update, Ed.

If you need an extra hand please let me know, Aleks.

On Tue, Sep 18, 2018, 4:27 PM Ed Cable <[email protected]> wrote:

Hi all,

Just to bring this thread back up to the top of everyone's inbox. Aleks estimates he has about a day left to get the demo server up and running and is hoping to tackle it by tomorrow. Here are the two approaches he suggested now that we have the second VM allocated:

- continue with the JUnit-based startup sequence of the demo module; the one thing that I would need to change is to split up the startup sequence and recompile
- create a set of Docker Compose files to do the startup on the 2 VM instances (maybe I could borrow some stuff from Courage)

"I would prefer the latter, because you have more fine-grained control and it's easier to move service instances around... and you don't have to recompile anything."

Ed

On Tue, Jul 17, 2018 at 4:09 AM Myrle Krantz <[email protected]> wrote:

Hey Aleks,

I don't see anything in that log that would explain the behavior you're observing. Markus did most of the work on deposits. Markus? Do you have any ideas?

Best Regards,
Myrle

On Fri 13. Jul 2018 at 07:06 Aleksandar Vidakovic <[email protected]> wrote:

@Myrle: with your proposed changes things look a bit better. I've disabled Rhythm as you suggested and get past all microservice startups with a lot fewer exceptions (one is coming repeatedly from the Netty native transport, because epoll is not available).

The one exception (it actually happened on 2 occasions) that is still there:

[snip]

13:33:45.080 [main] INFO o.a.f.c.d.s.DepositAccountManagementApplication - Started DepositAccountManagementApplication in 55.187 seconds (JVM running for 62.772)
13:34:45.730 [AsyncResolver-bootstrap-executor-0] INFO c.n.d.s.r.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration
Resolving artifact org.apache.fineract.cn.teller:service-boot:jar:0.1.0-BUILD-SNAPSHOT
Resolving metadata org.apache.fineract.cn.teller:service-boot:0.1.0-BUILD-SNAPSHOT/maven-metadata.xml from /root/.m2/repository (enhanced)
Resolved metadata org.apache.fineract.cn.teller:service-boot:0.1.0-BUILD-SNAPSHOT/maven-metadata.xml from /root/.m2/repository (enhanced)
Resolved artifact org.apache.fineract.cn.teller:service-boot:jar:0.1.0-BUILD-SNAPSHOT from /root/.m2/repository (enhanced)
13:34:56.800 [DiscoveryClient-0] ERROR c.n.discovery.TimedSupervisorTask - task supervisor timed out
java.util.concurrent.TimeoutException: null
    at java.util.concurrent.FutureTask.get(FutureTask.java:205)
    at com.netflix.discovery.TimedSupervisorTask.run(TimedSupervisorTask.java:64)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
[the identical "task supervisor timed out" TimeoutException repeats on DiscoveryClient-1 at 13:34:56.801 and DiscoveryClient-2 at 13:34:56.916]
13:35:08.586 [main] INFO o.s.c.a.AnnotationConfigApplicationContext - Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@7dc5e7b4: startup date [Fri Jul 13 13:35:08 UTC 2018]; root of context hierarchy
13:35:10.182 [main] INFO o.s.b.f.a.AutowiredAnnotationBeanPostProcessor - JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
13:35:10.371 [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'configurationPropertiesRebinderAutoConfiguration' of type [class org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$ffaf96f9] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
13:35:10.435 [background-preinit] INFO o.h.validator.internal.util.Version - HV000001: Hibernate Validator 5.3.0.Final

[/snip]

It looks like the whole startup procedure goes through, and I can see some kind of "summary" that lists all REST endpoints. But it stays in that state for only a minute or so, and then a cascade of shutdowns is triggered again. The only thing that stays alive is Eureka, which complains about the inaccessible services.

Any idea what else we could do other than just ordering an additional machine?

Cheers,

Aleks

On Mon, Jun 18, 2018 at 5:42 PM Myrle Krantz <[email protected]> wrote:

Hey Aleks,

It looks like that exception is coming from Rhythm. I suspect it's not the cause of your difficulties, since you say your other services are going down too. You can run most of the Fineract server without Rhythm. You could edit the demo-server script to leave it out for the purposes of testing. The consequence will be a lack of interest calculations for disbursed loans, but for finding out what the real problem is, it'd be good to get this one out of the way.

Best Regards,
Myrle

(FYI: I've removed the non-list subscribers from the to. Most of them are already subscribed, and those who aren't, don't want to be.)
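Editing the demo-server script to leave Rhythm out, as suggested above, could be generalized into opt-in flags along these lines. This is only a sketch: the launchService helper is hypothetical and stands in for however the demo server's JUnit-based startup sequence actually boots a service:

    // Sketch: make heavyweight services opt-in via system properties, so a
    // resource-constrained machine starts only a working set. The
    // launchService helper is a placeholder for the real startup call.
    public class DemoLauncherSketch {
      public static void main(String[] args) {
        // core working set, always started
        launchService("identity");
        launchService("office");
        launchService("customer");
        launchService("accounting");
        launchService("portfolio");

        // everything else only on request, e.g. -Ddemo.start.rhythm=true
        if (Boolean.getBoolean("demo.start.rhythm")) {
          launchService("rhythm");
        }
        if (Boolean.getBoolean("demo.start.teller")) {
          launchService("teller");
        }
      }

      private static void launchService(String name) {
        // placeholder for the demo server's actual boot logic
        System.out.println("starting " + name);
      }
    }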
On Wed, Jun 13, 2018 at 1:40 AM Aleksandar Vidakovic <[email protected]> wrote:

I managed to further improve the startup procedure (i.e. fewer exceptions than before)... there are still some timeouts, but not as many as before.

The demo server still dies after this exception:

23:27:41.896 [AsyncResolver-bootstrap-executor-0] INFO c.n.d.s.r.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration
23:27:44.564 [AsyncResolver-bootstrap-executor-0] INFO c.n.d.s.r.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration
23:28:43.669 [AsyncResolver-bootstrap-executor-0] INFO c.n.d.s.r.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration
23:28:45.990 [pool-9-thread-1] ERROR o.s.s.s.TaskUtils$LoggingErrorHandler - Unexpected error occurred in scheduled task.
org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is javax.persistence.PersistenceException: java.sql.SQLNonTransientConnectionException: Could not read resultset: unexpected end of stream, read 0 bytes from 4
Query is : set autocommit=0
    at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:431)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:373)
    at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:430)
    at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:276)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
    at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
    at org.apache.fineract.cn.rhythm.service.internal.service.Drummer$$EnhancerBySpringCGLIB$$70dfc8e3.checkForBeatsNeeded(<generated>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:65)
    at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: javax.persistence.PersistenceException: java.sql.SQLNonTransientConnectionException: Could not read resultset: unexpected end of stream, read 0 bytes from 4
Query is : set autocommit=0
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1692)
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1602)
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:1700)
    at org.hibernate.jpa.internal.TransactionImpl.begin(TransactionImpl.java:48)
    at org.springframework.orm.jpa.vendor.HibernateJpaDialect.beginTransaction(HibernateJpaDialect.java:189)
    at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:380)
    ... 20 common frames omitted
Caused by: java.sql.SQLNonTransientConnectionException: Could not read resultset: unexpected end of stream, read 0 bytes from 4
Query is : set autocommit=0
    at org.mariadb.jdbc.internal.util.ExceptionMapper.get(ExceptionMapper.java:123)
    at org.mariadb.jdbc.internal.util.ExceptionMapper.throwException(ExceptionMapper.java:69)
    at org.mariadb.jdbc.MariaDbStatement.executeQueryEpilog(MariaDbStatement.java:242)
    at org.mariadb.jdbc.MariaDbStatement.executeInternal(MariaDbStatement.java:270)
    at org.mariadb.jdbc.MariaDbStatement.executeUpdate(MariaDbStatement.java:399)
    at org.mariadb.jdbc.MariaDbConnection.setAutoCommit(MariaDbConnection.java:584)
    at com.jolbox.bonecp.ConnectionHandle.setAutoCommit(ConnectionHandle.java:1292)
    at org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor.begin(AbstractLogicalConnectionImplementor.java:67)
    at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.begin(LogicalConnectionManagedImpl.java:238)
    at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.begin(JdbcResourceLocalTransactionCoordinatorImpl.java:214)
    at org.hibernate.engine.transaction.internal.TransactionImpl.begin(TransactionImpl.java:52)
    at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1512)
    at org.hibernate.jpa.internal.TransactionImpl.begin(TransactionImpl.java:45)
    ... 22 common frames omitted
Caused by: org.mariadb.jdbc.internal.util.dao.QueryException: Could not read resultset: unexpected end of stream, read 0 bytes from 4
Query is : set autocommit=0
    at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.getResult(AbstractQueryProtocol.java:909)
    at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.executeQuery(AbstractQueryProtocol.java:604)
    at org.mariadb.jdbc.MariaDbStatement.executeInternal(MariaDbStatement.java:261)
    ... 31 common frames omitted
Caused by: java.io.EOFException: unexpected end of stream, read 0 bytes from 4
    at org.mariadb.jdbc.internal.packet.read.ReadPacketFetcher.getReusableBuffer(ReadPacketFetcher.java:168)
    at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.getResult(AbstractQueryProtocol.java:900)
    ... 33 common frames omitted
23:29:05.500 [qtp6422064-20] INFO o.a.f.c.l.c.ServiceExceptionFilter - Responding with a service error ServiceError{code=409, message='Application identity-v1 already exists!'}
23:29:21.282 [Thread-15] INFO o.s.b.c.e.AnnotationConfigEmbeddedWebApplicationContext - Closing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@11438d26: startup date [Tue Jun 12 23:02:02 UTC 2018]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@7dc5e7b4
23:29:23.717 [Thread-15] INFO o.s.c.s.DefaultLifecycleProcessor - Stopping beans in phase 2147483647
23:29:24.267 [Thread-15] INFO o.s.c.s.DefaultLifecycleProcessor - Stopping beans in phase 0
23:29:24.557 [Thread-15] INFO o.s.b.a.e.jmx.EndpointMBeanExporter - Unregistering JMX-exposed beans on shutdown
23:29:24.579 [Thread-15] INFO o.s.b.a.e.jmx.EndpointMBeanExporter - Unregistering JMX-exposed beans

The only thing left to try: start the databases (i.e. Cassandra) on a separate machine (e.g. on Digital Ocean) and see how it works then.

... FYI.

Cheers,

Aleks
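The trace above shows BoneCP handing Rhythm's scheduled task a connection whose socket had already died ("unexpected end of stream" on "set autocommit=0"). If the pool is configurable at that point, one mitigation is idle-connection validation. The wiring below is an illustrative sketch, not Fineract CN's actual datasource configuration; URL and credentials are placeholders:

    // Sketch: configure BoneCP (the pool in the trace above) to test idle
    // connections so a dead socket is evicted before a scheduled task tries
    // "set autocommit=0" on it. Illustrative wiring only.
    import com.jolbox.bonecp.BoneCPDataSource;

    public class PoolConfigSketch {
      public static BoneCPDataSource mariaDbPool() {
        BoneCPDataSource ds = new BoneCPDataSource();
        ds.setDriverClass("org.mariadb.jdbc.Driver");
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/seshat"); // placeholder
        ds.setUsername("demo-user");                          // placeholder
        ds.setPassword("demo-password");                      // placeholder
        // validate connections that sat idle, instead of trusting them
        ds.setConnectionTestStatement("SELECT 1");
        ds.setIdleConnectionTestPeriodInMinutes(1);
        return ds;
      }
    }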
On Tue, Jun 12, 2018 at 11:16 PM Aleksandar Vidakovic <[email protected]> wrote:

@Rajan: it's not running, because the demo server dies... see the logs from my previous message...

We just have to figure that one out and we should be good to go.

On Tue, Jun 12, 2018, 9:11 PM Rajan Maurya <[email protected]> wrote:

Thanks, Aleks,

I can see the web app is running, but I can't log in with the credentials that you shared earlier:

Tenant: playground
Username: operator
Password: init1@l

Thanks

On Wed, Jun 13, 2018 at 12:32 AM Aleksandar Vidakovic <[email protected]> wrote:

Hi all,

... sweeeet! I got a considerable step further.

@Markus: thanks for the conversation... gave me an idea how to fix (most of) it!

The problem was - as suspected - the memory (or lack of it). The VM instance at Apache has a net 30G available. 16G are needed for Cassandra, and in my previous tests I tried to run the demo server with up to 16G. The problem is that the demo server needs considerably more memory than 16G; according to my observation it's around 23-24G. Even if you limit the demo server's memory allocation with "-Xmx" to 12G (for example), it just continues to consume more memory.

My solution: add a swap file. Not ideal in terms of performance, but it gets the whole thing going... at least mostly (please read on).

Current status:

- Cassandra (and all other Docker containers) are still running; Cassandra died in my previous attempts at the latest when the Teller application started, but with the swap file it's still running
- all Spring Boot apps are starting now
- I get some spurious timeouts once in a while (I think when either the Eureka server or the config server is contacted)
- there are also sometimes exceptions concerning MySQL connections, but they don't seem to matter overall
- I could issue an authentication request with Postman to http://fineract-vm.apache.org:2021/identity/v1, just to test if anything is responding
- PROBLEM: the demo server (the Spring Boot apps) eventually dies after a couple of minutes; I don't think memory is the problem at this point; I think it's more of a timeout problem concerning the communication with Eureka and/or the config server (maybe because the system is a bit slow)

... and here the memory footprint ("free -h") when everything is running, just FYI:

              total        used        free      shared  buff/cache   available
Mem:            31G         31G        226M        8.4M        169M         47M
Swap:           15G        6.8G        9.2G

... after the Payroll application is started (approx.) I see these exceptions appearing in the logs (a whole series of them):

18:19:36.465 [AsyncResolver-bootstrap-executor-0] INFO c.n.d.s.r.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration
Resolving artifact org.apache.fineract.cn.group:service-boot:jar:0.1.0-BUILD-SNAPSHOT
Resolving metadata org.apache.fineract.cn.group:service-boot:0.1.0-BUILD-SNAPSHOT/maven-metadata.xml from /root/.m2/repository (enhanced)
Resolved metadata org.apache.fineract.cn.group:service-boot:0.1.0-BUILD-SNAPSHOT/maven-metadata.xml from /root/.m2/repository (enhanced)
Resolved artifact org.apache.fineract.cn.group:service-boot:jar:0.1.0-BUILD-SNAPSHOT from /root/.m2/repository (enhanced)
18:19:44.689 [DiscoveryClient-0] ERROR c.n.discovery.TimedSupervisorTask - task supervisor timed out
java.util.concurrent.TimeoutException: null
    at java.util.concurrent.FutureTask.get(FutureTask.java:205)
    at com.netflix.discovery.TimedSupervisorTask.run(TimedSupervisorTask.java:64)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

... but it still seems to have no problem starting the next app (Group), and things look good for a while...

... until suddenly app instances begin to shut down... and I'm not sure why this happens:

18:27:19.603 [AsyncResolver-bootstrap-executor-0] INFO c.n.d.s.r.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration
18:27:39.974 [qtp1631119258-22] INFO c.datastax.driver.core.ClockFactory - Using native clock to generate timestamps.
18:27:39.975 [qtp1631119258-22] WARN c.datastax.driver.core.CodecRegistry - Ignoring codec LocalDateTimeCodec [timestamp <-> java.time.LocalDateTime] because it collides with previously registered codec LocalDateTimeCodec [timestamp <-> java.time.LocalDateTime]
18:27:41.309 [qtp1631119258-22] INFO c.d.d.c.p.DCAwareRoundRobinPolicy - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
18:27:41.319 [qtp1631119258-22] INFO com.datastax.driver.core.Cluster - New Cassandra host /127.0.0.1:9042 added
18:27:41.661 [AsyncResolver-bootstrap-executor-0] INFO c.n.d.s.r.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration
18:27:51.110 [Thread-15] INFO o.s.b.c.e.AnnotationConfigEmbeddedWebApplicationContext - Closing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@11438d26: startup date [Tue Jun 12 17:55:35 UTC 2018]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@7dc5e7b4
18:27:51.341 [AsyncResolver-bootstrap-executor-0] INFO c.n.d.s.r.aws.ConfigClusterResolver - Resolving eureka endpoints via configuration
18:27:53.373 [Thread-15] INFO o.s.c.s.DefaultLifecycleProcessor - Stopping beans in phase 2147483647
18:27:54.331 [Thread-15] INFO o.s.c.s.DefaultLifecycleProcessor - Stopping beans in phase 0
18:27:54.736 [Thread-15] INFO o.s.b.a.e.jmx.EndpointMBeanExporter - Unregistering JMX-exposed beans on shutdown
18:27:54.740 [Thread-15] INFO o.s.b.a.e.jmx.EndpointMBeanExporter - Unregistering JMX-exposed beans
18:27:54.798 [Thread-15] INFO o.s.j.e.a.AnnotationMBeanExporter - Unregistering JMX-exposed beans on shutdown
18:27:54.800 [Thread-15] INFO o.s.j.e.a.AnnotationMBeanExporter - Unregistering JMX-exposed beans
18:27:54.937 [Thread-15] INFO o.s.o.j.LocalContainerEntityManagerFactoryBean - Closing JPA EntityManagerFactory for persistence unit 'metaPU'
18:27:58.376 [async-processor-1] INFO o.f.c.internal.util.VersionPrinter - Flyway 3.2.1 by Boxfuse
18:27:58.677 [async-processor-1] INFO o.f.c.i.dbsupport.DbSupportFactory - Database: jdbc:mysql://localhost:3306/seshat (MySQL 10.3)
18:28:00.620 [async-processor-1] INFO o.f.core.internal.command.DbValidate - Validated 2 migrations (execution time 00:01.169s)
18:28:00.901 [Thread-15] INFO o.e.jetty.server.AbstractConnector - Stopped ServerConnector@e162a35{HTTP/1.1,[http/1.1]}{0.0.0.0:2020}
18:28:01.268 [Thread-15] INFO o.e.j.server.handler.ContextHandler - Stopped o.s.b.c.e.j.JettyEmbeddedWebAppContext@10850d17 {/provisioner/v1,file:///tmp/jetty-docbase.5486166363042759626.2020/,UNAVAILABLE}

... the list is actually a lot longer... in the rest of the log you can see that one by one all the apps are shutting down.

The last (error) entries I see are these:

18:29:31.278 [DefaultMessageListenerContainer-3] ERROR o.s.j.l.DefaultMessageListenerContainer - Could not refresh JMS Connection for destination 'cheques-v1' - retrying using FixedBackOff{interval=5000, currentAttempts=0, maxAttempts=unlimited}. Cause: Error while attempting to retrieve a connection from the pool; nested exception is javax.jms.JMSException: Could not connect to broker URL: tcp://localhost:61616. Reason: java.net.ConnectException: Connection refused (Connection refused)

... I guess an embedded ActiveMQ was also shut down in the process of the previous errors.

So far from here... any input on the above is highly appreciated. Overall I think we are close.

Cheers,

Aleks
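If the suspicion above about Eureka timing on a slow, swapping machine is right, the standard Spring Cloud Netflix client properties can be loosened when launching each service. A hedged sketch follows; the nested application class is a placeholder and the particular values are untested assumptions, not a verified cure for the shutdown cascade:

    // Sketch: relax Eureka client timing for a slow machine, using standard
    // Spring Cloud Netflix properties. Values are illustrative.
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.boot.builder.SpringApplicationBuilder;

    public class PatientServiceLauncher {

      @SpringBootApplication
      static class SomeService { }  // placeholder for a real service class

      public static void main(String[] args) {
        new SpringApplicationBuilder(SomeService.class)
            .properties(
                // fetch the registry less often
                "eureka.client.registryFetchIntervalSeconds=60",
                // heartbeat at the default rate, but...
                "eureka.instance.leaseRenewalIntervalInSeconds=30",
                // ...only expire instances after a generous silence
                "eureka.instance.leaseExpirationDurationInSeconds=180",
                // tolerate slow responses from the Eureka server itself
                "eureka.client.eurekaServerConnectTimeoutSeconds=30",
                "eureka.client.eurekaServerReadTimeoutSeconds=30")
            .run(args);
      }
    }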
On Tue, Jun 12, 2018 at 6:16 PM Markus Geiss <[email protected]> wrote:

One thing we maybe can do, given we are part of the Apache family, is to ask the Cassandra community for some suggestions.

Cheers

Markus

On Tue, Jun 12, 2018 at 6:04 PM Markus Geiss <[email protected]> wrote:

Hey all,

we are not running Cassandra in a container; we have dedicated VMs to run our cluster.

Given this, we are not the best persons to help here, sorry. (;

Cheers

Markus

On Tue, Jun 12, 2018 at 10:35 AM Ed Cable <[email protected]> wrote:

Adding Patric directly to this thread too so he can give his input regarding the challenges with Cassandra.

Ed

On Tue, Jun 12, 2018 at 12:53 AM Aleksandar Vidakovic <[email protected]> wrote:

Hi Victor,

... this is the relevant part of the docker-compose.yml file:

  cassandra0:
    image: cassandra:3.11.1
    container_name: cassandra0
    ports:
      - 9042:9042
      - 9160:9160
      - 7199:7199
      - 8778:8778
    volumes:
      - ./cassandra:/etc/cassandra
    environment:
      - CASSANDRA_START_RPC=true
      - CASSANDRA_SEEDS=cassandra0
      - CASSANDRA_CLUSTER_NAME=fineract_cluster
    ulimits:
      memlock: -1
      nproc: 32768
      nofile: 100000

... and just to be complete... here's the Docker service configuration ("/lib/systemd/system/docker.service"):

  [Unit]
  Description=Docker Application Container Engine
  Documentation=https://docs.docker.com
  After=network-online.target docker.socket firewalld.service
  Wants=network-online.target
  Requires=docker.socket

  [Service]
  Type=notify
  # the default is not to use systemd for cgroups because the delegate issues
  # still exists and systemd currently does not support the cgroup feature set
  # required for containers run by docker
  ExecStart=/usr/bin/dockerd -H fd://
  ExecReload=/bin/kill -s HUP $MAINPID
  LimitNOFILE=1048576
  # Having non-zero Limit*s causes performance problems due to accounting
  # overhead in the kernel. We recommend using cgroups to do container-local
  # accounting.
  LimitNPROC=infinity
  LimitCORE=infinity
  LimitMEMLOCK=infinity
  # Uncomment TasksMax if your systemd version supports it.
  # Only systemd 226 and above support this version.
  TasksMax=infinity
  TimeoutStartSec=0
  # set delegate yes so that systemd does not reset the cgroups of docker
  # containers
  Delegate=yes
  # kill only the docker process, not all processes in the cgroup
  KillMode=process
  # restart the docker process if it exits prematurely
  Restart=on-failure
  StartLimitBurst=3
  StartLimitInterval=60s

  [Install]
  WantedBy=multi-user.target

... the one tweak I did there is to set LimitMEMLOCK to infinity... do you think the problems are file-handle related? Should I set LimitNOFILE to infinity as well?

Appreciate the help.
>>> > > > > > > > >> > > >>> >>> > > > > > > > >> > > >>> Cheers, >>> > > > > > > > >> > > >>> >>> > > > > > > > >> > > >>> Aleks >>> > > > > > > > >> > > >>> >>> > > > > > > > >> > > >>> On Tue, Jun 12, 2018 at 9:09 AM Victor Romero < >>> > > > > > > > >> > > >>> [email protected]> >>> > > > > > > > >> > > >>> wrote: >>> > > > > > > > >> > > >>> >>> > > > > > > > >> > > >>> > Hi Aleks, >>> > > > > > > > >> > > >>> > >>> > > > > > > > >> > > >>> > Can you share the ulimits flags that the >>> > cassandra's >>> > > > > > > container >>> > > > > > > > >> is >>> > > > > > > > >> > > using >>> > > > > > > > >> > > >>> > while it is running? >>> > > > > > > > >> > > >>> > >>> > > > > > > > >> > > >>> > The values are being set in the composer >>> file? Or >>> > in >>> > > > the >>> > > > > > > > >> upstart or >>> > > > > > > > >> > > >>> > systemd docker's deamon config files? >>> > > > > > > > >> > > >>> > >>> > > > > > > > >> > > >>> > >>> > > > > > > > >> > > >>> > >>> > > > > > > > >> > > >>> > Enviado desde TypeApp >>> > > > > > > > >> > > >>> > >>> > > > > > > > >> > > >>> > En jun. 11, 2018 10:02 AM, en 10:02 AM, >>> Aleksandar >>> > > > > > > Vidakovic < >>> > > > > > > > >> > > >>> > [email protected]> escribió: >>> > > > > > > > >> > > >>> > >... and I should have attached the Cassandra >>> log >>> > > > > dump... >>> > > > > > so >>> > > > > > > > >> here >>> > > > > > > > >> > it >>> > > > > > > > >> > > >>> > >is... >>> > > > > > > > >> > > >>> > > >>> > > > > > > > >> > > >>> > >On Mon, Jun 11, 2018 at 4:49 PM Aleksandar >>> > > Vidakovic >>> > > > < >>> > > > > > > > >> > > >>> > >[email protected]> wrote: >>> > > > > > > > >> > > >>> > > >>> > > > > > > > >> > > >>> > >> Hi all, >>> > > > > > > > >> > > >>> > >> >>> > > > > > > > >> > > >>> > >> ... as you might have noticed I did a >>> couple of >>> > > > > > restarts >>> > > > > > > > >> > today... >>> > > > > > > > >> > > >>> the >>> > > > > > > > >> > > >>> > >> problem I am facing now: I can't get >>> Cassandra >>> > to >>> > > > run >>> > > > > > in >>> > > > > > > a >>> > > > > > > > >> > stable >>> > > > > > > > >> > > >>> > >way. >>> > > > > > > > >> > > >>> > >> >>> > > > > > > > >> > > >>> > >> Things look quite OK for a while when >>> running >>> > the >>> > > > > demo >>> > > > > > > > >> server, >>> > > > > > > > >> > but >>> > > > > > > > >> > > >>> > >then >>> > > > > > > > >> > > >>> > >> suddenly Cassandra dies (sometimes it >>> starts >>> > > > > > misbehaving >>> > > > > > > > with >>> > > > > > > > >> > the >>> > > > > > > > >> > > >>> > >deposit >>> > > > > > > > >> > > >>> > >> microservice startup, sometimes with >>> portfolio >>> > or >>> > > > > > > teller). >>> > > > > > > > I >>> > > > > > > > >> > tried >>> > > > > > > > >> > > >>> to >>> > > > > > > > >> > > >>> > >> increase the memory (4G, 8G and 16G) and >>> set >>> > some >>> > > > > > Docker >>> > > > > > > > >> limits >>> > > > > > > > >> > to >>> > > > > > > > >> > > >>> > >> "infinity" (especially LimitMEMLOCK). >>> > > > > > > > >> > > >>> > >> >>> > > > > > > > >> > > >>> > >> I've attached Cassandra's log dump... maybe >>> > > someone >>> > > > > can >>> > > > > > > > help >>> > > > > > > > >> out >>> > > > > > > > >> > > >>> > >here? Is >>> > > > > > > > >> > > >>> > >> it even possible to run Fineract CN on >>> 32GB of >>> > > > > memory? >>> > > > > > > > >> > > >>> > >> >>> > > > > > > > >> > > >>> > >> Other than that the setup would be ready to >>> > go... 
Other than that, the setup would be ready to go... we just need to get the database running more reliably.

Cheers,

Aleks

On Mon, Jun 11, 2018 at 2:38 PM Aleksandar Vidakovic <[email protected]> wrote:

@Rajan: I had to restart it again (needed to add some additional reverse proxy configuration for the web UI to work)... and there were more exceptions that I hope are fixed now.

Just FYI.
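(The thread doesn't say which reverse proxy sits in front of the web UI; for anyone reproducing the setup later, a minimal config of that shape might look like the following, assuming nginx and a web app listening locally on port 4200 (both are assumptions):

server {
    listen 80;
    server_name fineract-vm.apache.org;

    # forward all UI traffic to the locally running web app
    location / {
        proxy_pass http://127.0.0.1:4200;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
)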
On Mon, Jun 11, 2018 at 1:47 PM Aleksandar Vidakovic <[email protected]> wrote:

No problem... let me know how it goes. I can get back to testing only later tonight.

Cheers

On Mon, Jun 11, 2018, 1:45 PM Rajan Maurya <[email protected]> wrote:

Sorry, I missed the 30 min; I will test after 30 min.

Big thanks for this 🙂

On Mon, Jun 11, 2018 at 5:12 PM Aleksandar Vidakovic <[email protected]> wrote:

@Rajan: can't see the image you posted.

... and as I said: the services are still starting... and the best estimate I have right now (as already mentioned) is 30 min.

On Mon, Jun 11, 2018 at 1:40 PM Rajan Maurya <[email protected]> wrote:

[image: image.png]

On Mon, Jun 11, 2018 at 5:08 PM Rajan Maurya <[email protected]> wrote:

[image: image.png]
I am getting this.

On Mon, Jun 11, 2018 at 5:05 PM Aleksandar Vidakovic <[email protected]> wrote:

Hi all,

so... the demo server is (almost) ready... it took me a moment and a couple of restarts to figure out some boot failures... the services are quite resource hungry and the default settings won't do it.

The services are still starting and this will take a while (my best guess right now is around 30 min or so).
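(Since the boot failures look resource related: assuming all the services run as Docker containers on this box, a quick way to see which container is eating the memory while everything spins up is

docker stats --no-stream

It prints one line per container with its current CPU and memory usage, so it's easy to spot whether Cassandra or one of the microservices is the one hitting its limit.)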
You can access the web UI at: http://fineract-vm.apache.org/login

Credentials:

Tenant   : playground
Username : operator
Password : init1@l

Note (to self): the safest way to compile this app is with NodeJS 8.11.1 (I tried with 10.3.0 before; it won't work).

... and the web services at:

Identity Service:   http://fineract-vm.apache.org:2021/identity/v1
Office Service:     http://fineract-vm.apache.org:2023/office/v1
Customer Service:   http://fineract-vm.apache.org:2024/customer/v1
Accounting Service: http://fineract-vm.apache.org:2025/accounting/v1
Portfolio Service:  http://fineract-vm.apache.org:2026/portfolio/v1
Deposit Service:    http://fineract-vm.apache.org:2027/deposit/v1
Teller Service:     http://fineract-vm.apache.org:2028/teller/v1
Reporting Service:  http://fineract-vm.apache.org:2029/reporting/v1
Cheque Service:     http://fineract-vm.apache.org:2030/cheques/v1
Payroll Service:    http://fineract-vm.apache.org:2031/payroll/v1

Note: restarting the services takes quite a while... if you encounter connection problems, retry a couple of minutes later, in case I am currently working on something.
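(Two small helpers for anyone testing against this box. To pin the Node version for building the web app, assuming nvm is installed:

nvm install 8.11.1
nvm use 8.11.1

... and a quick reachability probe for the endpoints above, here the identity service; any HTTP status code at all means the service is up and answering, though the exact code depends on the endpoint:

curl -sS -o /dev/null -w '%{http_code}\n' http://fineract-vm.apache.org:2021/identity/v1
)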
For the moment I'll restart the backend services once per day or so to reset the data; I guess we have to figure out how we want to handle this (also concerning passwords etc.).

If you have any suggestions about where to put this demo server configuration, let me know here.

Let me know if you encounter any problems (I have not extensively tested it yet).

Cheers,

Aleks

On Mon, Jun 11, 2018 at 12:00 PM Aleksandar Vidakovic <[email protected]> wrote:

Progress! All the modules seem to be starting fine now... just have

--
*Ed Cable*
President/CEO, Mifos Initiative
[email protected] | Skype: edcable | Mobile: +1.484.477.8649

*Collectively Creating a World of 3 Billion Maries | *http://mifos.org
<http://facebook.com/mifos> <http://www.twitter.com/mifos>
