Hi,
I filed a bug for this a while back: https://issues.apache.org/jira/browse/CLOUDSTACK-9227

If you have any further info, please add it to the ticket - particularly if you find a fix :)

Kind regards,

Paul Angus

[email protected]
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue

-----Original Message-----
From: Marc-Andre Jutras [mailto:[email protected]]
Sent: 13 June 2016 14:20
To: [email protected]
Subject: Re: Can't stop mgmt server cleanly for CS 4.8.0

try to upgrade your tomcat to ver 7...

On 2016-06-10 4:10 PM, Yiping Zhang wrote:
> RHEL 6.7 / java-1.7.0-openjdk-1.7.0.85-2.6.1.3 / tomcat6-6.0.24-90
>
> On 6/10/16, 12:49 PM, "Marc-Andre Jutras" <[email protected]> wrote:
>
>> which java / tomcat / centos version are you running on ?
>>
>> On 2016-06-10 3:41 PM, Yiping Zhang wrote:
>>> Hi, all:
>>>
>>> We have a cron job to restart the mgmt. service once a week. However, after we
>>> upgraded to CS 4.8.0, the cron job would leave the service stopped and Nagios
>>> would start paging the on-call. We traced the problem to the fact that the mgmt.
>>> service won't stop cleanly, even when we try to stop it manually from the CLI:
>>>
>>> # service cloudstack-management stop
>>> Stopping cloudstack-management: [FAILED]
>>> # service cloudstack-management status
>>> cloudstack-management dead but pid file exists
>>> The pid file locates at /var/run/cloudstack-management.pid and lock file at
>>> /var/lock/subsys/cloudstack-management.
>>> Starting cloudstack-management will take care of them or you can manually clean up.
>>> #
>>>
>>> The catalina.out log file has the following exceptions:
>>>
>>> INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-95:null) stopping bean ClusterManagerImpl
>>> INFO  [c.c.c.ClusterManagerImpl] (Thread-95:null) Stopping Cluster manager, msid : 60274787591663
>>> ERROR [c.c.c.ClusterServiceServletContainer] (Thread-11:null) Unexpected exception
>>> java.net.SocketException: Socket closed
>>>         at java.net.PlainSocketImpl.socketAccept(Native Method)
>>>         at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
>>>         at java.net.ServerSocket.implAccept(ServerSocket.java:530)
>>>         at java.net.ServerSocket.accept(ServerSocket.java:498)
>>>         at com.cloud.cluster.ClusterServiceServletContainer$ListenerThread.run(ClusterServiceServletContainer.java:131)
>>> log4j:WARN No appenders could be found for logger (com.cloud.cluster.ClusterServiceServletContainer).
>>> log4j:WARN Please initialize the log4j system properly.
>>> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
>>> Exception in thread "Timer-2" java.lang.NullPointerException
>>>         at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:304)
>>>         at org.apache.cloudstack.managed.context.ManagedContextRunnable.getContext(ManagedContextRunnable.java:66)
>>>         at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>>>         at org.apache.cloudstack.managed.context.ManagedContextTimerTask.run(ManagedContextTimerTask.java:27)
>>>         at java.util.TimerThread.mainLoop(Timer.java:555)
>>>         at java.util.TimerThread.run(Timer.java:505)
>>> Exception in thread "Timer-1" java.lang.NullPointerException
>>>         at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:304)
>>>         at org.apache.cloudstack.managed.context.ManagedContextRunnable.getContext(ManagedContextRunnable.java:66)
>>>         at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>>>         at org.apache.cloudstack.managed.context.ManagedContextTimerTask.run(ManagedContextTimerTask.java:27)
>>>         at java.util.TimerThread.mainLoop(Timer.java:555)
>>>         at java.util.TimerThread.run(Timer.java:505)
>>>
>>> The SocketException does not show up on every CS instance, but the
>>> NullPointerExceptions in threads Timer-1/2 are present on all CS instances.
>>>
>>> As a workaround, we have to stop the service a second time to clean up the
>>> leftover pid and lock files before starting the service again.
>>>
>>> Has anyone else seen this problem?
>>>
>>> Thanks,
>>>
>>> Yiping
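
For anyone scripting around this until it is fixed, a minimal sketch of the double-stop workaround Yiping describes above might look like the following (an untested sketch, not a fix; the pid/lock paths are the ones quoted in this thread):

#!/bin/sh
# Weekly restart with the workaround from this thread: the first stop may
# report FAILED and leave /var/run/cloudstack-management.pid and
# /var/lock/subsys/cloudstack-management behind, so stop a second time to
# clear the stale pid/lock files before starting again.
service cloudstack-management stop
service cloudstack-management stop
service cloudstack-management start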
