On 01/12/2012 01:11 PM, Roka, Rajendra wrote:
Any more suggestions on this?
According to the new log, it still timed out after 60 seconds, so either
that still wasn't long enough, or there is a misconfiguration that
prevents the database from starting:
Jan 10 11:42:57 atp-wwdev1 modcluster: Starting service: mysql on node
Jan 10 11:42:57 atp-wwdev1 rgmanager[1690]: Starting stopped service service:mysql
Jan 10 11:42:58 atp-wwdev1 rgmanager[5252]: Adding IPv4 address 10.26.240.95/24 to eth0
Jan 10 11:43:01 atp-wwdev1 rgmanager[5401]: Starting Service mysql:mysql
Jan 10 11:44:01 atp-wwdev1 rgmanager[5657]: Starting Service mysql:mysql > Failed - Timeout Error
Jan 10 11:44:01 atp-wwdev1 rgmanager[1690]: start on mysql "mysql" returned 1 (generic error)
Jan 10 11:44:02 atp-wwdev1 rgmanager[1690]: #68: Failed to start service:mysql; return value: 1
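If the database is healthy but simply slow to start (for example, a long
InnoDB recovery), one option is to give the agent more time. A rough
sketch only, assuming a cman/rgmanager cluster where the agent is
/usr/share/cluster/mysql.sh; the parameter name is not confirmed here,
so check what the agent actually exposes first:

# See which wait/timeout knobs this version of the mysql agent has
grep -in 'wait' /usr/share/cluster/mysql.sh
# If it supports a startup wait parameter, add that attribute to the
# <mysql .../> resource in /etc/cluster/cluster.conf, bump config_version,
# and propagate the new configuration to the cluster:
cman_tool version -r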
What does it say in your mysql log? The resource script runs the
command to start the database and then waits for it to report success.
After 60 seconds it still had no indication of whether the database had
started, so it gave up.
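For example (a rough sketch; the log location depends on your my.cnf,
so these paths are only the common defaults):

# Find where mysqld writes its error log
grep -i 'log[_-]error\|datadir' /etc/my.cnf
# Typical default locations if log-error isn't set explicitly:
tail -n 100 /var/log/mysqld.log
tail -n 100 /var/lib/mysql/`hostname`.err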
Look in the logs to see if there is any indication of why the database
won't start. It could be that you have the wrong configuration in
/etc/my.cnf, that permissions are missing on some critical directories,
or that the resource script is misconfigured. You should also check
whether you can start the database manually (after mounting the NFS
export and adding the VIP, of course) outside of the cluster, and
compare the working and failing mysql logs.
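Something along these lines, as a rough sketch only; the NFS source and
mount point are placeholders, the VIP/interface are taken from the log
above, so adjust everything to match your cluster.conf:

# Keep rgmanager out of the way while testing
clusvcadm -d mysql
# Recreate the service's environment by hand (placeholder NFS source)
mount <nfs_server>:<export> /var/lib/mysql
ip addr add 10.26.240.95/24 dev eth0
# Start mysqld the way the agent would, and watch the error log
mysqld_safe --defaults-file=/etc/my.cnf &
tail -f /var/log/mysqld.log
# Clean up afterwards and hand the service back to the cluster
mysqladmin shutdown          # may need -u root -p
ip addr del 10.26.240.95/24 dev eth0
umount /var/lib/mysql
clusvcadm -e mysql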
Regards,
Ryan Mitchell
Software Maintenance Engineer
Support Engineering Group
Red Hat, Inc.
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster