[Bug 1546459] Re: segfault at b774bd9d ip b7352a0d sp bfda8f30 error 7 in libresolv-2.19.so[b7349000+13000]

2016-02-23 Thread born2chill
Please fix this bug in the mini.iso - we're using it to provision
servers with Foreman and ran into problems. It took us the better part of
yesterday to discover that it wasn't our setup but a broken
package.

Kudos to the other posters for pointing out the workaround with the FQDN
:)
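
For anyone else provisioning with preseed files, a minimal sketch of the
FQDN workaround as applied at install time (hostname and domain values
are placeholders, not from our setup):

# give the installer a fully-qualified hostname instead of a short one
d-i netcfg/get_hostname string node01.example.com
d-i netcfg/get_domain string example.com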

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1546459

Title:
  segfault at b774bd9d ip b7352a0d sp bfda8f30 error 7 in
  libresolv-2.19.so[b7349000+13000]

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/eglibc/+bug/1546459/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 1327222] Re: Segfault: pacemaker segfaults randomly on Ubuntu trusty 14.04

2014-06-16 Thread born2chill
** Changed in: corosync (Ubuntu)
   Status: New => Invalid

** Changed in: pacemaker (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1327222

Title:
  Segfault: pacemaker segfaults randomly on Ubuntu trusty 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1327222/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1327222] Re: Segfault: pacemaker segfaults randomly on Ubuntu trusty 14.04

2014-06-16 Thread born2chill
I found out that it was not the cluster stack itself causing the issues
but the tool that I used to configure the cluster: LCMC. Although LCMC
has been working flawlessly for me on older versions of
corosync/pacemaker, it seems it hasn't been updated to work with corosync
2.3.x and pacemaker 1.1.x. So watch out until LCMC gets updated (at least
1.6.8, current as of 2014-06-16, doesn't work reliably).
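
If you're unsure which stack versions you're on before pointing LCMC at
a cluster, a quick check (plain dpkg, nothing LCMC-specific):

# print package name and version for the installed cluster stack
dpkg -l corosync pacemaker | awk '/^ii/ {print $2, $3}'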

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1327222

Title:
  Segfault: pacemaker segfaults randomly on Ubuntu trusty 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1327222/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1327222] Re: Segfault: corosync segfaults randomly on Ubuntu trusty 14.04

2014-06-07 Thread born2chill
** Also affects: pacemaker (Ubuntu)
   Importance: Undecided
   Status: New

** Description changed:

  I'm running a two node HA Cluster with pacemaker/corosync and a pretty
  simple configuration  - only an IP address, one service  and two clone
  sets of resources are managed (see below). however i run into constant
- crashes of corosync on both nodes. At the moment this behaviour makes
- the cluster unusable.
+ crashes of pacemaker (looked like corosync at first) on both nodes. At
+ the moment this behaviour makes the cluster unusable.
  
  I attached the cluster config, cib.xml and the crashdumps to the bug,
  hopefully someone can make something of it.
- 
  
  ~# crm_mon -1
  crm_mon -1
  Last updated: Fri Jun  6 15:43:14 2014
  Last change: Fri Jun  6 10:28:17 2014 via cibadmin on lbsrv52
  Stack: corosync
  Current DC: lbsrv51 (1) - partition with quorum
  Version: 1.1.10-42f2063
  2 Nodes configured
  6 Resources configured
  
  Online: [ lbsrv51 lbsrv52 ]
  
-  Resource Group: grp_HAProxy-Front-IPs
-  res_IPaddr2_Test   (ocf::heartbeat:IPaddr2):   Started lbsrv51 
-  res_pdnsd_pdnsd(lsb:pdnsd):Started lbsrv51 
-  Clone Set: cl_isc-dhcp-server_1 [res_isc-dhcp-server_1]
-  Started: [ lbsrv51 lbsrv52 ]
-  Clone Set: cl_tftpd-hpa_1 [res_tftpd-hpa_1]
-  Started: [ lbsrv51 lbsrv52 ]
- 
+  Resource Group: grp_HAProxy-Front-IPs
+  res_IPaddr2_Test   (ocf::heartbeat:IPaddr2):   Started lbsrv51
+  res_pdnsd_pdnsd(lsb:pdnsd):Started lbsrv51
+  Clone Set: cl_isc-dhcp-server_1 [res_isc-dhcp-server_1]
+  Started: [ lbsrv51 lbsrv52 ]
+  Clone Set: cl_tftpd-hpa_1 [res_tftpd-hpa_1]
+  Started: [ lbsrv51 lbsrv52 ]
  
  == corosync.log ==
  Jun 06 15:14:56 [2324] lbsrv51 cib: error: pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
  Jun 06 15:14:56 [2327] lbsrv51 attrd: error: pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
  Jun 06 15:14:56 [2327] lbsrv51 attrd: crit: attrd_cs_destroy: Lost connection to Corosync service!
  Jun 06 15:14:56 [2327] lbsrv51 attrd: notice: main: Exiting...
  Jun 06 15:14:56 [2324] lbsrv51 cib: error: cib_cs_destroy: Corosync connection lost!  Exiting.
  Jun 06 15:14:56 [2327] lbsrv51 attrd: notice: main: Disconnecting client 0x7f1f86244a10, pid=2329...
  Jun 06 15:14:56 [2324] lbsrv51 cib: info: terminate_cib: cib_cs_destroy: Exiting fast...
  Jun 06 15:14:56 [2324] lbsrv51 cib: info: crm_client_destroy: Destroying 0 events
  Jun 06 15:14:56 [2327] lbsrv51 attrd: error: attrd_cib_connection_destroy: Connection to the CIB terminated...
  Jun 06 15:14:56 [2324] lbsrv51 cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
  Jun 06 15:14:56 [2324] lbsrv51 cib: info: crm_client_destroy: Destroying 0 events
  Jun 06 15:14:56 [2324] lbsrv51 cib: info: crm_client_destroy: Destroying 0 events
  Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: error: crm_ipc_read: Connection to cib_rw failed
  Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: error: mainloop_gio_callback: Connection to cib_rw[0x7f52f2d82c10] closed (I/O condition=17)
  Jun 06 15:14:56 [2324] lbsrv51 cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
  Jun 06 15:14:56 [2324] lbsrv51 cib: info: crm_client_destroy: Destroying 0 events
  Jun 06 15:14:56 [2324] lbsrv51 cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
  Jun 06 15:14:56 [2324] lbsrv51 cib: info: crm_xml_cleanup: Cleaning up memory from libxml2
  Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: notice: cib_connection_destroy: Connection to the CIB terminated. Shutting down.
  Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: info: stonith_shutdown: Terminating with 1 clients
  Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: info: crm_client_destroy: Destroying 0 events
  Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: info: qb_ipcs_us_withdraw: withdrawing server sockets
  Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: info: main: Done
  Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: info: crm_xml_cleanup: Cleaning up memory from libxml2
  Jun 06 15:14:56 [2329] lbsrv51 crmd: error: crm_ipc_read: Connection to cib_shm failed
  Jun 06 15:14:56 [2329] lbsrv51 crmd: error: mainloop_gio_callback: Connection to cib_shm[0x7f97ed1f6980] closed (I/O condition=17)
  Jun 06 15:14:56 [2329] lbsrv51 crmd: error: crmd_cib_connection_destroy: Connection to the CIB terminated...
  Jun 06 15:14:56 [2329] lbsrv51 crmd: error: do_log: FSA: Input I_ERROR from crmd_cib_connection_destroy() received in state S_IDLE
  Jun 06 15:14:56 [2329] lbsrv51 crmd: notice: do_state_transition: State transition S_IDLE ->

[Bug 1327222] [NEW] Segfault: corosync segfaults randomly on Ubuntu trusty 14.04

2014-06-06 Thread born2chill
Public bug reported:

I'm running a two node HA cluster with pacemaker/corosync and a pretty
simple configuration - only an IP address, one service and two clone
sets of resources are managed (see below). However, I run into constant
crashes of corosync on both nodes. At the moment this behaviour makes
the cluster unusable.

I attached the cluster config, cib.xml and the crashdumps to the bug,
hopefully someone can make something of it.


~# crm_mon -1
crm_mon -1
Last updated: Fri Jun  6 15:43:14 2014
Last change: Fri Jun  6 10:28:17 2014 via cibadmin on lbsrv52
Stack: corosync
Current DC: lbsrv51 (1) - partition with quorum
Version: 1.1.10-42f2063
2 Nodes configured
6 Resources configured

Online: [ lbsrv51 lbsrv52 ]

 Resource Group: grp_HAProxy-Front-IPs
     res_IPaddr2_Test   (ocf::heartbeat:IPaddr2):   Started lbsrv51
     res_pdnsd_pdnsd    (lsb:pdnsd):                Started lbsrv51
 Clone Set: cl_isc-dhcp-server_1 [res_isc-dhcp-server_1]
     Started: [ lbsrv51 lbsrv52 ]
 Clone Set: cl_tftpd-hpa_1 [res_tftpd-hpa_1]
     Started: [ lbsrv51 lbsrv52 ]


== corosync.log ==
Jun 06 15:14:56 [2324] lbsrv51 cib: error: pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
Jun 06 15:14:56 [2327] lbsrv51 attrd: error: pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
Jun 06 15:14:56 [2327] lbsrv51 attrd: crit: attrd_cs_destroy: Lost connection to Corosync service!
Jun 06 15:14:56 [2327] lbsrv51 attrd: notice: main: Exiting...
Jun 06 15:14:56 [2324] lbsrv51 cib: error: cib_cs_destroy: Corosync connection lost!  Exiting.
Jun 06 15:14:56 [2327] lbsrv51 attrd: notice: main: Disconnecting client 0x7f1f86244a10, pid=2329...
Jun 06 15:14:56 [2324] lbsrv51 cib: info: terminate_cib: cib_cs_destroy: Exiting fast...
Jun 06 15:14:56 [2324] lbsrv51 cib: info: crm_client_destroy: Destroying 0 events
Jun 06 15:14:56 [2327] lbsrv51 attrd: error: attrd_cib_connection_destroy: Connection to the CIB terminated...
Jun 06 15:14:56 [2324] lbsrv51 cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun 06 15:14:56 [2324] lbsrv51 cib: info: crm_client_destroy: Destroying 0 events
Jun 06 15:14:56 [2324] lbsrv51 cib: info: crm_client_destroy: Destroying 0 events
Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: error: crm_ipc_read: Connection to cib_rw failed
Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: error: mainloop_gio_callback: Connection to cib_rw[0x7f52f2d82c10] closed (I/O condition=17)
Jun 06 15:14:56 [2324] lbsrv51 cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun 06 15:14:56 [2324] lbsrv51 cib: info: crm_client_destroy: Destroying 0 events
Jun 06 15:14:56 [2324] lbsrv51 cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun 06 15:14:56 [2324] lbsrv51 cib: info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: notice: cib_connection_destroy: Connection to the CIB terminated. Shutting down.
Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: info: stonith_shutdown: Terminating with 1 clients
Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: info: crm_client_destroy: Destroying 0 events
Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: info: main: Done
Jun 06 15:14:56 [2325] lbsrv51 stonith-ng: info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun 06 15:14:56 [2329] lbsrv51 crmd: error: crm_ipc_read: Connection to cib_shm failed
Jun 06 15:14:56 [2329] lbsrv51 crmd: error: mainloop_gio_callback: Connection to cib_shm[0x7f97ed1f6980] closed (I/O condition=17)
Jun 06 15:14:56 [2329] lbsrv51 crmd: error: crmd_cib_connection_destroy: Connection to the CIB terminated...
Jun 06 15:14:56 [2329] lbsrv51 crmd: error: do_log: FSA: Input I_ERROR from crmd_cib_connection_destroy() received in state S_IDLE
Jun 06 15:14:56 [2329] lbsrv51 crmd: notice: do_state_transition: State transition S_IDLE -> S_RECOVERY [ input=I_ERROR cause=C_FSA_INTERNAL origin=crmd_cib_connection_destroy ]
Jun 06 15:14:56 [2329] lbsrv51 crmd: warning: do_recover: Fast-tracking shutdown in response to errors
Jun 06 15:14:56 [2329] lbsrv51 crmd: warning: do_election_vote: Not voting in election, we're in state S_RECOVERY
Jun 06 15:14:56 [2329] lbsrv51 crmd: info: do_dc_release: DC role released
Jun 06 15:14:56 [2322] lbsrv51 pacemakerd: info: pcmk_child_exit: Child process stonith-ng (2325) exited: OK (0)
Jun 06 15:14:56 [2322] lbsrv51 pacemakerd: info: crm_cs_flush: Sent 0 CPG messages (1 remaining, last=10): Library error (2)
Jun 06 15:14:56

[Bug 1327222] Re: Segfault: corosync segfaults randomly on Ubuntu trusty 14.04

2014-06-06 Thread born2chill
** Attachment added: "pacemaker crashdump"
   
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1327222/+attachment/4126486/+files/_usr_lib_pacemaker_crmd.111.crash
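
In case someone wants to inspect the attached crashdumps themselves, a
sketch using apport-retrace (the local file name is an assumption - use
whatever you saved the attachment as):

sudo apt-get install apport-retrace gdb
# -g unpacks the core from the .crash report and opens it in gdb,
# so you can run 'bt full' against the crashed crmd
apport-retrace -g _usr_lib_pacemaker_crmd.111.crash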

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1327222

Title:
  Segfault: corosync segfaults randomly on Ubuntu trusty 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1327222/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1327222] Re: Segfault: corosync segfaults randomly on Ubuntu trusty 14.04

2014-06-06 Thread born2chill
** Attachment added: "corosync crashdump"
   
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1327222/+attachment/4126487/+files/_usr_sbin_corosync.0.crash

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1327222

Title:
  Segfault: corosync segfaults randomly on Ubuntu trusty 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1327222/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1327222] Re: Segfault: corosync segfaults randomly on Ubuntu trusty 14.04

2014-06-06 Thread born2chill
** Attachment added: "Cluster Information Base XML"
   
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1327222/+attachment/4126485/+files/cib.xml

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1327222

Title:
  Segfault: corosync segfaults randomly on Ubuntu trusty 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1327222/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1327222] Re: Segfault: corosync segfaults randomly on Ubuntu trusty 14.04

2014-06-06 Thread born2chill
At the moment I'm running corosync in debug mode, so I should get more
logs soon.
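
For reference, the debug logging was enabled with a stanza along these
lines in /etc/corosync/corosync.conf (a sketch of the standard logging
options, not my verbatim config):

logging {
        to_logfile: yes
        logfile: /var/log/corosync/corosync.log
        debug: on
        timestamp: on
}

followed by restarting corosync on both nodes.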

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to corosync in Ubuntu.
https://bugs.launchpad.net/bugs/1327222

Title:
  Segfault: corosync segfaults randomly on Ubuntu trusty 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1327222/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1325847] [NEW] Improvement: initscript enhancement with support for conf.d and configtest on startup

2014-06-03 Thread born2chill
Public bug reported:

The haproxy initscript lacks a configtest option, which haproxy natively
supports. It also does not warn the user if haproxy has been disabled in the
default file, but exits silently.
Also, it has become a de-facto standard for daemons to support conf.d
configuration files.
I attached a patch for the current init script, which remedies all these
issues and should be forward/backward compatible with haproxy 1.3/1.4/1.5.

** Affects: haproxy (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: improvement init patch

** Patch added: "patch for current haproxy init.d script"
   
https://bugs.launchpad.net/bugs/1325847/+attachment/4124542/+files/haproxy_init.patch

** Description changed:

- The haproxy initscript misses a configtest option, which haproxy natively 
supports. It
- also does not warn the user, if haproxy has been disabled in the default 
file, but exits
- silently.
- Also, it has become a de-facto standard for daemons to include conf.d 
configuration
- file support. 
- I attached a patch for the current init script, which remedies all these 
issues and should
- be forward/backward compatible to haproxy 1.3/1.4/1.5.
+ The haproxy initscript misses a configtest option, which haproxy natively 
supports. It also does not warn the user, if haproxy has been disabled in the 
default file, but exits silently.
+ Also, it has become a de-facto standard for daemons to include conf.d 
configuration file support.
+ I attached a patch for the current init script, which remedies all these 
issues and should be forward/backward compatible to haproxy 1.3/1.4/1.5.
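
For anyone skimming, the two additions look roughly like this (a sketch
of the idea, not the attached patch verbatim; variable names and the
conf.d path are assumptions):

# collect conf.d snippets as additional -f arguments
EXTRACONF=""
for f in /etc/haproxy/conf.d/*.cfg; do
  [ -r "$f" ] && EXTRACONF="$EXTRACONF -f $f"
done

# new initscript action, using haproxy's native check mode
configtest)
  $HAPROXY -c -f "$CONFIG" $EXTRACONF
  ;;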

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1325847

Title:
  Improvement: initscript enhancement with support for conf.d and
  configtest on startup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1325847/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 1325847] Re: Improvement: initscript enhancement with support for conf.d and configtest on startup

2014-06-03 Thread born2chill
I filed a bug with Debian; perhaps it will get included in one of the next
releases:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=750459

** Bug watch added: Debian Bug tracker #750459
   http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=750459

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to haproxy in Ubuntu.
https://bugs.launchpad.net/bugs/1325847

Title:
  Improvement: initscript enhancement with support for conf.d and
  configtest on startup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/haproxy/+bug/1325847/+subscriptions

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 545903] [NEW] Initscript does not enable timesync, misses status option

2010-03-24 Thread born2chill
Public bug reported:

Binary package hint: open-vm-tools

Description: Ubuntu lucid (development branch)
Release: 10.04
Package: open-vm-tools [2010.02.23-236320-1+ubuntu1]

The open-vm-tools are missing an option to enable the host-to-VM
timesync, which can be activated via the vmware-toolbox-cmd command.
Also, the init script doesn't have a 'status' option, which is always
nice to have (monitoring etc.). Both options are missing at the
moment.
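
For reference, the timesync commands the init script could call
(subcommand names from the vmware-toolbox-cmd CLI; exact output may
differ by release):

vmware-toolbox-cmd timesync enable   # turn host-to-VM timesync on
vmware-toolbox-cmd timesync status   # report whether it is enabled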

** Affects: open-vm-tools (Ubuntu)
 Importance: Undecided
 Assignee: Ubuntu Virtualisation team (ubuntu-virt)
 Status: New

-- 
Initscript does not enable timesync, misses status option
https://bugs.launchpad.net/bugs/545903
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 545903] Re: Initscript does not enable timesync, misses status option

2010-03-24 Thread born2chill

** Attachment added: "Init Script and default-file"
   http://launchpadlibrarian.net/41823025/open-vm-tools-files.tar.bz2

** Changed in: open-vm-tools (Ubuntu)
 Assignee: (unassigned) => Ubuntu Virtualisation team (ubuntu-virt)

** Description changed:

  Binary package hint: open-vm-tools
  
  Description:Ubuntu lucid (development branch)
  Release:10.04
  Package:open-vm-tools [2010.02.23-236320-1+ubuntu1]
  
- 
- The open-vm-tools are missing an option to enable the host-to-vm timesync, 
which can be activated via the vmware-toolbox-cmd-command. Also the 
init-script it doesn't have a 'status' option, which is always nice to have 
(monitoring etc...). Both options are missing at the moment.
+ The open-vm-tools are missing an option to enable the host-to-vm
+ timesync, which can be activated via the vmware-toolbox-cmd-command.
+ Also the init-script doesn't have a 'status' option, which is always
+ nice to have (monitoring etc...). Both options are missing at the
+ moment.

-- 
Initscript does not enable timesync, misses status option
https://bugs.launchpad.net/bugs/545903
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 359177] Re: Strange or obsolete code in mysql initscript

2009-11-25 Thread born2chill
** Changed in: mysql-dfsg-5.0 (Ubuntu)
 Assignee: (unassigned) => Patch for MySQL team (mysql-patch-team)

-- 
Strange or obsolete code in mysql initscript
https://bugs.launchpad.net/bugs/359177
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to mysql-dfsg-5.0 in ubuntu.

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 359177] Re: Strange or obsolete code in mysql initscript

2009-11-17 Thread born2chill
Hi Alexey,

true, killing all mysqld procs isn't the best choice; however, I tried
to stick to the original initscript, which also killed the processes
after some time. It's no problem to change this behaviour if wanted; only
the 'kill -9' and 'log_msg' lines have to be altered. Who is up to
decide/implement this (sorry, I don't know any mysql maintainer...)?

br,
David

-- 
Strange or obsolete code in mysql initscript
https://bugs.launchpad.net/bugs/359177
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to mysql-dfsg-5.0 in ubuntu.

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 359177] Re: Strange or obsolete code in mysql initscript

2009-11-16 Thread born2chill
Patch for the mysql init script attached. Reworked the 'stop' section
some more, removed unused/old variables etc.

br,
David

** Attachment added: "Patch for the mysql-server-5.0 init script."
   http://launchpadlibrarian.net/35750472/mysql.patch

-- 
Strange or obsolete code in mysql initscript
https://bugs.launchpad.net/bugs/359177
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to mysql-dfsg-5.0 in ubuntu.

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 207787] Re: Multiple tomcat instances

2009-10-08 Thread born2chill
hi,

updated the forum posts too. Binary packages are ready for download now.

br,
David

-- 
Multiple tomcat instances
https://bugs.launchpad.net/bugs/207787
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 207787] Re: Multiple tomcat instances

2009-10-07 Thread born2chill
hi,

as a start I repackaged tomcat6 for 8.04 LTS, starting from the jaunty
package and using the Apache source files from Tomcat 6.0.20. Please
consider this post on the forums:
http://ubuntuforums.org/showpost.php?p=8066371&postcount=4

also the new packages are up for grabs here:
http://aegis.dest-unreachable.net/debs/tomcat/

how do we want to continue the multi instance effort?

br,
David

-- 
Multiple tomcat instances
https://bugs.launchpad.net/bugs/207787
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 207787] Re: Multiple tomcat instances

2009-10-02 Thread born2chill
Hi there,

there seem to be lots of use cases out there... here are my two cents on
the multi-instance tomcat5.5 thingy.

skel files, a slightly reworked init script and a create-instance script
that accepts options for ports, IP and some Java opts. Drop the contents
of the zipfile into /usr/local/bin, then:

unzip tomcat5.5_instance.zip
mv tomcat5.5_start_instance /etc/init.d

view the options:
./tomcat5.5_new_instance.sh -h

It will create a new tomcat instance with its own init script and
catalina_base, log and work folders, so no cluttering up the system. It
also has an option to remove instances that are no longer used. It needs
just tomcat5.5 and a Java JDK, and will re-use the
/usr/share/tomcat5.5-libs.

example:
./tomcat_new_instance.sh -C -I test -i 10.205.30.113 -j /usr/lib/jvm/java-1.5.0-sun -a 8010 -p 8181 -s 8006 -x 128 -y 1024 -z 256

would create an instance using Sun's Java 1.5, binding to IP
10.205.30.113 with AJP on 8010, HTTP on 8181 and the shutdown port on
8006, setting Xms to 128m, Xmx to 1024m and the max perm size to 256m.

./tomcat_new_instance.sh -D -I test
would remove the above instance.

br,
David

** Attachment added: "yet another tomcat5.5 instance mechanism"
   http://launchpadlibrarian.net/32865305/tomcat5.5_instance.zip

-- 
Multiple tomcat instances
https://bugs.launchpad.net/bugs/207787
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 207787] Re: Multiple tomcat instances

2009-10-02 Thread born2chill
Hi again,

thx for your comments, greatly appreciated. At my company we mainly use
standard packages, and this script builds upon them; however, I was aware
of neither the broken state of tomcat5.5 nor the thread you pointed out
above. Thanks for that.

No, it is no problem to rename the script or even split it into
create/delete. Actually it was only supposed to create instances and be
done, but well, things tend to grow... I was actually going to rework it
for tomcat6 and the upcoming 10.04 'lucid' LTS release, as we will use
that for the foreseeable future.

As we deploy servers via puppet
(http://reductivelabs.com/trac/puppet/wiki/AboutPuppet - check it out,
it's cool! ;), the script has all the options it needs so we can
automatically deploy and start a complete instance. Also, you need root
rights to operate it, in contrast to the scripts in the tomcat-user
package. If you look at the reworked init script, you'll notice that it
is only to be called via symlinks; it determines via the symlink's name
which instance it should handle, so I'm not sure whether this is possible
with your new init scripts. However, I like the idea, as it gives the
whole thing some consistency: all the folders (catalina_base, log,
config...) and the initscript share the instance's name. This makes
server handling easier in large environments imho, but we'll probably
have to work out some 'golden middle' solution that won't interfere with
all the work you've already done.

I'll take a look at your packages when I'm back at the company on
Monday. The packages you provide are not the same as those coming with
the standard Ubuntu repos right now, I suppose?

br & thx,
David

-- 
Multiple tomcat instances
https://bugs.launchpad.net/bugs/207787
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 359177] Re: Strange or obsolete code in mysql initscript

2009-09-24 Thread born2chill
I also have problems with this initscript (mostly with wrong return
codes in scripted shutdowns); however, I think that the author overlooked
an 'else' statement:

@142ff: instead of:
142   if [ "$r" -ne 0 ]; then
143     log_end_msg 1
144     [ "$VERBOSE" != "no" ] && log_failure_msg "Error: $shutdown_out"
145     log_daemon_msg "Killing MySQL database server by signal" "mysqld"
146     killall -15 mysqld
147     server_down=
148     for i in 1 2 3 4 5 6 7 8 9 10; do
149       sleep 1
150       if mysqld_status check_dead nowarn; then server_down=1; break; fi
151     done

imho it probably should be:
142   if [ "$r" -ne 0 ]; then
143     log_end_msg 1
144     [ "$VERBOSE" != "no" ] && log_failure_msg "Error: $shutdown_out"
145     log_daemon_msg "Killing MySQL database server by signal" "mysqld"
146     killall -15 `pgrep "mysqld_safe\ --server-id=${INSTANCENMB}"`
147   else
148     server_down=
149     for i in `seq 1 10`; do
150       sleep 1
151       if mysqld_status check_dead nowarn; then server_down=1; break; fi
152     done

In this case the server is only killed if the preceding 'mysqladmin
shutdown' fails; otherwise the script goes into the 10-second for-loop,
checking that mysql has really gone away, and kills it with -9 only if
it hasn't. It is still true that the 'VERBOSE' test is
obsolete...

br,
David

-- 
Strange or obsolete code in mysql initscript
https://bugs.launchpad.net/bugs/359177
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to mysql-dfsg-5.0 in ubuntu.

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs

