Re: [PacketFence-users] New PF 7.0 Cluster Configuration Question
Just ran into that myself the other day. It was like some sysctl variable wasn't set.

Sent from my iPhone

On Jun 7, 2017, at 10:02 AM, Peter Reilly via PacketFence-users <packetfence-users@lists.sourceforge.net> wrote:

> Was resolved by a reboot of all hosts, in case anyone else has the same issue.
>
> Peter
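The sysctl hint above is left unspecified, and the thread never names a variable. Purely as a hypothetical illustration: a value added to /etc/sysctl.conf but never loaded with `sysctl -p` only takes effect after a reboot, which matches the "fixed by rebooting all hosts" symptom. The variable shown here, net.ipv4.ip_nonlocal_bind, is an assumption (it is commonly needed on hosts that must bind a virtual IP they do not currently own), not something confirmed in this thread.

```shell
# Hypothetical example -- the thread never identifies the actual variable.
# A setting persisted in /etc/sysctl.conf but never applied at runtime
# behaves exactly like this symptom: broken until the next reboot.
sysctl net.ipv4.ip_nonlocal_bind                    # inspect the running value
echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
sysctl -p                                           # apply now, no reboot needed
```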
Re: [PacketFence-users] New PF 7.0 Cluster Configuration Question
Was resolved by a reboot of all hosts, in case anyone else has the same issue.

Peter

On 05/31/2017 05:17 PM, Peter Reilly wrote:
> By resync you mean: /usr/local/pf/bin/cluster/sync ?
>
> Although that completed correctly the 1st time, now I get errors:
>
> ERROR : [1496265119.53917] Failed to connect to config service for
> namespace resource::switches_list, retrying
Re: [PacketFence-users] New PF 7.0 Cluster Configuration Question
Thank you,

By resync you mean: /usr/local/pf/bin/cluster/sync ?

Although that completed correctly the 1st time, now I get errors:

ERROR : [1496265119.53917] Failed to connect to config service for namespace resource::switches_list, retrying
[1496265119.53917] Failed to connect to config service for namespace resource::switches_list, retrying
ERROR : [1496265119.63971] Failed to connect to config service for namespace resource::switches_list, retrying
[1496265119.63971] Failed to connect to config service for namespace resource::switches_list, retrying
ERROR : [1496265119.7403] Failed to connect to config service for namespace resource::switches_list, retrying
[1496265119.7403] Failed to connect to config service for namespace resource::switches_list, retrying
ERROR : [1496265119.84084] Failed to connect to config service for namespace resource::switches_list, retrying
[1496265119.84084] Failed to connect to config service for namespace resource::switches_list, retrying

Can I reset the config on that host back to defaults?

I have a snapshot I can revert to if needed.

Peter

On 05/31/2017 04:38 PM, Antoine Amacher wrote:
> One of your nodes is not properly synced or cannot communicate with the others:
>
> "wsrep_incoming_addresses 10.18.0.36:3306,10.18.0.37:3306"
>
> Only 2 nodes are connected on your instance. Isolate the one which is
> failing and try to resync it with the rest of the cluster.
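The repeated "Failed to connect to config service" retries indicate the sync script cannot reach the local pfconfig daemon, rather than a problem with the configuration data itself, so resetting the config to defaults is probably unnecessary. A sketch of the usual first checks, assuming the PF 7 systemd unit is named packetfence-config and the log sits in the default /usr/local/pf/logs/ directory (both assumptions worth verifying on your install):

```shell
# Assumed unit and log names -- confirm them on your installation.
systemctl status packetfence-config --no-pager   # is the config service up?
systemctl restart packetfence-config             # restart it if it is not
tail -n 50 /usr/local/pf/logs/pfconfig.log       # look for startup errors
/usr/local/pf/bin/cluster/sync                   # then retry the sync
```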
Re: [PacketFence-users] New PF 7.0 Cluster Configuration Question
Hello Peter,

One of your nodes is not properly synced or cannot communicate with the others:

"wsrep_incoming_addresses 10.18.0.36:3306,10.18.0.37:3306"

Only 2 nodes are connected on your instance.

Isolate the one which is failing and try to resync it with the rest of the cluster.

Thanks

On 05/31/2017 03:56 PM, Peter Reilly wrote:
> I have a new configuration of packetfence, and I'm stuck at the final
> section of the guide: Checking the MariaDB sync.
>
> I'm new to MariaDB clustering. Any help would be appreciated. Thanks!
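Antoine's reading of wsrep_incoming_addresses can be checked mechanically: compare the expected member list against the addresses actually reporting in. A small sketch, using the three member IPs that appear in this thread:

```shell
# Compare expected cluster members against wsrep_incoming_addresses
# to find the node that is not reporting in.
expected="10.18.0.36 10.18.0.37 10.18.0.38"   # all three members (from the thread)
incoming="10.18.0.36:3306,10.18.0.37:3306"    # value from SHOW STATUS

for ip in $expected; do
  case ",$incoming," in
    *",$ip:"*) ;;                             # connected: nothing to report
    *) echo "missing: $ip" ;;                 # this node is absent
  esac
done
# -> missing: 10.18.0.38
```

Here 10.18.0.38 is the absent node, which matches the "nonlive peers: tcp://10.18.0.38:4567" lines in the MariaDB error log.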
[PacketFence-users] New PF 7.0 Cluster Configuration Question
I have a new configuration of packetfence, and I'm stuck at the final section of the guide: Checking the MariaDB sync.

I'm new to MariaDB clustering. Any help would be appreciated. Thanks!

Issuing mysql> show status like 'wsrep%'; shows:

wsrep_apply_oooe 0.00
wsrep_apply_oool 0.00
wsrep_apply_window 1.00
wsrep_causal_reads 0
wsrep_cert_deps_distance 1.50
wsrep_cert_index_size 6
wsrep_cert_interval 0.00
wsrep_cluster_conf_id 91
wsrep_cluster_size 2
wsrep_cluster_state_uuid d53707fc-462e-11e7-9e94-2307d1d8c35a
wsrep_cluster_status Primary
wsrep_commit_oooe 0.00
wsrep_commit_oool 0.00
wsrep_commit_window 1.00
wsrep_connected ON
wsrep_desync_count 1
wsrep_evs_delayed
    02bbf569-4637-11e7-b3bd-475698c96cee:tcp://10.18.0.38:4567:2,
    07b40a38-4636-11e7-96ad-8356df2560a3:tcp://10.18.0.38:4567:2,
    0bdbe92c-4639-11e7-bd9f-5ef6eacc266d:tcp://10.18.0.38:4567:2,
    0d6af863-4635-11e7-b534-065ae24e0e14:tcp://10.18.0.38:4567:1,
    119bc620-4638-11e7-8c60-1f44bae4b807:tcp://10.18.0.38:4567:2,
    13002f75-4634-11e7-a038-2206d6fbb42d:tcp://10.18.0.38:4567:2,
    17864215-4637-11e7-baed-87a29423c2d7:tcp://10.18.0.38:4567:1,
    1cc5fffc-4636-11e7-9611-b316acb7efd8:tcp://10.18.0.38:4567:4,
    20f0aa1f-4639-11e7-b9e2-26ed5e81aa1f:tcp://10.18.0.38:4567:5,
    2233a2eb-4635-11e7-8b9e-ab9cb722ae49:tcp://10.18.0.38:4567:1,
    266bcf47-4638-11e7-a0e4-96c21bef49e7:tcp://10.18.0.38:4567:4,
    28121797-4634-11e7-bcfe-56e8bb6c19b7:tcp://10.18.0.38:4567:3,
    2c4f5b44-4637-11e7-8213-93a3892fcf15:tcp://10.18.0.38:4567:3,
    35b7c77a-4639-11e7-9475-534e43ccb537:tcp://10.18.0.38:4567:3,
    37506d89-4635-11e7-8dc9-b7fcdffd54b7:tcp://10.18.0.38:4567:2,
    3b3344a8-4638-11e7-a347-96ac9e947fb6:tcp://10.18.0.38:4567:2,
    3ce12393-4634-11e7-a81f-c2a8c8de2496:tcp://10.18.0.38:4567:1,
    4114e63d-4637-11e7-b030-7efc7248d13f:tcp://10.18.0.38:4567:2,
    46a3ea11-4636-11e7-8fda-13eadcade32c:tcp://10.18.0.38:4567:4,
    4a8097a4-4639-11e7-bf2a-533999db367c:tcp://10.18.0.38:4567:3,
    4c17ad42-4635-11e7-b6e6-d315ee85be43:tcp://10.18.0.38:4567:4,
    50468a8e-4638-11e7-8601-1617155ed644:tcp://10.18.0.38:4567:2,
    51a6cd8b-4634-11e7-8a97-fad9fee5825c:tcp://10.18.0.38:4567:3,
    55da74f8-4637-11e7-a998-769b9f8e1983:tcp://10.18.0.38:4567:4,
    5b6cbebf-4636-11e7-a547-0641a36aa6e9:tcp://10.18.0.38:4567:3,
    5f47c624-4639-11e7-947c-6e7cf852e708:tcp://10.18.0.38:4567:2,
    60e069b1-4635-11e7-aeae-d77204b723ef:tcp://10.18.0.38:4567:3,
    650de7fc-4638-11e7-b6d9-827cc491159f:tcp://10.18.0.38:4567:2,
    6673fcbd-4634-11e7-80b0-0283b8f039e0:tcp://10.18.0.38:4567:2,
    6aa4abb6-4637-11e7-9ea7-0ebb92fd544f:tcp://10.18.0.38:4567:1,
    6c5aea5a-4633-11e7-9601-1b63974f512e:tcp://10.18.0.38:4567:4,
    70313a6d-4636-11e7-bc4c-eec94e18edfc:tcp://10.18.0.38:4567:1,
    75ac13d0-4635-11e7-9b79-3a8f9493d61d:tcp://10.18.0.38:4567:3,
    79d4f68b-4638-11e7-9269-d2d28710d4f
wsrep_evs_evict_list
wsrep_evs_repl_latency 0.000296411/0.000296411/0.000296411/0/1
wsrep_evs_state GATHER
wsrep_flow_control_paused 0.00
wsrep_flow_control_paused_ns 0
wsrep_flow_control_recv 0
wsrep_flow_control_sent 0
wsrep_gcomm_uuid d53696c2-462e-11e7-96e9-73b1bf863e3e
wsrep_incoming_addresses 10.18.0.36:3306,10.18.0.37:3306
wsrep_last_committed 6
wsrep_local_bf_aborts 0
wsrep_local_cached_downto 1
wsrep_local_cert_failures 0
wsrep_local_commits 6
wsrep_local_index 1
wsrep_local_recv_queue 0
wsrep_local_recv_queue_avg 0.045627
wsrep_local_recv_queue_max 2
wsrep_local_recv_queue_min 0
wsrep_local_replays 0
wsrep_local_send_queue 0
wsrep_local_send_queue_avg 0.00
wsrep_local_send_queue_max 1
wsrep_local_send_queue_min 0
wsrep_local_state 2
wsrep_local_state_comment Donor/Desynced
wsrep_local_state_uuid d53707fc-462e-11e7-9e94-2307d1d8c35a
wsrep_protocol_version 7
wsrep_provider_name Galera
wsrep_provider_vendor Codership
wsrep_provider_version 25.3.19(r3667)
wsrep_ready OFF
wsrep_received 263
wsrep_received_bytes 53338
wsrep_repl_data_bytes 2028
wsrep_repl_keys 22
wsrep_repl_keys_bytes 314
wsrep_repl_other_bytes 0
wsrep_replicated 6
wsrep_replicated_bytes 2726
wsrep_thread_count 2

The log shows the following.

tail -n 500 /var/lib/mysql/o-packetfence02-tdv.wctnet.org.err | grep -v 'A mysqld process already exists'

2017-05-31 15:52:57 140202702010112 [Note] WSREP: declaring 6bac5196 at tcp://10.18.0.36:4567 stable
2017-05-31 15:52:57 140202702010112 [Note] WSREP: Node 6bac5196 state prim
2017-05-31 15:52:59 140202702010112 [Note] WSREP: (d53696c2, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.18.0.38:4567
2017-05-31 15:53:00 140202702010112 [Note] WSREP: (d53696c2, 'tcp://0.0.0.0:4567') reconnecting to ad66a165 (tcp://10.18.0.38:4567), attempt 0
2017-05-31 15:53:03 140202702010112 [Note] WSREP: declaring 6bac5196 at tcp://10.18.0.36:4567 stable
2017-05-31 15:53:03 140202702010112 [Note] WSREP: Node 6bac5196 state prim
2017-05-31 15:53:03 140202702010112 [Note] WSREP: view(view_id(PRIM,6bac5196,527) memb { 6bac5196,0 d53696c2,0 } joined { } left { } partitioned { })
2017-05-31 15:53:03 140202702010112 [Note] WSREP: save pc into disk
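The status dump above can be read mechanically: for the three-node layout in this thread, a node that is fit to serve traffic shows wsrep_cluster_size = 3, wsrep_local_state_comment = Synced, and wsrep_ready = ON, whereas this node reports 2 / Donor/Desynced / OFF. A minimal sketch of that check (the threshold of 3 is specific to this cluster's expected size):

```shell
# Hedged sketch: flag a Galera node as degraded unless all three key
# wsrep indicators look healthy for a 3-member cluster.
check_wsrep() {
  size=$1; state=$2; ready=$3
  if [ "$size" -eq 3 ] && [ "$state" = "Synced" ] && [ "$ready" = "ON" ]; then
    echo "healthy"
  else
    echo "degraded: size=$size state=$state ready=$ready"
  fi
}

# Values reported in this thread:
check_wsrep 2 "Donor/Desynced" "OFF"
# -> degraded: size=2 state=Donor/Desynced ready=OFF
```

In a real script the three arguments would come from `mysql -e "SHOW STATUS LIKE 'wsrep%'"`; they are hard-coded here so the logic is self-contained.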