Re: [ClusterLabs] PCSD Certificate

2017-07-10 Thread Tomas Jelinek

On 6.7.2017 at 07:41, BUVI wrote:

Hi,

I would like to know why a certificate is created in pacemaker


Hi,

The certificate is not created by pacemaker. It is created by pcsd. It 
is used to encrypt network communication with pcsd, that is, access to 
the web UI and node-to-node communication.
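
If you want to check when it expires, something like this prints the 
validity dates (assuming the default location /var/lib/pcsd/pcsd.crt 
used on RHEL/CentOS; the path may differ on other distributions):

  # print the subject and validity dates of the certificate pcsd presents
  openssl x509 -noout -subject -dates -in /var/lib/pcsd/pcsd.crt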



and what will happen if it expires?


I suppose your browser will complain about the certificate being 
expired. If that happens (or at any other time), you can replace the 
certificate with your own using the "pcs pcsd certkey" command. 
Alternatively, delete the certificate on one node and restart pcsd 
there so that it generates a fresh certificate, then sync it to the 
other nodes with the "pcs pcsd sync-certificates" command.



Regards,
Tomas




Thanks and Regards,

Bhuvanesh Kumar .G
Linux and Email Administrator




___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org





Re: [ClusterLabs] PCSD - Change Port 2224

2017-07-10 Thread Tomas Jelinek

On 6.7.2017 at 10:47, philipp.achmuel...@arz.at wrote:

Hi,

I would like to change the default port for web access - currently this
is hardcoded to 2224. Are there any plans to make it configurable via a
config file so it can be changed more easily?


Hi,

Yes, we plan to make the port configurable. The feature request is 
tracked here:

https://bugzilla.redhat.com/show_bug.cgi?id=1415197
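
Until that lands there is nothing to edit; you can at least confirm 
which port pcsd is bound to, e.g.:

  # verify pcsd is listening on the default port 2224
  ss -tlnp | grep 2224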

Regards,
Tomas



thank you!
regards
Philipp


___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] fence_vbox Unable to connect/login to fencing device

2017-07-10 Thread Marek Grac
On Fri, Jul 7, 2017 at 1:45 PM, ArekW  wrote:

> The reason for --force is:
> Error: missing required option(s): 'ipaddr, login, plug' for resource
> type: stonith:fence_vbox (use --force to override)
>

It looks like you are using an unreleased upstream version of fence-agents
without a similarly new version of pcs (one that includes commit
7f85340b7aa4e8c016720012cf42c304e68dd1fe).
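
A quick way to see the mismatch is to compare the options the installed 
agent advertises with what pcs expects, e.g. (assuming fence_vbox is 
installed in the usual /usr/sbin path):

  # metadata printed by the agent itself (standard fence-agent action)
  fence_vbox -o metadata | grep 'parameter name'

  # what pcs knows about the agent
  pcs stonith describe fence_vbox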


>
> I have selinux disabled on both nodes:
> [root@nfsnode1 ~]# cat /etc/sysconfig/selinux
> SELINUX=disabled
>
> pcs stonith update vbox-fencing verbose=true
> Error: resource option(s): 'verbose', are not recognized for resource
> type: 'stonith::fence_vbox' (use --force to override)
>

It should be fixed in commit b47558331ba6615aa5720484301d644cc8e973fd (Jun 12).
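
Until you have a pcs build with that fix, the workaround is the one the
error message itself suggests, e.g.:

  # force pcs to accept the option it does not recognize yet
  pcs stonith update vbox-fencing verbose=true --force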


>
>

>
> Jul  7 13:37:49 nfsnode1 fence_vbox: Unable to connect/login to fencing
> device
> Jul  7 13:37:49 nfsnode1 stonith-ng[2045]: warning: fence_vbox[4765]
> stderr: [ Running command: /usr/bin/ssh -4  AW23321@10.0.2.2 -i
> /root/.ssh/id_rsa -p 22 -t '/bin/bash -c "PS1=\\[EXPECT\\]#\  /bin/bash
> --noprofile --norc"' ]
>

OK, so sometimes it works and sometimes it does not. It looks like our
timeouts are too strict for your environment. Try increasing login_timeout
from the default of 30s to a higher value.
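
For example (60 is just an illustrative value, pick what fits your
environment):

  # raise the login timeout of the existing stonith resource
  pcs stonith update vbox-fencing login_timeout=60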

m,
___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Active-Active NFS cluster failover test - system hangs (VirtualBox)

2017-07-10 Thread ArekW
Hi,
I've created a 2-node active-active HA cluster with an NFS resource. The
resources are active on both nodes. The cluster passes a failover test with
the pcs standby command but does not work when a "real" node shutdown occurs.

Test scenario with cluster standby:
- start cluster
- mount nfs share on client1
- start copying a file from client1 to the nfs share
- during the copy, put node1/node2 into standby mode (pcs cluster standby
nfsnode2; see the sketch after this list)
- the copy continues
- unstandby node1/node2
- the copy continues and the storage re-syncs (drbd)
- the copy finishes with no errors
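
The standby part of the test boils down to commands like these (node names
as in the pcs status output below):

  # move resources away from one node
  pcs cluster standby nfsnode2
  # ...let the copy continue, then bring the node back
  pcs cluster unstandby nfsnode2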

I can standby and unstandby the cluster many times and it works. The
problem begins when I do a "true" failover test by hard-shutting down one
of the nodes. Test results:
- start cluster
- mount nfs share on client1
- start copying a file from client1 to the nfs share
- during the copy, shut down node2 by stopping the node's virtual machine
(hard stop)
- the system hangs:


# rsync -a --bwlimit=2000 /root/testfile.dat /mnt/nfsshare/



[root@nfsnode1 nfs]# ls -lah
total 9,8M
drwxr-xr-x 2 root root 3,8K 07-10 11:07 .
drwxr-xr-x 4 root root 3,8K 07-10 08:20 ..
-rw-r--r-- 1 root root9 07-10 08:20 client1.txt
-rw-r- 1 root root0 07-10 11:07 .rmtab
-rw--- 1 root root 9,8M 07-10 11:07 .testfile.dat.9780fH

[root@nfsnode1 nfs]# pcs status
Cluster name: nfscluster
Stack: corosync
Current DC: nfsnode2 (version 1.1.15-11.el7_3.5-e174ec8) - partition with quorum
Last updated: Mon Jul 10 11:07:29 2017
Last change: Mon Jul 10 10:28:12 2017 by root via crm_attribute on nfsnode1

2 nodes and 15 resources configured

Online: [ nfsnode1 nfsnode2 ]

Full list of resources:

 Master/Slave Set: StorageClone [Storage]
 Masters: [ nfsnode1 nfsnode2 ]
 Clone Set: dlm-clone [dlm]
 Started: [ nfsnode1 nfsnode2 ]
 vbox-fencing   (stonith:fence_vbox):   Started nfsnode1
 Clone Set: ClusterIP-clone [ClusterIP] (unique)
     ClusterIP:0    (ocf::heartbeat:IPaddr2):   Started nfsnode2
     ClusterIP:1    (ocf::heartbeat:IPaddr2):   Started nfsnode1
 Clone Set: StorageFS-clone [StorageFS]
 Started: [ nfsnode1 nfsnode2 ]
 Clone Set: WebSite-clone [WebSite]
 Started: [ nfsnode1 nfsnode2 ]
 Clone Set: nfs-group-clone [nfs-group]
 Started: [ nfsnode1 nfsnode2 ]



[root@nfsnode1 nfs]# pcs status
Cluster name: nfscluster
Stack: corosync
Current DC: nfsnode1 (version 1.1.15-11.el7_3.5-e174ec8) - partition with quorum
Last updated: Mon Jul 10 11:07:43 2017
Last change: Mon Jul 10 10:28:12 2017 by root via crm_attribute on nfsnode1

2 nodes and 15 resources configured

Node nfsnode2: UNCLEAN (offline)
Online: [ nfsnode1 ]

Full list of resources:

 Master/Slave Set: StorageClone [Storage]
     Storage        (ocf::linbit:drbd):         Master nfsnode2 (UNCLEAN)
     Masters: [ nfsnode1 ]
 Clone Set: dlm-clone [dlm]
     dlm            (ocf::pacemaker:controld):  Started nfsnode2 (UNCLEAN)
     Started: [ nfsnode1 ]
 vbox-fencing   (stonith:fence_vbox):   Started nfsnode1
 Clone Set: ClusterIP-clone [ClusterIP] (unique)
     ClusterIP:0    (ocf::heartbeat:IPaddr2):   Started nfsnode2 (UNCLEAN)
     ClusterIP:1    (ocf::heartbeat:IPaddr2):   Started nfsnode1
 Clone Set: StorageFS-clone [StorageFS]
     StorageFS      (ocf::heartbeat:Filesystem): Started nfsnode2 (UNCLEAN)
     Started: [ nfsnode1 ]
 Clone Set: WebSite-clone [WebSite]
     WebSite        (ocf::heartbeat:apache):    Started nfsnode2 (UNCLEAN)
     Started: [ nfsnode1 ]
 Clone Set: nfs-group-clone [nfs-group]
     Resource Group: nfs-group:1
         nfs        (ocf::heartbeat:nfsserver): Started nfsnode2 (UNCLEAN)
         nfs-export (ocf::heartbeat:exportfs):  Started nfsnode2 (UNCLEAN)
     Started: [ nfsnode1 ]


[root@nfsnode1 nfs]# ls -lah



[root@nfsnode1 ~]# drbdadm status
storage role:Primary
  disk:UpToDate
  nfsnode2 connection:Connecting


[root@nfsnode1 ~]# exportfs
/mnt/drbd/nfs   10.0.2.0/255.255.255.0


login as: root
root@127.0.0.1's password:
Last login: Mon Jul 10 07:48:17 2017 from 10.0.2.2
# cd /mnt/
# ls


# mount
10.0.2.7:/ on /mnt/nfsshare type nfs4
(rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.20,local_lock=none,addr=10.0.2.7)




[root@nfsnode1 ~]# ls -lah
total 9,8M
drwxr-xr-x 2 root root 3,8K 07-10 11:07 .
drwxr-xr-x 4 root root 3,8K 07-10 08:20 ..
-rw-r--r-- 1 root root9 07-10 08:20 client1.txt
-rw-r- 1 root root0 07-10 11:16 .rmtab
-rw--- 1 root root 9,8M 07-10 11:07 .testfile.dat.9780fH




[root@nfsnode1 ~]# pcs status
Cluster name: nfscluster
Stack: corosync
Current DC: nfsnode1 (version 1.1.15-11.el7_3.5-e174ec8) - partition with quorum
Last updated: Mon Jul 10 11:17:19 2017
Last change: Mon Jul 10 10:28:12 2017 by root via crm_attribute on nfsnode1

2 nodes and 15 resources configured

Online: [ nfsnode1 nfsnode2 ]

Full list of resources:

 Master/Slave Set: StorageClone [Storage]
 Masters: [ nfsnode1 ]
 Stopped: [ nfsnode2 ]

Re: [ClusterLabs] Introducing the Anvil! Intelligent Availability platform

2017-07-10 Thread Kristoffer Grönlund
Digimer  writes:

> Hi all,
>
>   I suspect by now, many of you here have heard me talk about the Anvil!
> intelligent availability platform. Today, I am proud to announce that it
> is ready for general use!
>
> https://github.com/ClusterLabs/striker/releases/tag/v2.0.0
>

Cool, congratulations!

Cheers,
Kristoffer

>
>   Now, time to start working full time on version 3!
>
> -- 
> Digimer
> Papers and Projects: https://alteeve.com/w/
> "I am, somehow, less interested in the weight and convolutions of
> Einstein’s brain than in the near certainty that people of equal talent
> have lived and died in cotton fields and sweatshops." - Stephen Jay Gould
>

-- 
// Kristoffer Grönlund
// kgronl...@suse.com

___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org