Re: [ClusterLabs] Can't get nfs4 to work.

2016-06-01 Thread Dennis Jacobfeuerborn
On 01.06.2016 20:25, Stephano-Shachter, Dylan wrote:
> Hello all,
> 
> I have just finished setting up my HA nfs cluster and I am having a small
> problem. I would like to have nfs4 working but whenever I try to mount I
> get the following message,
> 
> mount: no type was given - I'll assume nfs because of the colon

I'm not sure whether the type "nfs" is supposed to work with v4 as well,
but on my systems the mounts use the explicit type "nfs4", so you can try
mounting with "-t nfs4".
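
For example (the server address is taken from your log; the mount point
is hypothetical, adjust to your setup):

```sh
# Explicit v4 filesystem type:
mount -t nfs4 10.243.16.116:/data/testdata /mnt/testdata

# Or keep the generic type and pin the version with an option:
mount -t nfs -o vers=4 10.243.16.116:/data/testdata /mnt/testdata
```

If that still fails with "Protocol not supported", it may be worth
checking whether the server side actually advertises v4 at all, e.g.
with "cat /proc/fs/nfsd/versions" on the node running nfsd (a "-4"
there would explain the error).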

Regards,
  Dennis


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Can't get nfs4 to work.

2016-06-01 Thread Stephano-Shachter, Dylan
Hello all,

I have just finished setting up my HA nfs cluster and I am having a small
problem. I would like to have nfs4 working but whenever I try to mount I
get the following message,

mount: no type was given - I'll assume nfs because of the colon
mount.nfs: timeout set for Wed Jun  1 10:08:45 2016
mount.nfs: trying text-based options 'vers=4,addr=10.243.16.116,clientaddr=10.243.18.22'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'addr=10.243.16.116'
mount.nfs: prog 13, trying vers=3, prot=6
mount.nfs: trying 10.243.16.116 prog 13 vers 3 prot TCP port 2049
mount.nfs: prog 15, trying vers=3, prot=17
mount.nfs: trying 10.243.16.116 prog 15 vers 3 prot UDP port 20048

I cannot figure out why version 4 is not supported. My nfsserver resource
and export resource are:

Resource: ch_nfs (class=ocf provider=heartbeat type=nfsserver)
  Attributes: nfs_shared_infodir=/data/nfs nfs_ip=10.243.16.116
  Operations: start interval=0s timeout=40 (ch_nfs-start-interval-0s)
              stop interval=0s timeout=20s (ch_nfs-stop-interval-0s)
              monitor interval=0s (ch_nfs-monitor-interval-30s)

Resource: ch_export_testdata_18 (class=ocf provider=heartbeat type=exportfs)
  Attributes: clientspec=10.243.18.0/255.255.255.0 options=rw,no_root_squash directory=/data/testdata fsid=1
  Operations: start interval=0s timeout=40 (ch_export_testdata_18-start-interval-0s)
              stop interval=0s timeout=120 (ch_export_testdata_18-stop-interval-0s)
              monitor interval=10 timeout=20 (ch_export_testdata_18-monitor-interval-10)


[ClusterLabs] FYI: ocf:pacemaker:controld issue in rc3

2016-06-01 Thread Ken Gaillot
FYI, the ocf:pacemaker:controld (DLM) resource agent released with
Pacemaker 1.1.15-rc3 has an issue. It will work with an upstream patch
applied to DLM, but not with existing DLM versions.

This has been fixed as of commit 2c148ac, which will be in rc4. It
requires a stonith_admin enhancement, so to use it, you must compile the
entire pacemaker package, not just grab the agent.
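
A minimal sketch of doing that from git (assuming the usual Pacemaker
autotools build; prerequisites and install paths vary by distribution):

```sh
git clone https://github.com/ClusterLabs/pacemaker.git
cd pacemaker
git checkout 2c148ac       # the commit containing the fix
./autogen.sh
./configure
make
sudo make install
```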

Anyone not using the controld agent is still encouraged to download and
test rc3, which has many improvements and is fairly close to what the
final release will be.
-- 
Ken Gaillot 



[ClusterLabs] FYI: Alert script permissions

2016-06-01 Thread Ken Gaillot
For anyone playing with the new alerts feature, there is one difference
from the old ClusterMon external scripts to be aware of.

Resource agents such as ClusterMon run as root, so ClusterMon's external
scripts also run as root.

The new alert scripts are run as the hacluster user. So if you are using
a ClusterMon script with the new alerts feature, be aware of permissions
issues. If an alert script needs elevated privileges, it is recommended
to use sudo. If you use SELinux, you may need to grant the hacluster
user access to files/devices/whatever needed by your script, as well as
the ability to execute the script itself.
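
As an illustration, a hypothetical alert script (not one shipped with
Pacemaker) using the sudo approach might look like this; the CRM_alert_*
environment variables are the ones Pacemaker passes to alert scripts:

```sh
#!/bin/sh
# Runs as the hacluster user; the one privileged step is delegated to
# sudo. Requires a matching sudoers entry, for example:
#   hacluster ALL=(root) NOPASSWD: /usr/bin/wall
case "$CRM_alert_kind" in
    node|fencing)
        # wall(1) may need more privilege than hacluster has
        sudo /usr/bin/wall "Cluster alert: $CRM_alert_desc"
        ;;
    *)
        # Unprivileged handling: append to the file configured as the
        # alert recipient (passed in as CRM_alert_recipient), which must
        # be writable by the hacluster user
        echo "$CRM_alert_kind: $CRM_alert_desc" >> "$CRM_alert_recipient"
        ;;
esac
```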

The new approach has obvious security benefits but may be less
convenient in some cases. If there is a need, we may add the ability to
configure an alert script's run-as user in a future version.
-- 
Ken Gaillot 



[ClusterLabs] Q: status section of CIB: "last_0" IDs and "queue-time"

2016-06-01 Thread Ulrich Windl
Hello!

I have a question:
Inspecting the CIB XML of our cluster, I noticed that there are several IDs ending 
with "last_0", and I wondered about them:
It seems those IDs are generated for start and stop operations, and I 
discovered one case where an ID is duplicated (the status entries are for 
different nodes, however, and one is a start operation while the other is a 
stop operation).

Background: I wrote a program that extracts the runtimes of operations from 
the CIB, like this:
prm_r00_fs_last_0 13464 stop
prm_r00_fs_last_0 61 start
prm_r00_fs_monitor_30 34 monitor
prm_r00_fs_monitor_30 43 monitor

The first word is the "id" attribute, the second is the "exec-time" attribute, 
and the last one (added to help myself out of confusion) is the "operation" 
attribute. Values are converted to milliseconds.
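
For comparison, a minimal sketch of that kind of extraction, run against a
fabricated one-op-per-line fragment (a real run would feed in `cibadmin
--query` output, where each lrm_rsc_op element carries "id", "operation"
and "exec-time" attributes):

```shell
# Fabricated status fragment; IDs and times are made up for illustration.
cat > /tmp/cib_ops.xml <<'EOF'
<lrm_rsc_op id="prm_r00_fs_last_0" operation="stop" exec-time="13464"/>
<lrm_rsc_op id="prm_r00_fs_last_0" operation="start" exec-time="61"/>
EOF

# Print the same three columns as above: id, exec-time, operation.
# (Relies on the fixed attribute order of this fragment; a real parser
# should use an XML tool such as xmllint rather than sed.)
sed -n 's/.*id="\([^"]*\)" operation="\([^"]*\)" exec-time="\([^"]*\)".*/\1 \3 \2/p' \
    /tmp/cib_ops.xml
```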

Is the name of the id intentional, or is it some mistake?

And another question: for an operation with "start-delay", it seems the start 
delay is simply added to the queue time (as if the operation had been waiting 
that long). Is that intentional?

Another program extracts queue and execution times for operations; the 
sorted result then looks like this:

1 27 prm_nfs_home_exp_last_0 monitor
1 39 prm_q10_ip_2_monitor_6 monitor
1 42 prm_e10_ip_2_monitor_6 monitor
1 58 prm_s01_ip_last_0 stop
1 74 prm_nfs_cbw_trans_exp_last_0 start
30001 1180 prm_stonith_sbd_monitor_18 monitor
30001 178 prm_c11_ascs_ers_monitor_6 monitor
30002 165 prm_c11_ascs_ers_monitor_45000 monitor

Regards,
Ulrich


