Ilan, you must first create some type of authentication mechanism for CES to 
work properly. If you want a quick-and-dirty approach that just uses your 
local /etc/passwd, try this:

/usr/lpp/mmfs/bin/mmuserauth service create --data-access-method file --type userdefined
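Once that completes, the export creation should go through. A rough sequence, assuming the CES address is reachable from the client (the client-side mount point and options below are illustrative, not from this thread):

```shell
# Configure "userdefined" file authentication (uses the node's local /etc/passwd)
/usr/lpp/mmfs/bin/mmuserauth service create --data-access-method file --type userdefined

# Confirm FILE access is now configured
/usr/lpp/mmfs/bin/mmuserauth service list

# Retry the export: read-write for all clients, NFSv3 and v4, no root squashing
mmnfs export add /fs_gpfs01 -c "*(Access_Type=RW,Protocols=3:4,Squash=no_root_squash)"

# From the client machine (illustrative mount point and server address):
# mount -t nfs -o vers=4 10.10.158.61:/fs_gpfs01 /mnt/fs_gpfs01
```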

Mark

-----Original Message-----
From: Ilan Schwarts [mailto:[email protected]]
Sent: Monday, July 24, 2017 5:37 AM
To: gpfsug main discussion list <[email protected]>
Subject: [gpfsug-discuss] export nfs share on gpfs with no authentication

Hi,
I have GPFS with 2 nodes (Red Hat).
I am trying to create an NFS share so that I can mount and access it from 
another Linux machine.

I receive the error: Current authentication: none is invalid.
What do I need to configure?
PLEASE NOTE: I don't have the SMB package at the moment, and I don't want 
authentication on the NFS export.

While trying to create the NFS export, I execute the following:
[root@LH20-GPFS1 ~]# mmnfs export add /fs_gpfs01 -c "*(Access_Type=RW,Protocols=3:4,Squash=no_root_squash)"

I receive the following error:
[root@LH20-GPFS1 ~]# mmnfs export add /fs_gpfs01 -c "*(Access_Type=RW,Protocols=3:4,Squash=no_root_squash)"
mmcesfuncs.sh: Current authentication: none is invalid.
This operation can not be completed without correct Authentication configuration.
Configure authentication using:   mmuserauth
mmnfs export add: Command failed. Examine previous error messages to determine cause.


[root@LH20-GPFS1 ~]# mmuserauth service list
FILE access not configured
PARAMETERS               VALUES
-------------------------------------------------

OBJECT access not configured
PARAMETERS               VALUES
-------------------------------------------------
[root@LH20-GPFS1 ~]#
Some additional information on the cluster:
==============================
[root@LH20-GPFS1 ~]# mmlsmgr
file system      manager node
---------------- ------------------
fs_gpfs01        10.10.158.61 (LH20-GPFS1)
Cluster manager node: 10.10.158.61 (LH20-GPFS1)
[root@LH20-GPFS1 ~]# mmgetstate -a
 Node number  Node name        GPFS state
------------------------------------------
       1      LH20-GPFS1       active
       3      LH20-GPFS2       active
[root@LH20-GPFS1 ~]# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         LH20-GPFS1
  GPFS cluster id:           10777108240438931454
  GPFS UID domain:           LH20-GPFS1
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name  IP address    Admin node name  Designation
--------------------------------------------------------------------
   1   LH20-GPFS1        10.10.158.61  LH20-GPFS1       quorum-manager
   3   LH20-GPFS2        10.10.158.62  LH20-GPFS2       quorum
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
