Re: [gpfsug-discuss] Change to default for verbsRdmaMinBytes?

2017-09-01 Thread Truong Vu

The discrepancy between the mmlsconfig view and mmdiag has been fixed in
GPFS 4.2.3.  Note that mmdiag reports the correct default value.
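
For anyone checking their own nodes, the two views can be compared with the
commands already used later in this thread (mmlsconfig shows the stored
configuration, mmdiag --config shows the value the daemon is actually using):

   mmlsconfig verbsRdmaMinBytes
   mmdiag --config | grep verbsRdmaMinBytes

On 4.2.2.x levels the two can disagree for a parameter left at its default;
as noted above, the mmdiag value is the correct one.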

Tru.



From:   gpfsug-discuss-requ...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Date:   09/01/2017 06:43 PM
Subject:    gpfsug-discuss Digest, Vol 68, Issue 2
Sent by:    gpfsug-discuss-boun...@spectrumscale.org



Send gpfsug-discuss mailing list submissions to
 gpfsug-discuss@spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit

        http://gpfsug.org/mailman/listinfo/gpfsug-discuss

or, via email, send a message with subject or body 'help' to
 gpfsug-discuss-requ...@spectrumscale.org

You can reach the person managing the list at
 gpfsug-discuss-ow...@spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."


Today's Topics:

   1. Re: GPFS GUI Nodes > NSD no data (Sobey, Richard A)
   2. Change to default for verbsRdmaMinBytes? (Edward Wahl)
   3. Quorum managers (Joshua Akers)
   4. Re: Change to default for verbsRdmaMinBytes? (Sven Oehme)


--

Message: 1
Date: Fri, 1 Sep 2017 13:36:56 +
From: "Sobey, Richard A" 
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] GPFS GUI Nodes > NSD no data
Message-ID:
Content-Type: text/plain; charset="us-ascii"

Resolved this, guessed at changing GPFSNSDDisk.period to 5.

From: gpfsug-discuss-boun...@spectrumscale.org [
mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Sobey,
Richard A
Sent: 01 September 2017 09:45
To: 'gpfsug-discuss@spectrumscale.org' 
Subject: [gpfsug-discuss] GPFS GUI Nodes > NSD no data

For some time now, if I go into the GUI and select Monitoring > Nodes > NSD
Server Nodes, the only columns with good data are Name, State and NSD Count.
Everything else, e.g. Avg Disk Wait Read, is listed as "N/A".

Is this another config option I need to enable? It's been bugging me for a
while; I don't think I've seen it work since 4.2.1, which was the first time
I saw the GUI.

Cheers
Richard

--

Message: 2
Date: Fri, 1 Sep 2017 16:56:25 -0400
From: Edward Wahl 
To: gpfsug main discussion list 
Subject: [gpfsug-discuss] Change to default for verbsRdmaMinBytes?
Message-ID: <20170901165625.6e4ed...@osc.edu>
Content-Type: text/plain; charset="US-ASCII"

Howdy.   Just noticed this change to min RDMA packet size and I don't seem to
see it in any patch notes.  Maybe I just skipped the one where this changed?

 mmlsconfig verbsRdmaMinBytes
verbsRdmaMinBytes 16384

(in case someone thinks we changed it)

[root@proj-nsd01 ~]# mmlsconfig |grep verbs
verbsRdma enable
verbsRdma disable
verbsRdmasPerConnection 14
verbsRdmasPerNode 1024
verbsPorts mlx5_3/1
verbsPorts mlx4_0
verbsPorts mlx5_0
verbsPorts mlx5_0 mlx5_1
verbsPorts mlx4_1/1
verbsPorts mlx4_1/2


Oddly I also see this in config, though I've seen these kinds of things
before.
mmdiag --config |grep verbsRdmaMinBytes
   verbsRdmaMinBytes 8192

We're on a recent efix.
Current GPFS build: "4.2.2.3 efix21 (1028007)".

--

Ed Wahl
Ohio Supercomputer Center
614-292-9302


--

Message: 3
Date: Fri, 01 Sep 2017 21:06:15 +
From: Joshua Akers 
To: "gpfsug-discuss@spectrumscale.org"
 
Subject: [gpfsug-discuss] Quorum managers
Message-ID:
 
Content-Type: text/plain; charset="utf-8"

Hi all,

I was wondering how most people set up quorum managers. We historically had
physical admin nodes be the quorum managers, but are switching to a
virtualized admin services infrastructure. We have been choosing a few
compute nodes to act as quorum managers in our client clusters, but have
considered using virtual machines instead. Has anyone else done this?

Regards,
Josh
--
*Joshua D. Akers*

*HPC Team Lead*
NI Systems Support (MC0214)
1700 Pratt Drive
Blacksburg, VA 24061
540-231-9506

Re: [gpfsug-discuss] Change to default for verbsRdmaMinBytes?

2017-09-01 Thread Sven Oehme
Hi Ed,

Yes, the default for that has changed for customers who had not overridden
it. The reason we did this is that many systems in the field, including all
ESS systems that come pre-tuned, were manually changed from the 16k default
to 8k because of better performance that was confirmed in multiple customer
engagements and tests with various settings. We therefore changed the default
to what it should be in the field, so people are no longer bothered to set it
(simplification) and get the benefit of the better-performing default.
All of this happened when we did the communication code overhaul, which led
to significant (think factors) improvements in RPC performance for RDMA and
VERBS workloads.
There is another round of significant enhancements coming soon that will make
even more parameters obsolete or change some of the defaults for better
out-of-the-box performance.
I see that we should probably communicate these changes better. Not that I
think this will have any negative effect compared to your performance with
the old setting; I am actually pretty confident that you will get better
performance with the new code. But setting parameters back to default on most
'manually tuned' systems will probably make things even faster.
If you have a Scale client on 4.2.3+, you really shouldn't have anything set
besides maxFilesToCache, pagepool, workerThreads and potentially prefetch
settings. If you are a protocol node, add to that the settings specific to an
export (e.g. SMB and NFS set some special settings). Pretty much everything
else these days should be left at its default so the code can pick the
correct parameters; if that's not the case and you get better performance by
manually tweaking something, I would like to hear about it.
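
As a minimal sketch (not taken from the release notes; verify the DEFAULT
keyword against the mmchconfig documentation for your level), reverting a
hand-tuned parameter so the code can pick its own value could look like this:

   # revert an explicit override back to the built-in default
   mmchconfig verbsRdmaMinBytes=DEFAULT
   # the parameter should no longer be listed as an explicit setting
   mmlsconfig | grep verbsRdmaMinBytes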
On the communication side, the next release will eliminate another set of
parameters that are now 'auto set', and we plan to work on the NSD side next.
I have presented various slides about the communication and simplification
changes in several forums; the latest public non-NDA slides I presented are
here:
http://files.gpfsug.org/presentations/2017/Manchester/08_Research_Topics.pdf

Hope this helps.

Sven



On Fri, Sep 1, 2017 at 1:56 PM Edward Wahl  wrote:

> Howdy.   Just noticed this change to min RDMA packet size and I don't seem
> to
> see it in any patch notes.  Maybe I just skipped the one where this
> changed?
>
>  mmlsconfig verbsRdmaMinBytes
> verbsRdmaMinBytes 16384
>
> (in case someone thinks we changed it)
>
> [root@proj-nsd01 ~]# mmlsconfig |grep verbs
> verbsRdma enable
> verbsRdma disable
> verbsRdmasPerConnection 14
> verbsRdmasPerNode 1024
> verbsPorts mlx5_3/1
> verbsPorts mlx4_0
> verbsPorts mlx5_0
> verbsPorts mlx5_0 mlx5_1
> verbsPorts mlx4_1/1
> verbsPorts mlx4_1/2
>
>
> Oddly I also see this in config, though I've seen these kinds of things
> before.
> mmdiag --config |grep verbsRdmaMinBytes
>verbsRdmaMinBytes 8192
>
> We're on a recent efix.
> Current GPFS build: "4.2.2.3 efix21 (1028007)".
>
> --
>
> Ed Wahl
> Ohio Supercomputer Center
> 614-292-9302
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Quorum managers

2017-09-01 Thread Joshua Akers
Hi all,

I was wondering how most people set up quorum managers. We historically had
physical admin nodes be the quorum managers, but are switching to a
virtualized admin services infrastructure. We have been choosing a few
compute nodes to act as quorum managers in our client clusters, but have
considered using virtual machines instead. Has anyone else done this?
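
As background, and purely as a sketch of the mechanics (node names below are
placeholders, not a recommendation for or against VMs): quorum and manager
roles are assigned per node, so moving them is a matter of re-designating the
nodes, e.g.:

   # designate three nodes as quorum and manager nodes
   mmchnode --quorum --manager -N vmq01,vmq02,vmq03
   # verify the current designations
   mmlscluster | grep -i quorum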

Regards,
Josh
-- 
*Joshua D. Akers*

*HPC Team Lead*
NI Systems Support (MC0214)
1700 Pratt Drive
Blacksburg, VA 24061
540-231-9506
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Change to default for verbsRdmaMinBytes?

2017-09-01 Thread Edward Wahl
Howdy.   Just noticed this change to min RDMA packet size and I don't seem to
see it in any patch notes.  Maybe I just skipped the one where this changed?

 mmlsconfig verbsRdmaMinBytes
verbsRdmaMinBytes 16384 

(in case someone thinks we changed it)

[root@proj-nsd01 ~]# mmlsconfig |grep verbs
verbsRdma enable
verbsRdma disable
verbsRdmasPerConnection 14
verbsRdmasPerNode 1024
verbsPorts mlx5_3/1
verbsPorts mlx4_0
verbsPorts mlx5_0
verbsPorts mlx5_0 mlx5_1
verbsPorts mlx4_1/1
verbsPorts mlx4_1/2


Oddly I also see this in config, though I've seen these kinds of things before.
mmdiag --config |grep verbsRdmaMinBytes
   verbsRdmaMinBytes 8192

We're on a recent efix. 
Current GPFS build: "4.2.2.3 efix21 (1028007)".

-- 

Ed Wahl
Ohio Supercomputer Center
614-292-9302
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS GUI Nodes > NSD no data

2017-09-01 Thread Sobey, Richard A
Resolved this, guessed at changing GPFSNSDDisk.period to 5.
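
For anyone hitting the same thing, a sketch of how that sensor interval can
be changed through the performance monitoring tooling (assuming the
ZIMon-based perfmon setup; exact syntax may vary by release):

   # show the current GPFSNSDDisk sensor settings
   mmperfmon config show | grep -A 3 GPFSNSDDisk
   # set the collection period to 5 seconds
   mmperfmon config update GPFSNSDDisk.period=5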

From: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Sobey, Richard A
Sent: 01 September 2017 09:45
To: 'gpfsug-discuss@spectrumscale.org' 
Subject: [gpfsug-discuss] GPFS GUI Nodes > NSD no data

For some time now, if I go into the GUI and select Monitoring > Nodes > NSD Server
Nodes, the only columns with good data are Name, State and NSD Count.
Everything else, e.g. Avg Disk Wait Read, is listed as "N/A".

Is this another config option I need to enable? It's been bugging me for a
while; I don't think I've seen it work since 4.2.1, which was the first time I
saw the GUI.

Cheers
Richard
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] SMB2 leases - oplocks - growing files

2017-09-01 Thread Bart Van Damme
We are a company located in Belgium that mainly implements Spectrum Scale
clusters in the media and broadcasting industry.

Currently we have a customer who wants to export the Scale file system over
Samba 4.5 and 4.6. In these versions SMB2 leases are activated by default to
enhance the oplock mechanism.

The problem is that when this option is not disabled, Adobe (and probably
Windows) is not notified that the size of the file has changed, so reading a
growing file in Adobe does not work and the timeline is not updated.

Has anybody had this issue before, and do you know how to solve it?



This is the smb.conf file:

# Global options
smb2 leases = yes
client use spnego = yes
clustering = yes
unix extensions = no
mangled names = no
ea support = yes
store dos attributes = yes
map readonly = no
map archive = yes
map system = no
force unknown acl user = yes
obey pam restrictions = no
deadtime = 480
disable netbios = yes
server signing = disabled
server min protocol = SMB2
smb encrypt = off

# We do not allow guest usage.
guest ok = no
guest account = nobody
map to guest = bad user

# disable printing
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes

# log settings
log file = /var/log/samba/log.%m
# max 500KB per log file, then rotate
max log size = 500
log level = 1 passdb:1 auth:1 winbind:1 idmap:1

# Share Definitions
[pfs]
comment = GPFS
path = /gpfs/pfs
valid users = @ug_numpr
writeable = yes
inherit permissions = yes
create mask = 664
force create mode = 664
nfs4:chown = yes
nfs4:acedup = merge
nfs4:mode = special
fileid:algorithm = fsname
vfs objects = shadow_copy2 gpfs fileid full_audit
full_audit:prefix = %u|%I|%m|%S
full_audit:success = rename unlink rmdir
full_audit:failure = none
full_audit:facility = local6
full_audit:priority = NOTICE
shadow:fixinodes = yes
gpfs:sharemodes = yes
gpfs:winattr = yes
gpfs:leases = no
locking = yes
posix locking = yes
oplocks = yes
kernel oplocks = no

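
As a test only (this is an assumption, not a fix confirmed in this thread):
since the behaviour is tied to the SMB2 lease mechanism, a common first step
is to turn leases off and check whether growing files refresh again, e.g.:

   # smb2 leases is a global option in Samba, so disable it under [global]
   smb2 leases = no
   # and/or disable oplocks for the affected share only
   [pfs]
   oplocks = no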

Grtz,

Bart

*Bart Van Damme *

*Customer Project Manager*

*SDNsquare*
Technologiepark 3,
9052 Zwijnaarde, Belgium
www.sdnsquare.com

T:  + 32 9 241 56 01
M: + 32 496 59 23 09



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] GPFS GUI Nodes > NSD no data

2017-09-01 Thread Sobey, Richard A
For some time now, if I go into the GUI and select Monitoring > Nodes > NSD Server
Nodes, the only columns with good data are Name, State and NSD Count.
Everything else, e.g. Avg Disk Wait Read, is listed as "N/A".

Is this another config option I need to enable? It's been bugging me for a
while; I don't think I've seen it work since 4.2.1, which was the first time I
saw the GUI.

Cheers
Richard
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss