Re: [gpfsug-discuss] How to use RHEL 7 mdadm NVMe devices with Spectrum Scale 4.2.3.10?

2018-11-15 Thread Greg.Lehmann
Hi Lance, We are doing it with BeeGFS (mdadm and NVMe drives in the same HW). For GPFS, have you updated the nsddevices sample script to look at the mdadm devices and put it in /var/mmfs/etc? BTW I'm interested to see how you go with that configuration. Cheers, Greg
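
For anyone trying the same thing, a minimal sketch of that user exit, modelled on the shipped sample (/usr/lpp/mmfs/samples/nsddevices.sample); the device-type keyword and return-code convention should be checked against your release:

  #!/bin/ksh
  # /var/mmfs/etc/nsddevices -- user exit that reports extra block
  # devices to GPFS device discovery, one "name type" pair per line.
  for dev in /dev/md*; do
    [ -b "$dev" ] || continue
    # Report the device relative to /dev with a generic device type
    echo "${dev#/dev/} generic"
  done
  # A non-zero exit tells GPFS to run its built-in discovery as well
  exit 1

Remember to mark it executable (chmod +x) on every NSD server that sees the md devices.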

Re: [gpfsug-discuss] wondering about outage free protocols upgrades

2018-03-08 Thread Greg.Lehmann
That last little bit, “not available today”, gives me hope. It would be nice to get there “one day.” Our situation is that we are using NFS for access to images that VMs run from. An outage means shutting down a lot of guests. An NFS outage of even short duration would result in the system disks of

Re: [gpfsug-discuss] wondering about outage free protocols upgrades

2018-03-07 Thread Greg.Lehmann
In theory it only affects SMB, but in practice, if NFS depends on winbind for authorisation, then it is affected too. I can understand the need for changes to happen every so often, and that maybe outages will be required then. But I would like to see some effort to avoid doing this

Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above?

2017-12-04 Thread Greg.Lehmann
We run GPFS client SW on SLES 12 SP2, which has a 4.4 kernel. It is only at 4.2.3-1 at present.

Re: [gpfsug-discuss] Online data migration tool

2017-11-26 Thread Greg.Lehmann
I personally don't think the lack of a migration tool is a problem. I do think that two format changes in such quick succession is a problem. I am willing to migrate occasionally, but then the amount of data we have in GPFS is still small. I do value my data, so I'd trust a manual migration using
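
For context on what a manual path looks like today, the on-disk format can at least be inspected and upgraded in place; a sketch, assuming a filesystem named fs0:

  # Show the current and highest supported filesystem format version
  mmlsfs fs0 -V
  # Move the format to the latest level the installed code supports
  # (one way only; back-level daemons can no longer mount it)
  mmchfs fs0 -V full

That only enables new features, though; it does not rewrite existing data structures, which is exactly why a format change that needs a real migration means copying the data out and back.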

Re: [gpfsug-discuss] el7.4 compatibility

2017-09-27 Thread Greg.Lehmann
I guess I may as well ask about SLES 12 SP3 as well! TIA.

Re: [gpfsug-discuss] NFS re-export with nfs-ganesha proxy?

2017-09-17 Thread Greg.Lehmann
I am interested too, so maybe keep it on list?

Re: [gpfsug-discuss] Support for SLES 12 SP3

2017-09-12 Thread Greg.Lehmann
+1. We are interested in SLES 12 SP3 too. BTW, has anybody done any comparisons of the SLES 12 SP2 (4.4) kernel vs RHEL 7.3 in terms of GPFS IO performance? I would think the 4.4 kernel might give it an edge. I'll probably get around to comparing them myself one day, but if anyone else has some

Re: [gpfsug-discuss] what is mmnfs under the hood

2017-08-06 Thread Greg.Lehmann
It would be nice to know why you cannot use Ganesha or mmsmb. You don't have to use protocols or CES. We are migrating to CES from doing our own thing with NFS and Samba on Debian. Debian does not have support for CES, so we had to roll our own. We did not use CNFS either. To get to CES we had
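
For anyone weighing up the same move, CES exports are driven through mmnfs rather than /etc/exports; a hedged sketch (path and client options are illustrative only, check the mmnfs man page for your release):

  # Create a Ganesha-backed NFS export on a path in the filesystem
  mmnfs export add /gpfs/fs0/projects \
      --client "10.0.0.0/24(Access_Type=RW,Squash=root_squash)"
  # List the exports CES is currently serving
  mmnfs export list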

Re: [gpfsug-discuss] connected v. datagram mode

2017-05-14 Thread Greg.Lehmann
I asked Mellanox about this nearly two years ago and was told that around the 500-node mark there would be a tipping point, and that datagram mode would be more useful after that. Memory utilisation was the issue. I've also seen references to smaller node counts more recently, as well as generic
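
For anyone hitting this thread later, the IPoIB mode can be flipped per interface at runtime; a minimal sketch, assuming an interface named ib0:

  # Check the current IPoIB transport mode
  cat /sys/class/net/ib0/mode
  # Connected mode allows a large MTU (up to 65520 bytes)
  echo connected > /sys/class/net/ib0/mode
  ip link set ib0 mtu 65520
  # Datagram mode is capped by the IB path MTU (typically 2044 bytes)
  echo datagram > /sys/class/net/ib0/mode
  ip link set ib0 mtu 2044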

Re: [gpfsug-discuss] NFS issues

2017-04-25 Thread Greg.Lehmann
Are you using InfiniBand or Ethernet? I'm wondering if IBM have solved the gratuitous ARP issue which we see with our non-protocols NFS implementation.
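
For anyone debugging the same symptom, the failover behaviour hinges on the takeover node announcing the moved address; a sketch of testing that by hand (interface and address are placeholders):

  # Send gratuitous ARP replies announcing 192.0.2.10 on eth0 so
  # clients and switches refresh their ARP caches after a failover
  arping -A -c 3 -I eth0 192.0.2.10
  # Watch from a client to confirm the announcements arrive
  tcpdump -n -i eth0 arp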

Re: [gpfsug-discuss] question on viewing block distribution across NSDs

2017-03-29 Thread Greg.Lehmann
I was going to keep mmdf in mind, not gpfs.snap. I will now also keep in mind that mmdf can have an impact, as at present we have spinning disk for metadata. The system I am playing around on is not production yet, so I am safe for the moment. Thanks again.
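
For the archive, the per-NSD breakdown comes straight from mmdf; a minimal sketch, assuming a filesystem named fs0:

  # Show free and total blocks per NSD, in human-readable units
  mmdf fs0 --block-size auto
  # Restrict the report to one disk if the full scan is too costly
  mmdf fs0 -d nsd001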

Re: [gpfsug-discuss] question on viewing block distribution across NSDs

2017-03-29 Thread Greg.Lehmann
Thanks. I don't have a snap. I'll keep that in mind for the next time I do this.

Re: [gpfsug-discuss] Two new whitepapers published

2016-10-31 Thread Greg.Lehmann
Thanks Yuri. These were great. I'm not trying to be impertinent, but I have one suggestion: if you can find the time, add some diagrams to help readers visualise the various data structures and layouts. I am thinking along the lines of what was in "The Magic Garden Explained" and "The Design

Re: [gpfsug-discuss] Virtualized Spectrum Scale

2016-10-25 Thread Greg.Lehmann
The SRP work was done a few years ago now. We use the same SRP code for both physical and virtual, so I am guessing it has nothing to do with the SR-IOV side of things. Somebody else did the work, so I will try and get an answer for you. I agree performance and stability are good with physical

Re: [gpfsug-discuss] Virtualized Spectrum Scale

2016-10-25 Thread Greg.Lehmann
We use KVM running on a Debian host, with CentOS guests. Storage is zoned from our DDN InfiniBand array to the host and then passed through to the guests. We would like to zone it directly to the guests' SR-IOV IB HCAs, but SRP seems to be a bit of a dead code tree. We had to do a bit of work to get
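
For completeness, the host-side passthrough amounts to handing the zoned block device to the guest; a sketch with virsh (domain and device names are placeholders):

  # Pass a host block device (the LUN zoned to the hypervisor)
  # through to a running guest as a virtio disk
  virsh attach-disk centos-guest /dev/mapper/mpatha vdb \
      --targetbus virtio --persistent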

Re: [gpfsug-discuss] Forcing which node gets expelled?

2016-10-25 Thread Greg.Lehmann
We hit something like this due to a bug in GSKit. We all thought it was networking at first, and it took me a fair bit of time to check all that. We have 7 NSD servers and around 400 clients running 4.2.0.4. We are just trying a workaround now that looks promising. The bug will be fixed at some

Re: [gpfsug-discuss] Blocksize

2016-09-28 Thread Greg.Lehmann
Are there any presentations available online that provide diagrams of the directory/file creation process and modifications, in terms of how the blocks, inodes, indirect blocks, etc. are used? I would guess there are a few different cases that would need to be shown. This is the sort of thing

Re: [gpfsug-discuss] Blocksize

2016-09-28 Thread Greg.Lehmann
I am wondering what people use to produce a file size distribution report for their filesystems. Has everyone rolled their own, or is there some go-to app to use? Cheers, Greg
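
In case it helps anyone searching the archive later, one roll-your-own approach is a deferred policy scan plus a short awk histogram; a sketch, assuming a filesystem mounted at /gpfs/fs0 (rule name and temp paths are arbitrary, and the field position of the SHOW() value should be verified against your mmapplypolicy output):

  # Policy that lists every file together with its size in bytes
  cat > /tmp/filesize.pol <<'EOF'
  RULE 'listall' LIST 'allfiles' SHOW(VARCHAR(FILE_SIZE))
  EOF
  # Scan without acting on the list; -I defer keeps the output file
  mmapplypolicy /gpfs/fs0 -P /tmp/filesize.pol -I defer -f /tmp/scan
  # Bucket sizes into powers of two; the SHOW() value is field 4
  # (inode, gen, snap-id, show-value, --, path)
  awk '{ b = 2 ^ int(log($4 + 1) / log(2)); n[b]++ }
       END { for (s in n) print s, n[s] }' /tmp/scan.list.allfiles | sort -n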

Re: [gpfsug-discuss] mmcessmbchconfig command

2016-08-25 Thread Greg.Lehmann
I agree with an RFE.

[gpfsug-discuss] 4.2.1 documentation

2016-08-03 Thread Greg.Lehmann
I see only 4 PDFs now, with slightly different titles to the previous 5 PDFs available with 4.2.0. Just checking: are there only supposed to be 4 now? Greg

Re: [gpfsug-discuss] quota on secondary groups for a user?

2016-08-03 Thread Greg.Lehmann
The GID selection rules for account creation are Linux distribution specific. It sounds like you are familiar with Red Hat, where I think this idea of GID=UID started.

  sles12sp1-brc:/dev/disk/by-uuid # useradd testout
  sles12sp1-brc:/dev/disk/by-uuid # grep testout /etc/passwd
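
On the quota half of the question, per-group usage can at least be reported directly, whatever the group's position in the user's group list; a minimal sketch, assuming a filesystem fs0 and a group named projects:

  # Report quota and usage for a specific group
  mmlsquota -g projects fs0
  # Set or edit that group's limits
  mmedquota -g projects

Note that GPFS charges usage to the file's owning group, so what matters is the GID stamped on the file, not whether it is the creating user's primary or secondary group.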

Re: [gpfsug-discuss] SLES 12 SP1 support for Spectrum Scale

2016-07-19 Thread Greg.Lehmann
You are right. An IBMer cleared it up for me.

[gpfsug-discuss] SLES 12 SP1 support for Spectrum Scale

2016-07-17 Thread Greg.Lehmann
Hi All, Given the issues with supporting RHEL 7.2, I am wondering about the latest SLES release and support. Is anybody actually running it on SLES 12 SP1? I've seen reference to a kernel version that is in SLES 12 SP1, but I'm not sure I trust it, as the same document also