[gpfsug-discuss] Spectrum Scale 4.2.1

2016-06-29 Thread Kenneth Waegeman
Hi all, At the user meeting in London a month ago, they announced that SS 4.2.1 would be released somewhere in June. Since I did not find any release notes or packages on the Passport Advantage portal, I guess it is not yet released? Does anyone know what the timing for this would be?

Re: [gpfsug-discuss] Spectrum Scale 4.2.1

2016-06-30 Thread Kenneth Waegeman
From: gpfsug-discuss-boun...@spectrumscale.org on behalf of Kenneth Waegeman [kenneth.waege...@ugent.be] Sent: 29 June 2016 16:38 To: gpfsug-discuss@spectrumscale.org Subject: [gpfsug-discuss] Spectrum Scale 4.2.1 Hi all, At the user meeting in London a mont

[gpfsug-discuss] Upgrade from 4.1.1 to 4.2.1

2016-08-03 Thread Kenneth Waegeman
Hi, In the upgrade procedure (prerequisites) of 4.2.1, I read: "If you are coming from 4.1.1-X, you must first upgrade to 4.2.0-0. You may use this 4.2.1-0 package to perform a First Time Install or to upgrade from an existing 4.2.0-X level." What does this mean exactly? Should we just instal
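For anyone following the same path, a minimal sketch of the two-step route on an RPM-based install might look like the lines below; the package handling is only hinted at, and the exact procedure should be taken from the 4.2.1 upgrade documentation rather than from this sketch:

    # check what is currently installed and the cluster release level
    rpm -qa | grep gpfs
    mmlsconfig minReleaseLevel

    # step 1: bring all nodes from 4.1.1-X to the 4.2.0 packages, restart GPFS,
    # then finalize the new level cluster-wide
    mmchconfig release=LATEST

    # step 2: only then apply the 4.2.1-0 packages on top of 4.2.0-X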

Re: [gpfsug-discuss] 4.2.1 documentation

2016-08-04 Thread Kenneth Waegeman
This is new; how they are merged is explained at http://www.ibm.com/support/knowledgecenter/STXKQY_4.2.1/com.ibm.spectrum.scale.v4r21.doc/bl1xx_soc.htm Cheers! K On 04/08/16 04:41, greg.lehm...@csiro.au wrote: I see only 4 PDFs now, with slightly different titles to the previous 5 PDFs av

[gpfsug-discuss] Disk can't be recovered due to uncorrectable read error in vdisk (GSS)

2016-10-17 Thread Kenneth Waegeman
Hi, Currently our file system is down due to down/unrecovered disks. We try to start the disks again with mmchdisk, but when we do, we see this error in our mmfs.log: Mon Oct 17 15:28:18.122 2016: [E] smallRead VIO failed due to uncorrectable read error in vdisk nsd11_MetaData_8M_3p_2 v
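For context, the commands involved usually look roughly like the sketch below; the file system name fs0 is assumed, the disk name is taken from the log excerpt, and on a GSS/GNR system the vdisk read error itself would also be looked at with the recovery group tooling (mmlsrecoverygroup, mmlspdisk) before forcing a start:

    # list the disks that are not in 'ready/up' state (file system name assumed)
    mmlsdisk fs0 -e

    # try to start one specific down disk, or all down disks with -a
    mmchdisk fs0 start -d "nsd11_MetaData_8M_3p_2"
    mmchdisk fs0 start -a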

Re: [gpfsug-discuss] bizarre performance behavior

2017-04-20 Thread Kenneth Waegeman
Hi, Having an issue that looks the same as this one: We can do sequential writes to the filesystem at 7.8 GB/s total, which is the expected speed for our current storage backend. While we have even better performance with sequential reads on raw storage LUNs, using GPFS we can only reach 1G
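As a rough illustration of the comparison being made (not the exact commands used in the thread), raw-LUN versus through-GPFS sequential read throughput can be sampled like this; the device path, mount point and sizes are invented:

    # sequential read directly from one storage LUN (device name assumed)
    dd if=/dev/mapper/lun01 of=/dev/null bs=8M count=4096 iflag=direct

    # sequential read of a large pre-written file through the GPFS mount point
    dd if=/gpfs/fs0/bench/bigfile of=/dev/null bs=8M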

Re: [gpfsug-discuss] bizarre performance behavior

2017-04-21 Thread Kenneth Waegeman
tem+(GPFS)/page/Testing+network+performance+with+nsdperf -Aaron On April 20, 2017 at 10:53:47 EDT, Kenneth Waegeman wrote: Hi, Having an issue that looks the same as this one: We can do sequential writes to the filesystem at 7.8 GB/s total, which is the expected speed for our current

Re: [gpfsug-discuss] bizarre performance behavior

2017-04-21 Thread Kenneth Waegeman
From: Kenneth Waegeman To: gpfsug main discussion list Date: 04/20/2017 04:53 PM Subject: Re: [gpfsug-discuss] bizarre performance behavior Sent by: gpfsug-discuss-boun...@spectrumscale.org Hi, Having an issue that looks the same as this one: We can do sequential writes to

Re: [gpfsug-discuss] bizarre performance behavior

2017-04-21 Thread Kenneth Waegeman
crew governor to up the frequency (which can affect throughput). If your frequency scaling governor isn't kicking up the frequency of your CPUs, I've seen that cause this behavior in my testing. -Aaron On April 21, 2017 at 05:43:40 EDT, Kenneth Waegeman wrote: Hi, We are running a t
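Checking whether frequency scaling is the culprit is plain Linux cpufreq work, nothing GPFS-specific; a quick sketch (requires root, and the cpupower utility must be installed):

    # show the active scaling governor on each core
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

    # as a test, pin the CPUs to the performance governor and rerun the benchmark
    cpupower frequency-set -g performance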

[gpfsug-discuss] restrict user quota on specific filesets

2017-08-04 Thread Kenneth Waegeman
Hi, Is it possible to let users write data only in filesets where some quota is explicitly set? We have independent filesets with quota defined for users that should have access in a specific fileset. The problem is when users using another fileset give e.g. global write access on their directo
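One way this is commonly approached (a sketch of the idea, not necessarily the solution settled on in the thread) is to enable per-fileset user quotas and grant an explicit limit only in the filesets a user should write to, with a very restrictive default quota elsewhere; the file system, fileset and user names below are invented:

    # explicit per-user block quota inside one independent fileset
    mmsetquota fs0:projectA --user alice --block 500G:550G

Combined with a near-zero default user quota in the other filesets (mmdefedquota / mmdefquotaon), writes there would fail unless an explicit quota has been granted.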

Re: [gpfsug-discuss] Change to default for verbsRdmaMinBytes?

2017-09-06 Thread Kenneth Waegeman
Hi Sven, I see two parameters that we have set to non-default values that are not in your list of options still to configure: verbsRdmasPerConnection (256) and socketMaxListenConnections (1024). I remember we had to set socketMaxListenConnections because our cluster consists of 550+ nodes. A
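Both values are ordinary mmchconfig tunables; for reference, inspecting and setting them looks like the sketch below (whether a daemon restart is needed for the new values to take effect should be checked in the documentation for each parameter):

    # show the current values
    mmlsconfig verbsRdmasPerConnection
    mmlsconfig socketMaxListenConnections

    # set them cluster-wide
    mmchconfig verbsRdmasPerConnection=256,socketMaxListenConnections=1024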

[gpfsug-discuss] el7.4 compatibility

2017-09-27 Thread Kenneth Waegeman
Hi, Is there already some information available about GPFS (and protocols) on el7.4? Thanks! Kenneth

Re: [gpfsug-discuss] el7.4 compatibility

2017-09-28 Thread Kenneth Waegeman
s I may as well ask about SLES 12 SP3 as well! TIA. -Original Message- From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Kenneth Waegeman Sent: Wednesday, 27 September 2017 6:17 PM To: gpfsug-discuss@spectrumscale.org Subject: [

[gpfsug-discuss] system.log pool on client nodes for HAWC

2018-08-28 Thread Kenneth Waegeman
Hi all, I was looking into HAWC, using the 'distributed fast storage in client nodes' method (https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_hawc_using.htm). This is achieved by putting a local device on the clients in the system.log p
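For readers unfamiliar with that method: each client's local device is turned into an NSD that is placed in the system.log pool. A hedged sketch of such an NSD stanza is below; the device, NSD name and node name are invented, and any remaining stanza fields (usage, failure group) should be taken from the HAWC documentation linked above rather than from this example:

    %nsd:
      device=/dev/nvme0n1
      nsd=client01_log
      servers=client01
      pool=system.log

    # the stanza file is then fed to mmcrnsd and the NSDs added to the file system
    mmcrnsd -F hawc_clients.stanza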

Re: [gpfsug-discuss] system.log pool on client nodes for HAWC

2018-09-03 Thread Kenneth Waegeman
essential. HTH, -- Vasily Tarasov, Research Staff Member, Storage Systems Research, IBM Research - Almaden - Original message - From: Kenneth Waegeman Sent by: gpfsug-discuss-boun...@spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss

[gpfsug-discuss] mmfsck output

2018-11-26 Thread Kenneth Waegeman
Hi all, We had some leftover files with IO errors on a GPFS FS, so we ran an mmfsck. Does someone know what these mmfsck errors mean: Error in inode 38422 snap 0: has nlink field as 1 Error in inode 281057 snap 0: is unreferenced Attach inode to lost+found of fileset root filesetId 0? no Th
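For anyone hitting similar output: mmfsck can first be run in report-only mode and only then be allowed to repair; a minimal sketch, with the file system name assumed and noting that a full offline check requires the file system to be unmounted:

    # report-only pass: answer 'no' to every repair prompt
    mmfsck fs0 -n

    # repair pass: answer 'yes' to every prompt (reattaches unreferenced inodes
    # to lost+found, corrects bad link counts, and similar)
    mmfsck fs0 -y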

[gpfsug-discuss] support for 7.8 in 5.0.4.4?

2020-05-04 Thread Kenneth Waegeman
Hi all, I didn't see any updates in the FAQ yet about the new 5.0.4.4 release. Does this release support RHEL 7.8? Thank you! Kenneth

Re: [gpfsug-discuss] support for 7.8 in 5.0.4.4?

2020-05-04 Thread Kenneth Waegeman
On 04/05/2020 12:13, Jonathan Buzzard wrote: On 04/05/2020 10:11, Kenneth Waegeman wrote: Hi all, I didn't see any updates in the FAQ yet about the new 5.0.4.4 release. Does this release support RHEL 7.8? Has the fix for 7.7 with a kernel >= 3.10.0-1062.18.1.el7 been released ye