[gpfsug-discuss] IO performance of replicated GPFS filesystem

2015-11-30 Thread tomasz.wol...@ts.fujitsu.com
Hi All, I could use some help from the experts here :) Please correct me if I'm wrong: I suspect that GPFS filesystem READ performance is better when the filesystem is replicated to, e.g., two failure groups, where these failure groups are placed on separate RAID controllers. In this case WRITE perform
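
For context, the layout the question describes is usually built from two failure groups carrying two copies of data and metadata. A minimal sketch, assuming illustrative NSD, device, and node names (nsd_ctrlA, /dev/sdb, node1, and so on are not from the original post):

    %nsd: nsd=nsd_ctrlA device=/dev/sdb servers=node1 usage=dataAndMetadata failureGroup=1
    %nsd: nsd=nsd_ctrlB device=/dev/sdc servers=node2 usage=dataAndMetadata failureGroup=2

    mmcrnsd -F nsd.stanza
    mmcrfs gpfs0 -F nsd.stanza -m 2 -M 2 -r 2 -R 2

Whether reads are then spread across both replicas depends on the readReplicaPolicy setting (e.g. mmchconfig readReplicaPolicy=local), while every write has to be committed to both failure groups, which is the usual reason to expect a write penalty alongside any read gain.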

[gpfsug-discuss] Max NSD/filesystem size allowed in GPFS 4.1

2016-01-22 Thread tomasz.wol...@ts.fujitsu.com
Hi All, Could you please tell me what the maximum allowed size of an NSD is in GPFS 4.1? Also, are there any limitations on filesystem size? Thanks, Tomasz Wolski

[gpfsug-discuss] DMAPI multi-thread safe

2016-02-03 Thread tomasz.wol...@ts.fujitsu.com
Hi Experts :) Could you please tell me if the DMAPI implementation for GPFS is multi-thread safe? Are there any limitations on using multiple threads within a single DM application process? For example: DM events are processed by multiple threads, which call dm* functions for manipulating f
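
The pattern the question describes looks roughly like the following. This is a minimal sketch, assuming the standard XDSM/DMAPI calls shipped with GPFS (dmapi.h, -ldmapi); NUM_WORKERS and the session name are illustrative, and whether concurrent dm_get_events() calls on one session are actually safe is exactly what the post asks:

    /* Several worker threads pulling and answering DM events from one
     * shared session. */
    #include <dmapi.h>
    #include <pthread.h>
    #include <stddef.h>

    #define NUM_WORKERS 4
    static dm_sessid_t sid;              /* session shared by all workers */

    static void *worker(void *arg)
    {
        char buf[65536];
        size_t rlen;

        for (;;) {
            /* Each thread blocks waiting for its own batch of events. */
            if (dm_get_events(sid, 1, DM_EV_WAIT, sizeof(buf), buf, &rlen))
                break;
            dm_eventmsg_t *msg = (dm_eventmsg_t *)buf;
            /* ... inspect msg->ev_type, do the real work here ... */
            dm_respond_event(sid, msg->ev_token, DM_RESP_CONTINUE, 0, 0, NULL);
        }
        return NULL;
    }

    int main(void)
    {
        char *version;
        pthread_t tid[NUM_WORKERS];

        dm_init_service(&version);
        dm_create_session(DM_NO_SESSION, (char *)"dm-worker-pool", &sid);
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }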

[gpfsug-discuss] GPFS update path 4.1.1 -> 4.2.3

2017-05-29 Thread tomasz.wol...@ts.fujitsu.com
Hello, We are planning to integrate the new IBM Spectrum Scale version 4.2.3 into our software, but our current software release has version 4.1.1 integrated. We are wondering how node-at-a-time updates would work when a customer wants to update their cluster from version 4.1.1 to 4.2.3.
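
The usual node-at-a-time sequence looks roughly like the sketch below. Node and filesystem names are placeholders, and whether a direct 4.1.1 -> 4.2.3 hop is supported for rolling updates should be checked against IBM's coexistence tables rather than assumed:

    mmshutdown -N vlp1            # stop GPFS on one node
    # install the 4.2.3 packages, then rebuild the portability layer:
    /usr/lpp/mmfs/bin/mmbuildgpl
    mmstartup -N vlp1             # rejoin the cluster; repeat node by node
    # only once every node runs 4.2.3, commit the new level:
    mmchconfig release=LATEST
    mmchfs <fsname> -V full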

Re: [gpfsug-discuss] GPFS update path 4.1.1 -> 4.2.3

2017-05-31 Thread tomasz.wol...@ts.fujitsu.com
455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: "tomasz.wol...@ts.fujitsu.com" To: "gpfsug-discuss@spectrumscale.org

[gpfsug-discuss] Missing gpfs filesystem device under /dev/

2017-05-31 Thread tomasz.wol...@ts.fujitsu.com
Hello All, It seems that GPFS 4.2.3 no longer creates a block device under /dev for new filesystems - is this behavior intended? The manuals mention nothing about this change. For example, for GPFS filesystem gpfs100 with mountpoint /cache/100, /proc/mounts has the following entry:

[gpfsug-discuss] Is GPFS starting NSDs automatically?

2017-08-09 Thread tomasz.wol...@ts.fujitsu.com
Best regards, Tomasz Wolski Development Engineer NDC Eternus CS HE (ET2) FUJITSU Fujitsu Technology Solutions Sp. z o.o. Textorial Park Bldg C, ul. Fabryczna 17 90-344 Lodz, Poland E-mail: tomasz.wol...@ts.fujitsu.com

Re: [gpfsug-discuss] FW: [EXTERNAL] FLASH: IBM Spectrum Scale (GPFS) V4.1 and 4.2 levels: network reconnect function may result in file system corruption or undetected file data corruption (2017.10.09)

2017-10-10 Thread tomasz.wol...@ts.fujitsu.com
Hi, From what I understand, this does not affect FC SAN cluster configurations, but mostly NSD IO communication? Best regards, Tomasz Wolski From: gpfsug-discuss-boun...@spectrumscale.org [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Ben De Luca Sent: Wednesday, October 11, 2017

Re: [gpfsug-discuss] FW: [EXTERNAL] FLASH: IBM Spectrum Scale (GPFS) V4.1 and 4.2 levels: network reconnect function may result in file system corruption or undetected file data corruption (2017.10.09)

2017-10-25 Thread tomasz.wol...@ts.fujitsu.com
IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team.

[gpfsug-discuss] Inode scan optimization

2018-02-08 Thread tomasz.wol...@ts.fujitsu.com
Hello All, A full backup of a 2 billion inode Spectrum Scale file system on V4.1.1.16 takes 60 days. We are trying to optimize this, and using inode scans seems to help, even when we use a directory scan and the inode scan only for better stat performance (using gpfs_stat_ino
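
A walk over the inode file of the kind the post describes looks roughly like this minimal sketch, assuming the GPFS library API (gpfs.h, -lgpfs); the mount point is illustrative, and the gpfs_stat_ino* call the snippet is cut off at (presumably gpfs_stat_inode) would serve stat data for individual files out of the same scan:

    #include <gpfs.h>
    #include <stdio.h>

    int main(void)
    {
        /* Snapshot handle for the filesystem to be scanned. */
        gpfs_fssnap_handle_t *fs = gpfs_get_fssnaphandle_by_path("/gpfs/fs0");
        if (fs == NULL) { perror("fssnaphandle"); return 1; }

        gpfs_ino_t maxIno = 0;
        gpfs_iscan_t *scan = gpfs_open_inodescan(fs, NULL, &maxIno);
        if (scan == NULL) { perror("open_inodescan"); return 1; }

        /* The scan ends when gpfs_next_inode() hands back a NULL iattr. */
        const gpfs_iattr_t *attr;
        while (gpfs_next_inode(scan, maxIno, &attr) == 0 && attr != NULL) {
            /* e.g. select backup candidates by size or mtime here */
            printf("inode %u size %lld\n",
                   (unsigned)attr->ia_inode, (long long)attr->ia_size);
        }

        gpfs_close_inodescan(scan);
        gpfs_free_fssnaphandle(fs);
        return 0;
    }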

[gpfsug-discuss] [item 435]: GPFS fails to start - NSD thread configuration needs more threads

2018-03-16 Thread tomasz.wol...@ts.fujitsu.com
Hello GPFS Team, We are observing strange GPFS behavior during startup on a SLES12 node. In our test cluster, we reinstalled the VLP1 node with SLES 12 SP3 as a base, and when GPFS starts for the first time on this node, it complains about too few NSD threads: .. 2018-03-16_13:11:28.947+0100: GPF

Re: [gpfsug-discuss] [item 435]: GPFS fails to start - NSD thread configuration needs more threads

2018-03-16 Thread tomasz.wol...@ts.fujitsu.com
, you > were logging in and starting it via ssh the limit may be different than if it's > started from the gpfs.service unit because mmfsd effectively is running in > different cgroups in each case. > > Hope that helps! > > -Aaron > > On 3/16/18 10:25 AM, tomasz.wol...@ts
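
One way to check the difference Aaron describes is to compare the limits mmfsd actually inherits under each startup path; a rough sketch (the exact systemd property names depend on the version shipped with SLES 12):

    grep -i processes /proc/$(pidof mmfsd)/limits   # limits of the running daemon
    systemctl show gpfs.service -p TasksMax         # cap imposed by the service cgroup
    ulimit -u                                       # cap in the interactive ssh shell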