Hi All,
I could use some help from the experts here :) Please correct me if I'm wrong: I
suspect that GPFS filesystem READ performance is better when the filesystem is
replicated to, e.g., two failure groups, where these failure groups are placed
on separate RAID controllers. In that case, does WRITE performance suffer
accordingly, since every write has to be committed to both failure groups?
Hi All,
Could you please tell me what the maximum allowed size of an NSD is in GPFS 4.1?
Also, are there any limits on filesystem size?
Thanks,
Tomasz Wolski
Hi Experts :)
Could you please tell me whether the DMAPI implementation for GPFS is
multi-thread safe? Are there any limitations on using multiple threads within a
single DM application process?
For example: DM events are processed by multiple threads, which call dm*
functions for manipulating files.
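To make the question concrete, here is a minimal sketch of the pattern we have
in mind (our own code, not from the GPFS documentation; NWORKERS and the event
handling are placeholders, and whether dm_get_events() may be called
concurrently on one session is exactly what we are asking):

/*
 * Sketch: N worker threads share one DMAPI session; each blocks in
 * dm_get_events() and responds on its own.  Event dispositions must be
 * registered separately with dm_set_disp(), omitted here for brevity.
 * Build (paths may differ): gcc -I/usr/lpp/mmfs/include mt_dm.c -ldmapi -lpthread
 */
#include <dmapi.h>
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4   /* placeholder thread count */

static dm_sessid_t sid;

static void *worker(void *unused)
{
    union { dm_eventmsg_t hdr; char raw[4096]; } buf;
    size_t rlen;

    for (;;) {
        /* Wait for a single event on the shared session. */
        if (dm_get_events(sid, 1, DM_EV_WAIT, sizeof buf, &buf, &rlen) != 0)
            break;

        /* ... real work based on buf.hdr.ev_type would go here ... */

        /* Unblock the file operation that raised the event. */
        dm_respond_event(sid, buf.hdr.ev_token, DM_RESP_CONTINUE, 0, 0, NULL);
    }
    return NULL;
}

int main(void)
{
    char *version, info[] = "mt-example";
    pthread_t tid[NWORKERS];
    int i;

    if (dm_init_service(&version) != 0 ||
        dm_create_session(DM_NO_SESSION, info, &sid) != 0) {
        perror("DMAPI init");
        return 1;
    }
    for (i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}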
Hello,
We are planning to integrate the new IBM Spectrum Scale version 4.2.3 into our
software, but our current software release has version 4.1.1 integrated.
We are wondering what node-at-a-time updates would look like when a customer
wants to update his cluster from version 4.1.1 to 4.2.3.
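For what it's worth, our current understanding (please correct us if this is
wrong) is that 4.1.1 and 4.2.3 nodes can coexist in the cluster during a
node-at-a-time upgrade, and that the new function level is only committed
afterwards, once all nodes run 4.2.3, by issuing mmchconfig release=LATEST and
then mmchfs -V full per filesystem - the point of no return.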
Hello All,
It seems that GPFS 4.2.3 no longer creates a block device under /dev for new
filesystems - is this behavior intended?
The manuals do not mention this change.
For example, for the GPFS filesystem gpfs100 with mountpoint /cache/100,
/proc/mounts has the following entry:
Best regards,
Tomasz Wolski
Development Engineer
NDC Eternus CS HE (ET2)
FUJITSU
Fujitsu Technology Solutions Sp. z o.o.
Textorial Park Bldg C, ul. Fabryczna 17
90-344 Lodz, Poland
E-mail: tomasz.wol...@ts.fujitsu.com
Hi,
From what I understand, this does not affect FC SAN cluster configurations,
but mostly NSD I/O communication?
Best regards,
Tomasz Wolski
From: Ben De Luca
Sent: Wednesday, October 11, 2017
If your query concerns a potential software error in Spectrum Scale (GPFS) and
you have an IBM software maintenance contract, please contact 1-800-237-5511 in
the United States or your local IBM Service Center in other countries.
The forum is informally monitored as time permits and should not be used for
priority messages to the Spectrum Scale (GPFS) team.
Hello All,
A full backup of a 2-billion-inode Spectrum Scale file system on V4.1.1.16
takes 60 days.
We are trying to optimize this, and using inode scans seems to improve things,
even when we use a directory scan and employ the inode scan only to get better
stat performance (using gpfs_stat_inode()).
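For reference, the core of a plain inode scan looks roughly like this
(simplified sketch of our approach, not IBM sample code; exact prototypes and
gpfs_iattr_t field names should be checked against the gpfs.h shipped with your
level):

/*
 * Walk all allocated inodes of a filesystem in inode order.
 * gpfs_next_inode() already returns a gpfs_iattr_t, so no per-file
 * stat()/gpfs_stat_inode() call is needed on this path.
 * Build: gcc -I/usr/lpp/mmfs/include scan.c -lgpfs
 */
#include <gpfs.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    gpfs_fssnap_handle_t *fs;
    gpfs_iscan_t *scan;
    const gpfs_iattr_t *ia;
    gpfs_ino_t maxIno = 0;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <fs-path>\n", argv[0]);
        return 1;
    }
    if ((fs = gpfs_get_fssnaphandle_by_path(argv[1])) == NULL ||
        (scan = gpfs_open_inodescan(fs, NULL, &maxIno)) == NULL) {
        perror("open inode scan");
        return 1;
    }
    /* End of scan: rc 0 with a NULL iattr pointer. */
    while (gpfs_next_inode(scan, maxIno, &ia) == 0 && ia != NULL)
        printf("%lld %lld\n", (long long)ia->ia_inode,
               (long long)ia->ia_size);

    gpfs_close_inodescan(scan);
    gpfs_free_fssnaphandle(fs);
    return 0;
}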
Hello GPFS Team,
We are observing strange behavior of GPFS during startup on a SLES 12 node.
In our test cluster, we reinstalled the VLP1 node with SLES 12 SP3 as a base,
and when GPFS starts for the first time on this node, it complains about
too few NSD threads:
..
2018-03-16_13:11:28.947+0100: GPF
> If you were logging in and starting it via ssh, the limit may be different
> than if it's started from the gpfs.service unit, because mmfsd is effectively
> running in different cgroups in each case.
>
> Hope that helps!
>
> -Aaron
>
> On 3/16/18 10:25 AM, tomasz.wol...@ts.fujitsu.com wrote:
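Following up on Aaron's explanation: a tiny standalone check (our own,
hypothetical) makes the difference visible; run it once from an ssh shell and
once from a systemd unit and compare the output.

/* Print the resource limits this process actually inherited. */
#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int res)
{
    struct rlimit rl;
    if (getrlimit(res, &rl) == 0)
        printf("%-8s soft=%llu hard=%llu\n", name,
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
}

int main(void)
{
    show("nofile",  RLIMIT_NOFILE);   /* open file descriptors */
    show("nproc",   RLIMIT_NPROC);    /* threads/processes */
    show("memlock", RLIMIT_MEMLOCK);  /* locked memory */
    return 0;
}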