Re: [gpfsug-discuss] Running the Spectrum Scale on a Compute-only Cluster ?

2022-03-11 Thread Kumaran Rajaram
ets + show their health status. Cheers, -Kums Kumaran Rajaram From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Kidger, Daniel Sent: Friday, March 11, 2022 12:34 PM To: gpfsug-discuss@spectrumscale.org Subject: [gpfsug-discuss] Running the Spectrum S
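For context, a hedged example of listing cluster components and showing their health from any member node (assuming a release with the mmhealth framework; the node name is a placeholder):
# mmhealth cluster show
# mmhealth node show -N someNodeName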

Re: [gpfsug-discuss] IO sizes

2022-02-24 Thread Kumaran Rajaram
137710.43 137709.96 275420.39 --- --- --- Total 137711.70 137713.23 275424.92 My two cents, -Kums Kumaran Rajaram From: gpfsug-discuss-boun...@spectrumscale.org On Beha

Re: [gpfsug-discuss] du --apparent-size and quota

2021-06-01 Thread Kumaran Rajaram
Hi, >> If I'm not mistaken, even with SS5-created filesystems, a 1 MiB FS block size >> implies 32 KiB sub-blocks (32 sub-blocks). Just to add: The /srcfilesys seems to have been created with GPFS version 4.x, which supports only 32 sub-blocks per block. -T /srcfilesys
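As an illustration of apparent size versus allocated space (plain coreutils; the file name is a placeholder, and allocation is rounded up to whole sub-blocks):
# du --apparent-size --block-size=1 somefile   (bytes the file logically contains)
# du --block-size=1 somefile                   (bytes actually allocated on disk)
# stat -c '%s bytes apparent, %b blocks of %B bytes allocated' somefile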

Re: [gpfsug-discuss] Spectrum Scale - how to get RPO=0

2021-05-24 Thread Kumaran Rajaram
Hi Tom, >>we are trying to implement a mixed linux/windows environment and we have one >>thing at the top - is there any global method to avoid asynchronous I/O and >>write everything in >>synchronous mode? If the local and remote sites have good inter-site network bandwidth and low-latency,
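As a rough sketch only (not necessarily the approach the full reply recommends): with good inter-site bandwidth and low latency, synchronous RPO=0 is commonly achieved by keeping two replicas of data and metadata in different failure groups, one per site. The file system name below is a placeholder:
# mmchfs fs1 -m 2 -r 2      (default to two metadata and two data replicas)
# mmrestripefs fs1 -R       (apply the new replication settings to existing files)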

Re: [gpfsug-discuss] Client Latency and High NSD Server Load Average

2020-06-04 Thread Kumaran Rajaram
dg_performanceissues.htm Thanks and Regards, -Kums Kumaran Rajaram Spectrum Scale Development, IBM Systems k...@us.ibm.com From: "Frederick Stock" To: gpfsug-discuss@spectrumscale.org Cc: gpfsug-discuss@spectrumscale.org Date: 06/04/2020 07:08 AM Subject:[EXT

Re: [gpfsug-discuss] How to prove that data is in inode

2019-07-17 Thread Kumaran Rajaram
Hi, >> How can I prove that data of a small file is stored in the inode (and not on a data NSD)? You may use echo "inode file_inode_number" | tsdbfs fs_device | grep indirectionLevel and if it points to INODE, then the file's data is stored in the inode. # 4K Inode Size # mmlsfs gpfs3a | grep 'Inode
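A hedged end-to-end illustration (file path, file system device, and inode number are placeholders):
# ls -i /gpfs/fs1/tinyfile                         (note the inode number)
# echo "inode 21893" | tsdbfs fs1 | grep indirectionLevel
If indirectionLevel reports INODE, the file's data lives in the inode; any other level means it occupies data blocks on NSDs.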

Re: [gpfsug-discuss] verbs status not working in 5.0.2

2019-06-11 Thread Kumaran Rajaram
Hi, This issue is resolved in the latest 5.0.3.1 release. # mmfsadm dump version | grep Build Build branch "5.0.3.1 ". # mmfsadm test verbs status VERBS RDMA status: started Regards, -Kums From: Ryan Novosielski To: "gpfsug-discuss@spectrumscale.org" Date:

Re: [gpfsug-discuss] NSD network checksums (nsdCksumTraditional)

2018-10-29 Thread Kumaran Rajaram
s has used it as an ESS up-sell opportunity. -- Stephen On Oct 29, 2018, at 3:56 PM, Kumaran Rajaram wrote: Hi, >>How can it be that the I/O performance degradation warning only seems to accompany the nsdCksumTraditional setting and not GNR? >>Why is there such a penalty for "traditional" environments?

Re: [gpfsug-discuss] NSD network checksums (nsdCksumTraditional)

2018-10-29 Thread Kumaran Rajaram
Hi, >>How can it be that the I/O performance degradation warning only seems to accompany the nsdCksumTraditional setting and not GNR? >>Why is there such a penalty for "traditional" environments? In GNR IO/NSD servers (ESS IO nodes), the checksums are computed in parallel for an NSD (storage
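For reference, a hedged example of enabling and checking the setting under discussion, cluster-wide here (-i applies it immediately where the parameter allows; otherwise it takes effect on daemon restart). The performance trade-off is exactly the one described above:
# mmchconfig nsdCksumTraditional=yes -i
# mmlsconfig nsdCksumTraditional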

Re: [gpfsug-discuss] Tuning: single client, single thread, small files - native Scale vs NFS

2018-10-15 Thread Kumaran Rajaram
Hi Alexander, 1. >>When writing to GPFS directly I'm able to write ~1800 files / second in a test setup. >>This is roughly the same on the protocol nodes (NSD client), as well as on the ESS IO nodes (NSD server). 2. >> When writing to the NFS export on the protocol node itself (to avoid any
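A minimal sketch of the kind of single-stream small-file test being compared here, run once against the GPFS mount and once against the NFS mount (directory and file count are placeholders):
# cd /gpfs/fs1/testdir
# time bash -c 'for i in $(seq 1 10000); do dd if=/dev/zero of=f$i bs=16k count=1 status=none; done'
Dividing the file count by the elapsed time gives the files-per-second figure quoted above.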

Re: [gpfsug-discuss] What NSDs does a file have blocks on?

2018-07-09 Thread Kumaran Rajaram
Hi Kevin, >>I want to know what NSDs a single file has its blocks on? You may use /usr/lpp/mmfs/samples/fpo/mmgetlocation to obtain the file-to-NSD block layout map. Use the -h option for this tool's usage (mmgetlocation -h). Sample output is below: # File-system block size is 4MiB and
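A hedged usage sketch (check mmgetlocation -h first, since the options of this sample tool can vary by release; the file path is a placeholder):
# /usr/lpp/mmfs/samples/fpo/mmgetlocation -h
# /usr/lpp/mmfs/samples/fpo/mmgetlocation -f /gpfs/fs1/bigfile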

Re: [gpfsug-discuss] Lroc on NVME

2018-06-12 Thread Kumaran Rajaram
Hi, >>Yes, older versions of GPFS don't recognize /dev/nvme*. So you would need the /var/mmfs/etc/nsddevices user exit. >>On newer GPFS versions, the nvme devices are also generic, but has anyone else tried to get LROC running on NVMe, and how well does it work? IMHO, the support to recognize
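A minimal sketch of such a user exit, modelled on the shipped /usr/lpp/mmfs/samples/nsddevices.sample (check that sample for the exact return-code semantics and device-type keyword on your release):
#!/bin/ksh
# /var/mmfs/etc/nsddevices -- emit "deviceName deviceType" pairs so GPFS discovers NVMe namespaces
for dev in /dev/nvme*n*
do
  [ -b "$dev" ] && echo "${dev#/dev/} generic"
done
# the return code controls whether GPFS also runs its built-in discovery; see the shipped sample
return 0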

Re: [gpfsug-discuss] GPFS 4.2.3.4 question

2017-08-27 Thread Kumaran Rajaram
Hi Kevin, >> Thanks - important followup question … does 4.2.3.4 contain the fix for the mmrestripefs data loss bug that was announced last week? Thanks again… I presume, by "mmrestripefs data loss bug" you are referring to APAR IV98609 (link below)? If yes, 4.2.3.4 contains the fix for APAR
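To confirm the level actually running on a node before relying on the fix, either of these is commonly used:
# mmdiag --version
# mmfsadm dump version | grep Build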

Re: [gpfsug-discuss] Shared nothing (FPO) throughout / bandwidth sizing

2017-08-25 Thread Kumaran Rajaram
Hi, >>I was wondering if there are any good performance sizing guides for a spectrum scale shared nothing architecture (FPO)? >> I don't have any production experience using spectrum scale in a "shared nothing configuration " and was hoping for bandwidth / throughput sizing guidance. Please

Re: [gpfsug-discuss] Baseline testing GPFS with gpfsperf

2017-07-26 Thread Kumaran Rajaram
ad performance if your workload is comprised of large file access - if your users are actually doing a lot of medium or small files, that changes the results dramatically as you end up possibly pounding on metadata more than the actual data
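A hedged example of a large-file sequential baseline with the bundled tool (build it first with make in /usr/lpp/mmfs/samples/perf if needed; path, sizes, and thread count are placeholders):
# /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/fs1/perf/testfile -n 32g -r 4m -th 8 -fsync
# /usr/lpp/mmfs/samples/perf/gpfsperf read seq /gpfs/fs1/perf/testfile -n 32g -r 4m -th 8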

Re: [gpfsug-discuss] get free space in GSS

2017-07-09 Thread Kumaran Rajaram
Hi Atmane, >> I cannot find the free space Based on your output below, your setup currently has two recovery groups, BB1RGL and BB1RGR. Issue "mmlsrecoverygroup BB1RGL -L" and "mmlsrecoverygroup BB1RGR -L" to obtain the free space in each declustered array (DA). Based on your "mmlsrecoverygroup BB1RGL -L" output
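For orientation, a hedged sequence (the recovery group names are taken from the quoted output; run the commands on a node that serves them):
# mmlsrecoverygroup                 (list all recovery groups in the cluster)
# mmlsrecoverygroup BB1RGL -L       (per-declustered-array detail, including the free space column)
# mmlsrecoverygroup BB1RGR -L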

Re: [gpfsug-discuss] IO prioritisation / throttling?

2017-06-23 Thread Kumaran Rajaram
Hi John, >>We have a GPFS setup using Fujitsu filers and Mellanox InfiniBand. >>The desire is to set up an environment for test and development where, if IO ‘runs wild’, it will not bring down >>the production storage. You may use the Spectrum Scale Quality of Service for I/O "mmchqos" command
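A hedged sketch, assuming the test/development workload lives in its own file system (fs1 and the IOPS figure are placeholders; see the mmchqos and mmlsqos man pages for the classes your release supports):
# mmchqos fs1 --enable pool=*,maintenance=unlimited,other=5000IOPS
# mmlsqos fs1                       (observe the IOPS each class is actually consuming)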

Re: [gpfsug-discuss] 4.2.3.x and sub-block size

2017-06-14 Thread Kumaran Rajaram
Hi, >>Back at SC16 I was told that GPFS 4.2.3.x would remove the “a sub-block is 1/32nd of the block size” restriction. However, I have installed GPFS 4.2.3.1 on my test cluster and in the man page for mmcrfs I still see: >>So has the restriction been removed? If not, is there an update on
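Whatever release a file system was created at, the values actually in effect can be read directly (file system name is a placeholder):
# mmlsfs fs1 -B -f -V               (block size, minimum fragment/sub-block size, and file system format version)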

Re: [gpfsug-discuss] Well, this is the pits...

2017-05-04 Thread Kumaran Rajaram
anager it is set to zero. No matter how many times I add zero to zero I don’t get a value > 31! ;-) So I take it that zero has some sort of unspecified significance? Thanks… Kevin On May 4, 2017, at 11:49 AM, Kumaran Rajaram <k...@us.ibm.com> wrote: Hi, >>I’m running 4.2.2.3

Re: [gpfsug-discuss] Well, this is the pits...

2017-05-04 Thread Kumaran Rajaram
Hi, >>I’m running 4.2.2.3 on my GPFS servers (some clients are on 4.2.1.1 or 4.2.0.3 and are gradually being upgraded). What version of GPFS fixes this? With what I’m doing I need the ability to run mmrestripefs. GPFS version 4.2.3.0 (and above) fixes this issue and supports "sum of
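A hedged illustration of inspecting and adjusting the parameter in question on the nodes taking part in the restripe (node class and value are placeholders; check whether your level accepts -i for immediate effect or needs a daemon restart):
# mmlsconfig pitWorkerThreadsPerNode
# mmchconfig pitWorkerThreadsPerNode=8 -N nsdNodes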

Re: [gpfsug-discuss] RAID config for SSD's - potential pitfalls

2017-04-19 Thread Kumaran Rajaram
Hi, >> As I've mentioned before, RAID choices for GPFS are not so simple. Here are a couple points to consider, I'm sure there's more. And if I'm wrong, someone will please correct me - but I believe the two biggest pitfalls are: >>Some RAID configurations (classically 5 and 6) work best
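A worked example of the alignment concern, with illustrative numbers only: on a RAID 6 8+2P array with a 128 KiB strip size, a full stripe is 8 x 128 KiB = 1 MiB, so a GPFS block size of 1 MiB (or a multiple of it) lets each data block be written as full stripes, while a smaller block size forces read-modify-write cycles on the parity.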

Re: [gpfsug-discuss] question on viewing block distribution across NSDs

2017-03-30 Thread Kumaran Rajaram
Hi, Yes, you could use "mmdf" to obtain file-system "usage" across the NSDs (comprising the file-system). If you want to obtain "data block distribution corresponding to a file across the NSDs", then there is a utility "mmgetlocation" in /usr/lpp/mmfs/samples/fpo that can be used to get
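Hedged examples (file system and pool names are placeholders; mmgetlocation usage is sketched under the earlier "What NSDs does a file have blocks on?" entry):
# mmdf fs1 --block-size auto        (per-NSD and per-pool usage for the file system)
# mmdf fs1 -P somePool              (restrict the report to one storage pool)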