ets + show their health status.
Cheers,
-Kums
Kumaran Rajaram
From: gpfsug-discuss-boun...@spectrumscale.org
On Behalf Of Kidger, Daniel
Sent: Friday, March 11, 2022 12:34 PM
To: gpfsug-discuss@spectrumscale.org
Subject: [gpfsug-discuss] Running the Spectrum S
137710.43 137709.96 275420.39
--- --- ---
Total 137711.70 137713.23 275424.92
My two cents,
-Kums
Kumaran Rajaram
From: gpfsug-discuss-boun...@spectrumscale.org
On Beha
Hi,
>> If I'm not mistaken even with SS5 created filesystems, 1 MiB FS block size
>> implies 32 kiB sub blocks (32 sub-blocks).
Just to add: The /srcfilesys seemed to have been created with GPFS version 4.x
which supports only 32 sub-blocks per block.
-T /srcfilesys
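As a quick check on a Scale 5 node (a minimal sketch; it assumes the file system device is named srcfilesys and that this level of mmlsfs supports the --subblocks-per-full-block attribute):
# mmlsfs srcfilesys -B -f --subblocks-per-full-block
A 4.x-created file system with a 1 MiB block size should report a 32 KiB fragment (sub-block) size and 32 sub-blocks per full block.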
Hi Tom,
>>we are trying to implement a mixed linux/windows environment and we have one
>>thing at the top - is there any global method to avoid asynchronous I/O and
>>write everything in synchronous mode?
If the local and remote sites have good inter-site network bandwidth and
low-latency,
dg_performanceissues.htm
Thanks and Regards,
-Kums
Kumaran Rajaram
Spectrum Scale Development, IBM Systems
k...@us.ibm.com
From: "Frederick Stock"
To: gpfsug-discuss@spectrumscale.org
Cc: gpfsug-discuss@spectrumscale.org
Date: 06/04/2020 07:08 AM
Subject:[EXT
Hi,
>> How can I prove that data of a small file is stored in the inode (and
not on a data nsd)?
You may use echo "inode file_inode_number" | tsdbfs fs_device | grep
indirectionLevel and if it points to INODE, then the file is stored in the
inodes
# 4K Inode Size
# mmlsfs gpfs3a | grep 'Inode size'
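For example (the file name and inode number below are made up for illustration; tsdbfs must be run as root on a node with the file system mounted):
# ls -i /gpfs3a/smallfile.txt
19456 /gpfs3a/smallfile.txt
# echo "inode 19456" | tsdbfs gpfs3a | grep indirectionLevel
If the indirectionLevel line reports INODE, the data of smallfile.txt is held in its inode rather than on a data NSD.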
Hi,
This issue is resolved in the latest 5.0.3.1 release.
# mmfsadm dump version | grep Build
Build branch "5.0.3.1 ".
# mmfsadm test verbs status
VERBS RDMA status: started
Regards,
-Kums
From: Ryan Novosielski
To: "gpfsug-discuss@spectrumscale.org"
Date:
s has used it as an ESS up-sell
opportunity.
--
Stephen
On Oct 29, 2018, at 3:56 PM, Kumaran Rajaram wrote:
Hi,
>>How can it be that the I/O performance degradation warning only seems to
accompany the nsdCksumTraditional setting and not GNR?
>>Why is there such a penalty for &quo
Hi,
>>How can it be that the I/O performance degradation warning only seems to
accompany the nsdCksumTraditional setting and not GNR?
>>Why is there such a penalty for "traditional" environments?
In GNR IO/NSD servers (ESS IO nodes), the checksums are computed in
parallel for a NSD (storage
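For reference, the traditional checksum path is toggled per node with mmchconfig; a minimal sketch (the node class name below is an assumption, not a value from this thread):
# mmlsconfig nsdCksumTraditional
# mmchconfig nsdCksumTraditional=yes -N nsdClientNodes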
Hi Alexander,
1. >>When writing to GPFS directly I'm able to write ~1800 files / second
in a test setup.
>>This is roughly the same on the protocol nodes (NSD client), as well as
on the ESS IO nodes (NSD server).
2. >> When writing to the NFS export on the protocol node itself (to avoid
any
Hi Kevin,
>>I want to know what NSDs a single file has its’ blocks on?
You may use /usr/lpp/mmfs/samples/fpo/mmgetlocation to obtain the
file-to-NSD block layout map. Use the -h option for this tool's usage
(mmgetlocation -h).
Sample output is below:
# File-system block size is 4MiB and
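A hypothetical invocation (the file path is made up, and the exact option letters should be confirmed with mmgetlocation -h):
# /usr/lpp/mmfs/samples/fpo/mmgetlocation -f /gpfs3a/bigfile.dat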
Hi,
>>Yes, older versions of GPFS don't recognize /dev/nvme*. So you would need the /var/mmfs/etc/nsddevices user exit.
>>On newer GPFS versions, the nvme devices are also generic
but has anyone else tried to get lroc running on nvme and how well does it work.
IMHO, the support to recognize
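For older releases, a minimal sketch of such an nsddevices user exit, modeled on /usr/lpp/mmfs/samples/nsddevices.sample (the "generic" device type and the return-code convention should be verified against your release):
#!/bin/ksh
# /var/mmfs/etc/nsddevices - report NVMe namespaces to GPFS device discovery
for dev in /dev/nvme*n1
do
    [ -e "$dev" ] && echo "$(basename $dev) generic"
done
# Return 0 so mmdevdiscover also continues with its built-in discovery.
return 0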
Hi Kevin,
>> Thanks - important followup question … does 4.2.3.4 contain the fix for
the mmrestripefs data loss bug that was announced last week? Thanks
again…
I presume, by "mmrestripefs data loss bug" you are referring to APAR
IV98609 (link below)? If yes, 4.2.3.4 contains the fix for APAR
Hi,
>>I was wondering if there are any good performance sizing guides for a
spectrum scale shared nothing architecture (FPO)?
>> I don't have any production experience using spectrum scale in a
"shared nothing configuration " and was hoping for bandwidth / throughput
sizing guidance.
Please
ad performance if your workload is comprised of large file access - if your
users are actually doing a lot of medium or small files, that changes the
results dramatically as you end up possibly pounding on metadata more than
the actual data
Hi Atmane,
>> I can not find the free space
Based on your output below, your setup currently has two recovery groups
BB1RGL and BB1RGR.
Issue "mmlsrecoverygroup BB1RGL -L" and "mmlsrecoverygroup BB1RGR -L" to
obtain free space in each DA.
Based on your "mmlsrecoverygroup BB1RGL -L" output
Hi John,
>>We have a GPFS Setup using Fujitsu filers and Mellanox infiniband.
>>The desire is to set up an environment for test and development where if
>>IO ‘runs wild’ it will not bring down the production storage.
You may use the Spectrum Scale Quality of Service for I/O "mmchqos"
command
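A minimal sketch of that approach (the file system name, pool name, and IOPS caps below are assumptions, not values from this thread). Placing the test/dev data in its own storage pool lets you cap its normal ("other") I/O while also throttling maintenance commands:
# mmchqos prodfs --enable pool=testpool,other=5000IOPS,maintenance=1000IOPS
# mmlsqos prodfs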
Hi,
>>Back at SC16 I was told that GPFS 4.2.3.x would remove the “a sub-block
is 1/32nd of the block size” restriction. However, I have installed GPFS
4.2.3.1 on my test cluster and in the man page for mmcrfs I still see:
>>So has the restriction been removed? If not, is there an update on
anager it is set to zero. No matter how many
times I add zero to zero I don’t get a value > 31! ;-) So I take it that
zero has some sort of unspecified significance? Thanks…
Kevin
On May 4, 2017, at 11:49 AM, Kumaran Rajaram <k...@us.ibm.com> wrote:
Hi,
>>I’m running 4.2.2.3
Hi,
>>I’m running 4.2.2.3 on my GPFS servers (some clients are on 4.2.1.1 or
4.2.0.3 and are gradually being upgraded). What version of GPFS fixes
this? With what I’m doing I need the ability to run mmrestripefs.
GPFS version 4.2.3.0 (and above) fixes this issue and supports "sum of
Hi,
>> As I've mentioned before, RAID choices for GPFS are not so simple. Here
are a couple points to consider, I'm sure there's more. And if I'm
wrong, someone will please correct me - but I believe the two biggest
pitfalls are:
>>Some RAID configurations (classically 5 and 6) work best
Hi,
Yes, you could use "mmdf" to obtain file-system "usage" across the NSDs
(comprising the file-system).
If you want to obtain "data block distribution corresponding to a file
across the NSDs", then there is a utility "mmgetlocation" in
/usr/lpp/mmfs/samples/fpo that can be used to get
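For example (the device name is an assumption):
# mmdf gpfs1 --block-size auto
This reports the total and free space per NSD and per storage pool in human-readable units.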