Alvise
From: gpfsug-discuss-boun...@spectrumscale.org
On behalf of Dorigo Alvise (PSI)
Sent: Monday, 13 December 2021 11:50
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Re: Question on changing mode on many files
I am definitely going to try this solution with mmfind.
Thank you also for the command line and several hints… I’ll be back with the
outcome soon.
Alvise
From: gpfsug-discuss-boun...@spectrumscale.org
On behalf of Alec
Sent: Sunday, 12 December 2021 23:04
To: gpfsug main discussion list
Development Advocacy | 720-430-8821
sto...@us.ibm.com
----- Original message -----
From: "Dorigo Alvise (PSI)" mailto:alvise.dor...@psi.ch>>
Sent by:
gpfsug-discuss-boun...@spectrumscale.org
Dear users/developers/support,
I'd like to ask if there is a fast way to manipulate the permission mask of
many files (millions).
I tried on 900k files and a recursive chmod (chmod 0### -R path) takes about
1000s, with about 50% usage of mmfsd daemon.
I tried with Perl's internal function
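For the record, a minimal sketch of the mmfind approach suggested above (path and mode are placeholders; mmfind ships as buildable source under the GPFS samples directory, so adapt to your install). It drives a parallel mmapplypolicy scan instead of a serial directory walk, and -xargs batches the matched files into chmod invocations:
# build the mmfind helper once, then run it against the target tree
cd /usr/lpp/mmfs/samples/ilm && make
./mmfind /gpfs/fs0/dir -type f -xargs chmod 0660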
...@lists.psi.ch
On behalf of Dorigo Alvise (PSI)
Sent: Tuesday, 20 April 2021 14:38
To: 'sls-htc-adm...@lists.psi.ch' ; 'Henson Jr.,Larry J' ; 'gpfsug main discussion list'
Cc: 'gpfsug-discuss-boun...@spectrumscale.org'
Subject: [sls-htc-admins] Re: [EXT] [gpfsug-discuss] Re: NFSIO metrics
Team
Office (832) 750-1403
Cell (713) 702-4896
From:
gpfsug-discuss-boun...@spectrumscale.org
On Behalf Of Dorigo Alvise (PSI)
Sent: Tuesday,
"Dorigo Alvise (PSI)" wrote on 20.04.2021 12:19:44: Dear Community, I've activated CES-related metrics by simply doing:
From:
NFSIO metrics absent in pmcollector
Have you installed the gpfs.pm-ganesha package, and do you have any active NFS
exports/clients?
-jf
On Tue, Apr 20, 2021 at 12:19 PM Dorigo Alvise (PSI)
<alvise.dor...@psi.ch> wrote:
Dear Community,
I've activated CES-related metrics by simply doing:
[root@xbl-ces-91 ~]# mmperfmon config show |egrep -A4 'NFS|SMB'
name = "NFSIO"
period = 5
},
{
name = "SMBGlobalStats"
period = 5
},
{
name = "SMBStats"
period = 5
}
Despite that,
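A hedged checklist for this kind of missing-metric situation (the node name, bucket counts and the metric key are assumptions patterned on the query syntax used elsewhere in this digest):
# is the Ganesha proxy sensor package installed on the CES node?
rpm -q gpfs.pm-ganesha
# is NFS actually running and exporting anything?
mmces service list -a
# does a direct query return data for the node?
mmperfmon query 'xbl-ces-91.psi.ch|NFSIO|nfs_read_ops' --number-buckets 10 -b 5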
Append - the file has been appended but not yet replicated. For directories this is the complete bit, which indicates that a readdir was performed.
~Venkat (vpuvv...@in.ibm.com)
From:"Dorigo Alvise (PSI)"
To:gpfsug main discussion list
Date:09/21/2020 04:02
Dear GPFS users,
I know that through a policy one can know if a file is still being transferred
from the cache to your home by AFM.
I wonder if there is another method @cache or @home side, faster and less
invasive (a policy, as far as I know, can put some pressure on the system when
there are
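A coarser but cheaper cache-side check than a full policy scan (filesystem and fileset names are placeholders): the per-fileset AFM queue length drains to zero once all pending changes have been replicated to home.
# queue length and fileset state at the cache
mmafmctl fs0 getstate -j my_afm_fileset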
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
sto...@us.ibm.com
----- Original message -----
From: "Dorigo Alvise (PSI)"
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list
Cc:
Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote:
Thanks, Anderson, for the material. In principle our idea was to scratch the
filesystem in the GL2, put its NSD on a dedicated pool and then merge it into
the Filesystem which would remain on V4. I do
Phone: 55-19-2132-4317
E-mail: ano...@br.ibm.com [IBM]
----- Original message -----
From: "Dorigo Alvise (PSI)"
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: "gpfsug-discuss@spectrumscale.org"
Cc:
Subject:
got a new ESS (no data on it)... simply unconfigure the cluster... add the nodes to your existing cluster... and then start configuring the RGs
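A very rough sketch of that sequence (node and recovery-group names are placeholders; the authoritative procedure is in the ESS documentation and needs a planned maintenance window):
# on the new, empty ESS: tear down its own cluster
mmshutdown -N essio1,essio2
mmdelnode -N essio1,essio2
# from the existing (NetApp) cluster: absorb the nodes, then build the RGs
mmaddnode -N essio1,essio2
mmcrrecoverygroup rgL -F rgL.stanza --servers essio1,essio2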
From:"Dorigo Alvise (PSI)"
To:"gpfsug-discuss@spectrumscale.org"
Date:12/03/2019 09:35 AM
Subject:
Hello everyone,
I have:
- A NetApp system with hardware RAID
- SpectrumScale 4.2.3-13 running on top of the NetApp
- A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)
What I need to do is merge the GL2 into the other GPFS cluster (running on the NetApp) without losing, of course,
Hello folks,
recently I observed that running the command "lsscsi -g" every 5 minutes on a Lenovo I/O node (an X3650 M5 connected to D3284 enclosures, part of a DSS-G220 system) can seriously compromise GPFS I/O performance.
(The motivation for running lsscsi every 5 minutes is a bit out of
know if you are using kernel nfs or Ganesha and on which release? In my
opinion Kernel NFS should work.
Thanks,
Abhishek, Dave
"Dorigo Alvise (PSI)" wrote on 08/12/2019 06:40:34 PM:
Dear GPFS users,
does anybody know if AFM behaves correctly if the AFM gateway has SELinux "Disabled" and the NFS server has SELinux "Enforcing"?
thanks,
Alvise
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
...@spectrumscale.org
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Dorigo Alvise (PSI)
[alvise.dor...@psi.ch]
Sent: Friday, June 28, 2019 9:25 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Problem with GUI reporting gui_refresh_task_failed
The tarball 5.0.2-3 we have doesn't have the .7
"Dorigo Alvise (PSI)" wrote on 26.06.2019 11:25:39: Hello, after upgrading my GL2 to ESS 5.3.2-1 I started to periodically get this warning from the GUI
From: "Dorigo Alvis
Hello,
after upgrading my GL2 to ESS 5.3.2-1 I started to periodically get this
warning from the GUI:
sf-gpfs.psi.ch sf-ems1.psi.ch gui_refresh_task_failed NODE sf-ems1.psi.ch
WARNING The following GUI refresh task(s) failed: HEALTH_TRIGGERED;HW_INVENTORY
The upgrade procedure was
Hello,
to get the list (and size) of files that fit into inodes, what I do, using a policy, is list "online" (not evicted) files that have zero allocated KB.
Is this correct, or is there some exception I'm missing?
Is there a smarter/faster way?
thanks,
Alvise
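For reference, a minimal sketch of the policy described above (filesystem path and list names are placeholders). KB_ALLOCATED = 0 with a nonzero size catches data-in-inode files; note that on an AFM cache, evicted files match the same clause, which is exactly the exception discussed elsewhere in this digest.
cat > in_inode.pol <<'EOF'
RULE 'in_inode' LIST 'small' SHOW(VARCHAR(FILE_SIZE))
  WHERE KB_ALLOCATED = 0 AND FILE_SIZE > 0
EOF
mmapplypolicy /gpfs/fs0 -P in_inode.pol -f /tmp/in_inode -I defer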
Hi Marc,
"Indirect block size" is well explained in this presentation:
http://files.gpfsug.org/presentations/2016/south-bank/D2_P2_A_spectrum_scale_metadata_dark_V2a.pdf
pages 37-41
Cheers,
Alvise
From: gpfsug-discuss-boun...@spectrumscale.org
-- ddj
Dave Johnson
On Mar 21, 2019, at 9:22 AM, Dorigo Alvise (PSI)
<alvise.dor...@psi.ch> wrote:
Hi,
I'm a little bit puzzled about the different meanings of blocksize for different GPFS installations (standard and GNR).
From this page
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/File%20System%20Planning
I read:
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
sto...@us.ibm.com
----- Original message -----
From: "Dorigo Alvise (PSI)"
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: "gpfsug-discuss@spectrumscale.org"
Cc:
Subject: [gpfsug-discuss] Calculate evicted space
Dear users,
is there a way (through a policy) to list the files (and their size) that are
actually completely evicted by AFM from the cache filesystem?
I used a policy with the clause KB_ALLOCATED = 0, but it is clearly not precise, because it also includes files that are not evicted but are so small that their data fits in the inode.
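One hedged way to tighten the clause (the 4 KB inode size is an assumption; check your filesystem with mmlsfs -i): a file larger than the inode payload cannot be data-in-inode, so zero allocation then really does mean evicted.
cat > evicted.pol <<'EOF'
RULE 'evicted' LIST 'evicted' SHOW(VARCHAR(FILE_SIZE))
  WHERE KB_ALLOCATED = 0 AND FILE_SIZE > 4096
EOF
mmapplypolicy /gpfs/cache -P evicted.pol -f /tmp/evicted -I defer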
I look at /proc/$pid/status to find the memory used by a process: RSS + Swap + kernel page tables.
Jim
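A one-liner along those lines (the PID is a placeholder): VmRSS, VmSwap and VmPTE in /proc/$pid/status are the resident, swapped and page-table kilobytes of a single process.
pid=7510
awk '/^(VmRSS|VmSwap|VmPTE):/ { s += $2 } END { print s " kB" }' "/proc/$pid/status"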
On Wednesday, March 6, 2019, 4:25:48 AM EST, Dorigo Alvise (PSI)
wrote:
Hello to everyone,
Here at PSI we're observing something that in principle seems strange (at least to me).
We run a Java application that writes to disk by means of a standard AsynchronousFileChannel, whose details I do not know.
There are two instances of this application: one runs on a node writing
space, and
allocated vdisks.
thanks,
Alvise
From: Sandeep Naik1 [sanna...@in.ibm.com]
Sent: Tuesday, February 12, 2019 8:50 PM
To: gpfsug main discussion list; Dorigo Alvise (PSI)
Subject: Re: [gpfsug-discuss] Unbalanced pdisk free space
Hi Alvise,
Here
Hello,
I've a Lenovo Spectrum Scale system DSS-G220 (software dss-g-2.0a) composed of
2x x3560 M5 IO server nodes
1x x3550 M5 client/support node
2x disk enclosures D3284
GPFS/GNR 4.2.3-7
Can anybody tell me if it is normal that all the pdisks of both my recovery
groups, residing on the same
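Two hedged ways to inspect the distribution (the recovery group name is a placeholder): -L prints per-declustered-array capacity and free space, and mmlspdisk prints per-pdisk details, including free space.
mmlsrecoverygroup rg_gssio1 -L
mmlspdisk rg_gssio1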
Perry
Scalable I/O Development (Spectrum Scale)
email: t...@il.ibm.com
1 Azrieli Center, Tel Aviv 67021, Israel
Global Tel:+1 720 3422758
Israel Tel: +972 3 9188625
Mobile: +972 52 2554625
From:"Dorigo Alvise (PSI)"
To:gpfsug main discussion list
Hi Tomer,
"changed" makes me suppose that it is still possible, but in a different way... am I correct? If yes, what is it?
thanks,
Alvise
From: gpfsug-discuss-boun...@spectrumscale.org
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Tomer Perry
Hi, in GPFS 5.0.2 I can no longer run "mmfsadm test verbs status":
[root@sf-dss-1 ~]# mmdiag --version ; mmfsadm test verbs status
=== mmdiag: version ===
Current GPFS build: "4.2.3.7 ".
Built on Feb 15 2018 at 11:38:38
Running 62 days 11 hours 24 minutes 35 secs, pid 7510
VERBS RDMA status:
infiniband using zimon sensors?
Thanks,
Alvise
From: Karthik G Iyer1 [karthik.i...@in.ibm.com] on behalf of IBM Spectrum Scale
[sc...@us.ibm.com]
Sent: Thursday, December 06, 2018 12:35 PM
To: Mathias Dietz
Cc: gpfsug main discussion list; Dorigo Alvise (PSI
Good morning,
I wonder if pmsensors 5.x is supported with a 4.2.3 collector.
Since I've upgraded a couple of nodes (AFM gateways) to GPFS 5, while the rest of the cluster is still running 4.2.3-7 (including the collector), I haven't received any more metrics from the upgraded nodes.
Further, I do
Good evening,
I'm following this guide:
https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1ins_paralleldatatransfersafm.htm
to setup AFM parallel transfer.
Why does the following command (grabbed directly from the web page above) produce that error?
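For context, a hedged sketch of the mapping step from that page (hostnames are placeholders; each entry pairs an NFS export server at home with a gateway node at the cache):
mmafmconfig add psi_map --export-map nfsserver1/gateway1,nfsserver2/gateway2
mmafmconfig show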
...@de.ibm.com]
Sent: Thursday, November 15, 2018 9:35 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Wrong behavior of mmperfmon
ntp running / time correct?
From:"Dorigo Alvise (PSI)"
To:"gpfsug-discuss@spectrumscale.org"
Date:
Hello,
I'm using mmperfmon to get write stats on the NSDs during write activity on a GPFS filesystem (Lenovo system with dss-g-2.0a).
I use this command:
# mmperfmon query 'sf-dssio-1.psi.ch|GPFSNSDFS|RAW|gpfs_nsdfs_bytes_written'
--number-buckets 48 -b 1
to get the stats. What it returns is a
Dear experts,
during intensive writing into a GPFS FS (~9.5 GB/s), if I run mmperfmon to collect performance data I get many "null" strings instead of real data:
[root@sf-dss-1 ~]# date;mmperfmon query
'sf-dssio-.*.psi.ch|GPFSNSDFS|RAW|gpfs_nsdfs_bytes_written' --short
--number-buckets 10
e used for
priority messages to the Spectrum Scale (GPFS) team.
"Dorigo Alvise (PSI)" wrote on 13.07.2018 12:08:59: Hi, I've a GL2 cluster based on gpfs 4.2.3-6, with 1 s
t you may
have an issue with redundancy or loss of bandwidth if you do not have every
port cabled and configured correctly.
Regards
Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
- Original mess
Dear members,
at PSI I'm trying to integrate the CES service with our AD authentication
system.
My understanding, after talking to expert people here, is that I should use the
RFC2307 model for ID mapping (described here: https://goo.gl/XvqHDH). The
problem is that our ID schema is slightly
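A hedged sketch of what RFC2307-style ID mapping looks like at CES authentication setup time (AD server, admin user, NetBIOS name, domain and UID/GID range are all placeholders):
mmuserauth service create --data-access-method file --type ad \
    --servers ad.example.psi.ch --user-name ad_admin \
    --netbios-name cescluster --idmap-role master \
    --unixmap-domains 'D3(1000-3000000)'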
Oops, sorry! Wrong window!
please remove it...
sorry.
Alvise Dorigo
From: Dorigo Alvise (PSI)
Sent: Wednesday, May 23, 2018 9:41 AM
To: gpfsug main discussion list
Subject: RE: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71
Hi Felix,
yes please
Hi Atmane,
in my GL2 system I can do:
# mmlsenclosure all -L | grep -B2 tempSensor
component type   serial number   component id   failed   value   unit   properties
--------------   -------------   ------------   ------   -----   ----   ----------
tempSensor                       XDCM_0A
Hi Olaf,
yes, we have separate vdisks for MD: 2 vdisks, each 100 GBytes, with 1 MByte blocksize and 3WayReplication.
A
From: gpfsug-discuss-boun...@spectrumscale.org
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Olaf Weiser
[olaf.wei...@de.ibm.com]