Re: [gpfsug-discuss] immutable folder

2022-02-23 Thread Frederick Stock
Paul, what version of Spectrum Scale are you using? Fred

Re: [gpfsug-discuss] R: Question on changing mode on many files

2021-12-07 Thread Frederick Stock
Yes. Fred

Re: [gpfsug-discuss] Question on changing mode on many files

2021-12-07 Thread Frederick Stock
If you are running on a more recent version of Scale you might want to look at the mmfind command.  It provides a find-like wrapper around the execution of policy rules. Fred
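
A minimal sketch of the kind of mmfind invocation being described (the path and predicates below are placeholders; in some releases mmfind ships as a sample under /usr/lpp/mmfs/samples/ilm and must be built per its README):

# Find-style syntax, executed as a parallel policy scan under the covers
mmfind /gpfs/fs1/projects -type f -mtime +365 -size +1G -ls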

Re: [gpfsug-discuss] mmsysmon exception with pmcollector socket being absent

2021-11-10 Thread Frederick Stock
I am curious to know if you upgraded by manually applying rpms or if you used the Spectrum Scale deployment tool (spectrumscale command) to apply the upgrade? Fred

Re: [gpfsug-discuss] alphafold and mmap performance

2021-10-19 Thread Frederick Stock
FYI Scale 4.2.3 went out of support in September 2020.  DDN may still support it but there are no updates/enhancements being made to that code stream.  It is quite old.  Scale 5.0.x goes end of support at the end of April 2022.  Scale 5.1.2 was just released and if possible I suggest you upgrade

Re: [gpfsug-discuss] mmbackup with own policy

2021-06-23 Thread Frederick Stock
The only requirement for your own backup policy is that it finds the files you want to back up and skips those that you do not want to back up.  It is no different than any policy that you would use with the GPFS policy engine. Fred

Re: [gpfsug-discuss] Quick delete of huge tree

2021-04-20 Thread Frederick Stock
Assuming your metadata storage is not already at its limit of throughput you may get improved performance by temporarily increasing the value of maxBackgroundDeletionThreads.  You can check its current value with "mmlsconfig maxBackgroundDeletionThreads" and change it with the command "mmchconfig
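
A sketch of the commands referenced above (the value 40 is only an example; check the mmchconfig documentation for whether the change takes effect immediately or requires a daemon restart):

# Check the current setting
mmlsconfig maxBackgroundDeletionThreads
# Temporarily raise it, then revert once the deletion work is done
mmchconfig maxBackgroundDeletionThreads=40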

Re: [gpfsug-discuss] Policy scan of symbolic links with contents?

2021-03-08 Thread Frederick Stock
…file system are of interest, just some of them.  I can’t say for sure where in the file system they are (and I don’t care).  Bob Oesterlin, Sr Principal Storage Engineer, Nuance

Re: [gpfsug-discuss] Policy scan of symbolic links with contents?

2021-03-08 Thread Frederick Stock
Could you use the PATH_NAME LIKE statement to limit the location to the files of interest? Fred
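
A sketch of a list rule restricted with PATH_NAME LIKE (the path, rule names and output prefix are made up; extend the WHERE clause with whatever test identifies the files of interest):

cat > /tmp/list_subset.pol <<'EOF'
RULE EXTERNAL LIST 'subset' EXEC ''
RULE 'limitScope' LIST 'subset'
  WHERE PATH_NAME LIKE '/gpfs/fs1/projects/labA/%'
EOF
mmapplypolicy /gpfs/fs1/projects -P /tmp/list_subset.pol -I defer -f /tmp/subset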

Re: [gpfsug-discuss] TSM errors restoring files with ACL's

2021-03-05 Thread Frederick Stock

Re: [gpfsug-discuss] TSM errors restoring files with ACL's

2021-03-05 Thread Frederick Stock
Have you checked to see if Spectrum Protect (TSM) has addressed this problem?  There recently was an issue with Protect and how it used the GPFS API for ACLs.  If I recall, Protect was not properly handling a return code.  I do not know if it is relevant to your problem but it seemed worth

Re: [gpfsug-discuss] Using setfacl vs. mmputacl

2021-03-01 Thread Frederick Stock
To add to Olaf's response, Scale 4.2 is now out of support, as of October 1, 2020.  I do not know if this behavior would change with a more recent release of Scale but it is worth giving that a try if you can.  The most current release of Scale is 5.1.0.2.

Re: [gpfsug-discuss] policy ilm features?

2021-02-02 Thread Frederick Stock
Hello Ed.  Jordan contacted me about the question you are posing so I am responding to your message.  Could you please provide clarification as to why the existence of the unbalanced flag is of a concern, or why you would want to know all the files that have this flag set?  The flag would be

Re: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS?

2020-11-16 Thread Frederick Stock
Have you considered using the AFM feature of Spectrum Scale?  I doubt it will provide any speed improvement but it would allow for data to be accessed as it was being migrated. Fred

Re: [gpfsug-discuss] Poor client performance with high cpu usage of mmfsd process

2020-11-13 Thread Frederick Stock
Scale 4.2.3 was end of service as of September 30, 2020.  As for waiters the mmdiag --waiters command only shows waiters on the node upon which the command is executed.  You should use the command, mmlsnode -N waiters -L, to see all the waiters in the cluster, which may be more revealing as to the
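
The two commands side by side, as mentioned above:

mmdiag --waiters          # waiters on the local node only
mmlsnode -N waiters -L    # waiters across the whole cluster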

Re: [gpfsug-discuss] tsgskkm stuck

2020-08-28 Thread Frederick Stock
Not sure that Spectrum Scale has stated it supports the AMD EPYC (Rome?) processors.  You may want to open a help case to determine the cause of this problem.  Note that Spectrum Scale 4.2.x goes out of service on September 30, 2020 so you may want to consider upgrading your cluster.  And should

Re: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA

2020-07-23 Thread Frederick Stock
And what version of ESS/Scale are you running on your systems (mmdiag --version)? Fred

Re: [gpfsug-discuss] dependent versus independent filesets

2020-07-07 Thread Frederick Stock
One comment about inode preallocation.  There was a time when inode creation was performance challenged but in my opinion that is no longer the case, unless you have need for file creates to complete at extreme speed.  In my experience it is the rare customer that requires extremely fast file

Re: [gpfsug-discuss] Dedicated filesystem for cesSharedRoot

2020-06-25 Thread Frederick Stock
Generally these file systems are configured with a block size of 256KB.  As for inodes, I would not pre-allocate any and would set the initial maximum to a value such as 5000, since it can be increased if necessary. Fred
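
A sketch of a matching mmcrfs invocation (the device name, NSD stanza file and mount point are placeholders; option names should be checked against the mmcrfs man page for your release):

mmcrfs cesroot -F /tmp/cesroot_nsds.stanza -B 256K --inode-limit 5000 -T /gpfs/cesroot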

Re: [gpfsug-discuss] Client Latency and High NSD Server Load Average

2020-06-04 Thread Frederick Stock

Re: [gpfsug-discuss] Client Latency and High NSD Server Load Average

2020-06-03 Thread Frederick Stock
Does the output of mmdf show that data is evenly distributed across your NSDs?  If not, that could be contributing to your problem.  Also, are your NSDs evenly distributed across your NSD servers, and are the NSDs configured so that the first NSD server for each is not the same one?

Re: [gpfsug-discuss] Immutible attribute

2020-06-03 Thread Frederick Stock
Could you please provide the exact Scale version, or was it really 4.2.3.0? Fred

Re: [gpfsug-discuss] Importing a Spectrum Scale a filesystem from 4.2.3 cluster to 5.0.4.3 cluster

2020-06-01 Thread Frederick Stock
Chris, it was not clear to me if the file system you imported had files migrated to Spectrum Protect, that is stub files in GPFS.  If the file system does contain files migrated to Spectrum Protect with just a stub file in the file system, have you tried to recall any of them to see if that still

Re: [gpfsug-discuss] Scale 4.2.3.22 with support for RHEL 7.8 is now on Fix Central

2020-05-14 Thread Frederick Stock
Regarding RH 7.8 support in Scale 5.0.x, we expect it will be supported in the 5.0.5 release, due out very soon, but it may slip to the first PTF on 5.0.5. Fred

Re: [gpfsug-discuss] wait for mount during gpfs startup

2020-04-28 Thread Frederick Stock
Have you looked at the mmaddcallback command and specifically the file system mount callbacks? Fred
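
A sketch of registering a mount callback (the script path is made up; verify the event and parameter names against the mmaddcallback man page):

mmaddcallback afterMount --command /usr/local/sbin/after_mount.sh \
    --event mount --parms "%eventName %fsName"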

Re: [gpfsug-discuss] gpfs filesets question

2020-04-16 Thread Frederick Stock
(quoted mmdf output; the pool total line shows 13473044480 total, 13472497664 (100%) free, 83712 (0%) in fragments)  More or less empty.  Interesting...

Re: [gpfsug-discuss] gpfs filesets question

2020-04-16 Thread Frederick Stock
Do you have more than one GPFS storage pool in the system?  If you do and they align with the filesets then that might explain why moving data from one fileset to another is causing increased IO operations. Fred

Re: [gpfsug-discuss] maxStatCache and maxFilesToCache: Tip"gpfs_maxstatcache_low".

2020-03-13 Thread Frederick Stock
As you have learned there is no simple formula for setting the maxStatCache, or for that matter the maxFilesToCache, configuration values.  Memory is certainly one consideration but another is directory listing operations.  The information kept in the stat cache is sufficient for fulfilling

Re: [gpfsug-discuss] AFM Alternative?

2020-02-26 Thread Frederick Stock
What sources are you using to help you with configuring AFM? Fred

Re: [gpfsug-discuss] AFM Alternative?

2020-02-26 Thread Frederick Stock
Andi, what version of Spectrum Scale do you have installed? Fred

Re: [gpfsug-discuss] Thousands of CLOSE_WAIT IPV6 connections on CES

2020-02-25 Thread Frederick Stock
Colleagues of mine have communicated that this has been seen in the past due to interaction between the Spectrum Scale performance monitor (zimon) and Grafana.  Are you using Grafana?  Normally zimon is configured to use local port 9094 so if that is the port which the CLOSE_WAIT is attached then

Re: [gpfsug-discuss] Upgrade GPFS 3.5 to Spectrum Scale 5.0.3

2020-02-21 Thread Frederick Stock
Assuming you want to maintain your cluster and the file systems you have created you would need to upgrade to Spectrum Scale 4.2.3.x (4.1.x is no longer supported).  I think an upgrade from 3.5 to 4.2 is supported.  Once you have upgraded to 4.2.3.x (the latest PTF is 20, 4.2.3.20) you can then

Re: [gpfsug-discuss] GPFS 5 and supported rhel OS

2020-02-20 Thread Frederick Stock
This is a bit off the point of this discussion but it seemed like an appropriate context for me to post this question.  IMHO the state of software is such that it is expected to change rather frequently, for example the OS on your laptop/tablet/smartphone and your web browser.  It is correct to

Re: [gpfsug-discuss] How to upgrade GPFS 4.2.3.2 version?

2020-01-27 Thread Frederick Stock
Note that Spectrum Scale 4.2.x will be end of service in September 2020.  I strongly suggest you consider upgrading to Spectrum Scale 5.0.3 or later. Fred

Re: [gpfsug-discuss] mmapplypolicy - listing policy gets occasionally get stuck.

2020-01-14 Thread Frederick Stock
When this occurs have you run the command, mmlsnode -N waiters -L, to see the list of waiters in the cluster?  That may provide clues as to why the policy seems stuck. Fred

Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

2019-12-05 Thread Frederick Stock
If you plan to replace all the storage then why did you choose to integrate an ESS GL2 rather than use another storage option?  Perhaps you had already purchased the ESS system? Fred

Re: [gpfsug-discuss] GPFS on RHEL 8.1

2019-11-11 Thread Frederick Stock
I was reminded by a colleague that RHEL 8.1 support is expected in the first quarter of 2020. Fred

Re: [gpfsug-discuss] GPFS on RHEL 8.1

2019-11-11 Thread Frederick Stock
RHEL 8.1 is not yet supported so the mmbuildgpl error is not unexpected.  I do not recall when RHEL 8.1 will be supported. Fred

Re: [gpfsug-discuss] ESS - Considerations when adding NSD space?

2019-10-24 Thread Frederick Stock
Bob, as I understand it, having different size NSDs is still not considered ideal, even for ESS.  I had another customer recently add storage to an ESS system and they were advised to first check the size of their current vdisks and size the new vdisks to be the same.

Re: [gpfsug-discuss] mmbackup questions

2019-10-17 Thread Frederick Stock
Jonathan the "objects inspected" refers to the number of file system objects that matched the policy rules used for the backup.  These rules are influenced by TSM server and client settings, e.g. the dsm.sys file.  So not all objects in the file system are actually inspected.   As for tuning I

Re: [gpfsug-discuss] default owner and group for POSIX ACLs

2019-10-16 Thread Frederick Stock
Paul, regarding your question, I would think you want to use NFSv4 ACLs and set the chmodAndUpdateAcl option on the fileset (see mmcrfileset/mmchfileset). Fred
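
A sketch of setting that option on an existing fileset (the file system and fileset names are placeholders):

mmchfileset fs1 myfileset --allow-permission-change chmodAndUpdateAcl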

Re: [gpfsug-discuss] default owner and group for POSIX ACLs

2019-10-15 Thread Frederick Stock
Thanks Paul.  Could you please clarify which ACL you changed, the GPFS NFSv4 ACL or the POSIX ACL? Fred

Re: [gpfsug-discuss] default owner and group for POSIX ACLs

2019-10-15 Thread Frederick Stock
As I understand if you change only the POSIX attributes on a file then you are correct that TSM will only backup the file metadata, actually just the POSIX relevant metadata.  However, if you change ACLs or other GPFS specific metadata then TSM will backup the entire file, TSM does not keep all

Re: [gpfsug-discuss] Backup question

2019-08-29 Thread Frederick Stock
The only integrated solution, that is using mmbackup, is with Spectrum Protect.  However, you can use other products to backup GPFS file systems assuming they use POSIX commands/interfaces to do the backup.  I think CommVault has a backup client that makes use of the GPFS policy engine so it runs

Re: [gpfsug-discuss] Checking for Stale File Handles

2019-08-09 Thread Frederick Stock
Are you able to explain why you want to check for stale file handles?  Are you attempting to detect failures of some sort, and why do the existing mechanisms in GPFS not provide the functionality you require? Fred

Re: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question

2019-05-20 Thread Frederick Stock
Todd, I am not aware of any tool that provides the out-of-band recall that you propose, though it would be quite useful.  However, I wanted to note that, as I understand it, the reason the Mac client initiates the file recalls is that the Mac SMB client ignores the archive bit, indicating a file

Re: [gpfsug-discuss] GPFS v5: Blocksizes and subblocks

2019-03-27 Thread Frederick Stock
Kevin, you are correct, it is one "system" storage pool per file system, not per cluster. Fred

Re: [gpfsug-discuss] mmlsquota output

2019-03-25 Thread Frederick Stock
It seems like a defect so I suggest you submit a help case for it.  If you are parsing the output you should consider using the -Y option since that should simplify any parsing.  I do not know if the mmrepquota command would be helpful to you but it is worth taking a look.
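
For example, assuming a file system fs1 and a fileset myfileset (names are placeholders), the -Y forms produce colon-delimited output that is easier to parse:

mmlsquota -Y -j myfileset fs1    # quota for one fileset, machine-readable
mmrepquota -Y -u fs1             # per-user report for the whole file system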

Re: [gpfsug-discuss] Calculate evicted space with a policy

2019-03-19 Thread Frederick Stock
You can scan for files using the MISC_ATTRIBUTES and look for those that are not cached, that is without the 'u' setting, and track their file size.  I think that should work. Fred
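
A sketch of such a scan for an AFM cache fileset (the paths and rule names are made up; the evicted space can be totalled from the generated list file afterwards):

cat > /tmp/evicted.pol <<'EOF'
RULE EXTERNAL LIST 'evicted' EXEC ''
RULE 'findEvicted' LIST 'evicted'
  SHOW(VARCHAR(FILE_SIZE))
  WHERE NOT (MISC_ATTRIBUTES LIKE '%u%')   /* 'u' indicates the file data is cached */
EOF
mmapplypolicy /gpfs/fs1/cachefset -P /tmp/evicted.pol -I defer -f /tmp/evicted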

Re: [gpfsug-discuss] Systemd configuration to wait for mount of SS filesystem

2019-03-14 Thread Frederick Stock
…FS-filesystem-based commands I really want, I don't think mmaddcallback is the droid I'm looking for.  Stephen R. Wall Buchanan, Sr. IT Specialist, IBM Data & AI North America Government Expert Labs

Re: [gpfsug-discuss] Systemd configuration to wait for mount of SS filesystem

2019-03-14 Thread Frederick Stock
It is not systemd based but you might want to look at the user callback feature in GPFS (mmaddcallback).  There is a file system mount callback you could register. Fred

Re: [gpfsug-discuss] mmbackup: how to keep list(expiredFiles, updatedFiles) files

2019-03-12 Thread Frederick Stock
In the mmbackup man page look at the settings for the DEBUGmmbackup variable.  There is a value that will keep the temporary files. Fred
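
A sketch, assuming the variable works as described in the mmbackup man page (the exact value that retains the temporary file lists should be confirmed there):

export DEBUGmmbackup=2     # assumption: value 2 keeps the temporary policy/list files
mmbackup /gpfs/fs1 -t incremental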

Re: [gpfsug-discuss] Migrating billions of files?

2019-03-06 Thread Frederick Stock
Does Aspera require a license? Fred

Re: [gpfsug-discuss] Clarification of mmdiag --iohist output

2019-02-21 Thread Frederick Stock
Kevin I'm assuming you have seen the article on IBM developerWorks about the GPFS NSD queues.  It provides useful background for analyzing the dump nsd information.  Here I'll list some thoughts for items that you can investigate/consider.   If your NSD servers are doing both large (greater than

Re: [gpfsug-discuss] Filesystem automount issues

2019-01-16 Thread Frederick Stock
What does the output of "mmlsmount all -L" show? Fred

Re: [gpfsug-discuss] GPFS nodes crashing during policy scan

2018-12-06 Thread Frederick Stock
…something like this or have any suggestions on what to do to avoid it? Thanks. John Ratliff | Pervasive Technology Institute | UITS | Research Storage – Indiana University | http://pti.iu.edu

Re: [gpfsug-discuss] Long I/O's on client but not on NSD server(s)

2018-10-04 Thread Frederick Stock
My first guess would be the network between the NSD client and NSD server. netstat and ethtool may help to determine where the cause may lie, if it is on the NSD client. Obviously a switch on the network could be another source of the problem. Fred

Re: [gpfsug-discuss] Optimal range on inode count for a single folder

2018-09-11 Thread Frederick Stock
I am not sure I can provide you an optimal range but I can list some factors to consider. In general the guideline is to keep directories to 500K files or so. Keeping your metadata on separate NSDs, and preferably fast NSDs, helps especially with directory listings. And running the latest

Re: [gpfsug-discuss] RAID type for system pool

2018-09-10 Thread Frederick Stock
My guess is that the "metadata" IO is either for directory data, since directories are considered metadata, or for fileset metadata. Fred

Re: [gpfsug-discuss] Problem with mmlscluster and callback scripts

2018-09-07 Thread Frederick Stock
Are you really running version 5.0.2? If so then I presume you have a beta version since it has not yet been released. For beta problems there is a specific feedback mechanism that should be used to report problems. Fred

Re: [gpfsug-discuss] mmbackup failed

2018-09-05 Thread Frederick Stock
There are options in the mmbackup command to rebuild the shadowDB file from data kept in TSM. Be aware that using this option will take time to rebuild the shadowDB file, i.e. it is not a fast procedure. Fred
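
A sketch of the rebuild, assuming the -q option described in the mmbackup man page is the one intended here:

# -q queries the Spectrum Protect (TSM) server to rebuild the shadow database;
# expect this to run for a long time on large file systems
mmbackup /gpfs/fs1 -q -t incremental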

Re: [gpfsug-discuss] RAID type for system pool

2018-09-05 Thread Frederick Stock
Another option for saving space is to not keep 2 copies of the metadata within GPFS. The SSDs are mirrored so you have two copies though very likely they share a possible single point of failure and that could be a deal breaker. I have my doubts that RAID5 will perform well for the reasons

Re: [gpfsug-discuss] Rebalancing with mmrestripefs -P

2018-08-20 Thread Frederick Stock
That should do what you want. Be aware that mmrestripefs generates significant IO load so you should either use the QoS feature to mitigate its impact or run the command when the system is not very busy. Note you have two more NSDs in the 33 failure group than you do in the 23 failure
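
A sketch of throttling the restripe with QoS (the file system, pool name and IOPS limit are placeholders; check the exact mmchqos syntax for your release):

mmchqos fs1 --enable pool=*,maintenance=2000IOPS,other=unlimited
mmrestripefs fs1 -b -P mypool    # rebalance the NSDs in storage pool "mypool"
mmchqos fs1 --disable            # remove the throttle when done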

Re: [gpfsug-discuss] Same file opened by many nodes / processes

2018-07-23 Thread Frederick Stock
Have you considered keeping the 1G network for daemon traffic and moving the data traffic to another network? Given the description of your configuration with only 2 manager nodes handling mmbackup and other tasks my guess is that is where the problem lies regarding performance when mmbackup

Re: [gpfsug-discuss] preventing HSM tape recall storms

2018-07-09 Thread Frederick Stock
Another option is to request Apple to support the OFFLINE flag in the SMB protocol. The more Mac customers making such a request (I have asked others to do likewise) might convince Apple to add this checking to their SMB client. Fred

Re: [gpfsug-discuss] High I/O wait times

2018-07-03 Thread Frederick Stock
…SD servers. And a while back I did try increasing the page pool on them (very slightly) and ended up causing problems because then they ran out of physical RAM. Thoughts? Followup questions? Thanks! Kevin

Re: [gpfsug-discuss] High I/O wait times

2018-07-03 Thread Frederick Stock
Are you seeing similar values for all the nodes or just some of them? One possible issue is how the NSD queues are configured on the NSD servers. You can see this with the output of "mmfsadm dump nsd". There are queues for LARGE IOs (greater than 64K) and queues for SMALL IOs (64K or less).

Re: [gpfsug-discuss] Thousands of CLOSE_WAIT connections

2018-06-15 Thread Frederick Stock
Assuming CentOS 7.5 parallels RHEL 7.5 then you would need Spectrum Scale 4.2.3.9 because that is the release version (along with 5.0.1 PTF1) that supports RHEL 7.5. Fred

Re: [gpfsug-discuss] RHEL updated to 7.5 instead of 7.4

2018-06-11 Thread Frederick Stock
Spectrum Scale 4.2.3.9 does support RHEL 7.5. Fred

Re: [gpfsug-discuss] pool-metadata_high_error

2018-05-14 Thread Frederick Stock
The difference in your inode information is presumably because the fileset you reference is an independent fileset and it has its own inode space, distinct from the inode space used for the "root" fileset (file system). Fred

Re: [gpfsug-discuss] mmapplypolicy on nested filesets ...

2018-04-18 Thread Frederick Stock
Would the PATH_NAME LIKE option work? Fred

Re: [gpfsug-discuss] GPFS autoload - wait for IB ports tobecomeactive

2018-03-15 Thread Frederick Stock
…shutdown" if it has failed to connect IB?

Re: [gpfsug-discuss] GPFS autoload - wait for IB ports to becomeactive

2018-03-08 Thread Frederick Stock
You could also use the GPFS prestartup callback (mmaddcallback) to execute a script synchronously that waits for the IB ports to become available before returning and allowing GPFS to continue. Not systemd integrated but it should work. Fred
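
A sketch of what that registration could look like (the wait script is made up; verify the event name and the --sync option against the mmaddcallback man page):

mmaddcallback waitForIB --command /usr/local/sbin/wait_for_ib.sh \
    --event preStartup --sync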

Re: [gpfsug-discuss] Inode scan optimization

2018-02-08 Thread Frederick Stock
You mention that all the NSDs are metadata and data but you do not say how many NSDs are defined or the type of storage used, that is are these on SAS or NL-SAS storage? I'm assuming they are not on SSDs/flash storage. Have you considered moving the metadata to separate NSDs, preferably

Re: [gpfsug-discuss] Metadata only system pool

2018-01-23 Thread Frederick Stock
…2018, at 11:27 AM, Alex Chekholko wrote: 2.8TB seems quite high for only 350M inodes. Are you sure you only have metadata in there? On Tue, Jan 23, 2018 at 9:25 AM, Frederick Stock wrote: One possibility is the creation/expansion of directories

Re: [gpfsug-discuss] gpfs 4.2.3.5 and RHEL 7.4...

2017-12-18 Thread Frederick Stock
Yes the integrated protocols are the Samba and Ganesha that are bundled with Spectrum Scale. These require the use of the CES component for monitoring the protocols. If you do use them then you need to wait for a release of Spectrum Scale in which the integrated protocols are also supported

Re: [gpfsug-discuss] Specifying nodes in commands

2017-11-10 Thread Frederick Stock
How do you determine if mmapplypolicy is running on a node? Normally mmapplypolicy as a process runs on a single node but its helper processes, policy-help or something similar, run on all the nodes which are referenced by the -N option. Fred

Re: [gpfsug-discuss] mmrestripefs "No space left on device"

2017-11-02 Thread Frederick Stock
…earlier configuration change the file system is no longer properly replicated. I thought a 'mmrestripe -r' would fix this, not that I have to fix it first before restriping? jbh

Re: [gpfsug-discuss] mmrestripefs "No space left on device"

2017-11-02 Thread Frederick Stock
Assuming you are replicating data and metadata have you confirmed that all failure groups have the same free space? That is could it be that one of your failure groups has less space than the others? You can verify this with the output of mmdf and look at the NSD sizes and space available.

Re: [gpfsug-discuss] Checking a file-system for errors

2017-10-11 Thread Frederick Stock
Generally you should not run mmfsck unless you see MMFS_FSSTRUCT errors in your system logs. To my knowledge online mmfsck only checks for a subset of problems, notably lost blocks, but that situation does not indicate any problems with the file system. Fred

Re: [gpfsug-discuss] GPFS 4.2.3.4 question

2017-08-26 Thread Frederick Stock
The only change missing is the change delivered in 4.2.3 PTF3 efix3 which was provided on August 22. The problem had to do with NSD deletion and creation. Fred

Re: [gpfsug-discuss] Question regarding "scanning file system metadata"bug

2017-08-22 Thread Frederick Stock
My understanding is that the problem is not with the policy engine scanning but with the commands that move data, for example mmrestripefs. So if you are using the policy engine for other purposes you are not impacted by the problem. Fred

Re: [gpfsug-discuss] gpfs waiters debugging

2017-06-06 Thread Frederick Stock
…We've found it useful - if you have 1 waiter on one node that's 1278 seconds old, and 3 other nodes have waiters that are 1275 seconds old, it's a good chance the other 3 nodes' waiters are waiting on the first node's waiter to resolve itself.

Re: [gpfsug-discuss] gpfs waiters debugging

2017-06-06 Thread Frederick Stock
Realize that generally any waiter under 1 second should be ignored. In an active GPFS system there are always waiters and the greater the use of the system likely the more waiters you will see. The point is waiters themselves are not an indication your system is having problems. As for

Re: [gpfsug-discuss] Policy scan against billion files for ILM/HSM

2017-04-11 Thread Frederick Stock
As Zachary noted the location of your metadata is the key and for the scanning you have planned flash is necessary. If you have the resources you may consider setting up your flash in a mirrored RAID configuration (RAID1/RAID10) and have GPFS only keep one copy of metadata since the

Re: [gpfsug-discuss] fix mmrepquota report format during grace periods

2017-03-28 Thread Frederick Stock
My understanding is that with the upcoming 4.2.3 release the -Y option will be documented for many commands, but perhaps not all. Fred

Re: [gpfsug-discuss] Problem installin CES - 4.2.2-2

2017-03-08 Thread Frederick Stock
What version of Python do you have installed? I think you need at least version 2.7. Fred