[gpfsug-discuss] QoS question

2016-06-14 Thread Buterbaugh, Kevin L
Hi All, We have recently upgraded to GPFS 4.2.0-3 and so I am getting ready to dive into my first attempts at using the new QoS features. I want to make sure I am understanding the documentation: "The IOPS values that you set in an mmchqos command apply to all I/O operations that are issued
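
For later readers of this thread, the basic enable-and-watch sequence looks roughly like the sketch below. The filesystem name gpfs0 is a placeholder, and the exact option syntax should be verified against the mmchqos and mmlsqos man pages for your release.

    mmchqos gpfs0 --enable pool=system,maintenance=300IOPS,other=unlimited
    mmlsqos gpfs0 --seconds 60    # show the IOPS each QoS class is actually consuming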

Re: [gpfsug-discuss] Snapshots / Windows previous versions

2016-06-20 Thread Buterbaugh, Kevin L
Hi Richard, I can’t answer your question but I can tell you that we have experienced either the exact same thing you are seeing or something very similar. It occurred for us after upgrading from GPFS 3.5 to 4.1.0.8 and it persists even after upgrading to GPFS 4.2.0.3 and the very latest sernet-samba.

[gpfsug-discuss] Initial file placement

2016-06-17 Thread Buterbaugh, Kevin L
rbaugh, Kevin L <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> wrote: Hi All, I am aware that with the mmfileid command I can determine which files have blocks on a given NSD. But is there a way to query a particular file to see which NSD(s) it has

Re: [gpfsug-discuss] Can you query which NSDs a given file resides on?

2016-06-17 Thread Buterbaugh, Kevin L
it resides? If you're looking for the specific disks, how are you using the mmlsattr command to accomplish that? Jamie Jamie Davis GPFS Functional Verification Test (FVT) jamieda...@us.ibm.com<mailto:jamieda...@us.ibm.com> - Original message - From: "Buterbaugh, K

[gpfsug-discuss] make InstallImages errors

2016-04-22 Thread Buterbaugh, Kevin L
Hi All, We have a small test cluster that I am upgrading from GPFS 4.1.0.8 (efix21) to GPFS 4.2.0.2. I noticed that on 2 of my 3 NSD servers I received the following errors: /usr/lpp/mmfs/src root@testnsd3# make InstallImages (cd gpl-linux; /usr/bin/make InstallImages; \ exit $?) || exit 1
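
For reference, rebuilding the GPL (portability) layer after an upgrade is roughly the sequence below; on 4.2 and later the mmbuildgpl wrapper performs the same steps. Paths assume the standard install location.

    cd /usr/lpp/mmfs/src
    make Autoconfig && make World && make InstallImages
    # or, on recent releases:
    /usr/lpp/mmfs/bin/mmbuildgpl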

Re: [gpfsug-discuss] CNFS and multiple IP addresses

2016-05-03 Thread Buterbaugh, Kevin L
mount from an old address. Hope that helps! -Bryan From: gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org> [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: Tuesday, May 03, 2016 3:20 PM To: gpfsug ma

Re: [gpfsug-discuss] Aggregating filesystem performance

2016-07-13 Thread Buterbaugh, Kevin L
Hi Bob, I am also in the process of setting up monitoring under GPFS (and it will always be GPFS) 4.2 on our test cluster right now and would also be interested in the experiences of others more experienced and knowledgeable than myself. Would you considering posting to the list? Or is there

[gpfsug-discuss] mmchqos and already running maintenance commands

2016-07-29 Thread Buterbaugh, Kevin L
Hi All, Looking for a little clarification here … in the man page for mmchqos I see: * When you change allocations or mount the file system, a brief delay due to reconfiguration occurs before QoS starts applying allocations. If I’m already running a maintenance command and then I run an

[gpfsug-discuss] User group meeting at SC16?

2016-08-10 Thread Buterbaugh, Kevin L
Hi All, Just got an e-mail from DDN announcing that they are holding their user group meeting at SC16 on Monday afternoon like they always do, which is prompting me to inquire if IBM is going to be holding a meeting at SC16? Last year in Austin the IBM meeting was on Sunday afternoon, which

Re: [gpfsug-discuss] quota on secondary groups for a user?

2016-08-03 Thread Buterbaugh, Kevin L
, but if I need to I’ll test it on our test cluster later this week. Kevin On Aug 3, 2016, at 1:30 PM, Jaime Pinto <pi...@scinet.utoronto.ca<mailto:pi...@scinet.utoronto.ca>> wrote: Quoting "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vande

Re: [gpfsug-discuss] quota on secondary groups for a user?

2016-08-03 Thread Buterbaugh, Kevin L
a chgrp then GPFS should subtract from one group and add to another. Kevin On Aug 3, 2016, at 1:46 PM, Jonathan Buzzard <jonat...@buzzard.me.uk<mailto:jonat...@buzzard.me.uk>> wrote: On 03/08/16 19:06, Buterbaugh, Kevin L wrote: Hi Sven, Wait - am I misunderstanding something here

Re: [gpfsug-discuss] Snapshots / Windows previous versions

2016-07-06 Thread Buterbaugh, Kevin L
to blame that, but who knows. If (when) I find out I’ll let everyone know. Richard From: gpfsug-discuss-boun...@spectrumscale.org [ mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: 20 June 2016 15:56 To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.

Re: [gpfsug-discuss] Minor GPFS versions coexistence problems?

2016-08-15 Thread Buterbaugh, Kevin L
Richard, I will second what Bob said with one caveat … on one occasion we had an issue with our multi-cluster setup because the PTF’s were incompatible. However, that was clearly documented in the release notes, which we obviously hadn’t read carefully enough. While we generally do rolling

Re: [gpfsug-discuss] mmrepquota and group names in GPFS 4.2.2.x

2017-01-23 Thread Buterbaugh, Kevin L
re or less likely to be the default group of the owning UID? Can you translate the GID other ways? Like with ls? (I think this was in the original problem description, but I don’t remember the answer.) What if you just turn off nscd? -- Stephen On Jan 20, 2017, at 10:09 AM, Buterbaugh, Kevin

[gpfsug-discuss] mmlsquota output question

2017-01-26 Thread Buterbaugh, Kevin L
Hi All, We had 3 local GPFS filesystems on our cluster … let’s call them gpfs0, gpfs1, and gpfs2. gpfs0 is for project space (i.e. groups can buy quota in 1 TB increments there). gpfs1 is scratch and gpfs2 is home. We are combining gpfs0 and gpfs1 into one new filesystem (gpfs3) … we’re

Re: [gpfsug-discuss] 200 filesets and AFM

2017-02-20 Thread Buterbaugh, Kevin L
Hi Mark, Are you referring to this? http://www.spectrumscale.org/pipermail/gpfsug-discuss/2012-October/000169.html It’s not magical, but it’s pretty good! ;-) Seriously, we use it any time we want to move stuff around in our GPFS filesystems. Kevin On Feb 20, 2017, at 9:35 AM,

Re: [gpfsug-discuss] Manager nodes

2017-01-24 Thread Buterbaugh, Kevin L
Hi Simon, FWIW, we have two servers dedicated to cluster and filesystem management functions (and 8 NSD servers). I guess you would describe our cluster as small to medium sized … ~700 nodes and a little over 1 PB of storage. Our two managers have 2 quad core (3 GHz) CPU’s and 64 GB RAM.

[gpfsug-discuss] mmrepquota and group names in GPFS 4.2.2.x

2017-01-18 Thread Buterbaugh, Kevin L
Hi All, We recently upgraded our cluster (well, the servers are all upgraded; the clients are still in progress) from GPFS 4.2.1.1 to GPFS 4.2.2.1 and there appears to be a change in how mmrepquota handles group names in its output. I’m trying to get a handle on it, because it is messing
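
A quick way to separate a GPFS change from a name-resolution problem, sketched below; gpfs0 and GID 1234 are placeholders, and stopping nscd (suggested later in this thread) is only meant as a temporary test.

    mmrepquota -g gpfs0 | head       # group names as mmrepquota reports them
    getent group 1234                # what the OS itself resolves the GID to
    systemctl stop nscd              # temporarily rule out the name service cache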

Re: [gpfsug-discuss] mmrepquota and group names in GPFS 4.2.2.x

2017-01-19 Thread Buterbaugh, Kevin L
uota and group names in GPFS 4.2.2.x Sent by: gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org> Just leting know, I see the same problem with 4.2.2.1 version. mmrepquota resolves only some of group names. O

Re: [gpfsug-discuss] mmrepquota and group names in GPFS 4.2.2.x

2017-01-20 Thread Buterbaugh, Kevin L
Steve, I just opened a PMR - thanks… Kevin On Jan 20, 2017, at 8:14 AM, Steve Duersch > wrote: Kevin, Please go ahead and open a PMR. Cursorily, we don't know of an obvious known bug. Thank you. Steve Duersch Spectrum Scale 845-433-7902 IBM

Re: [gpfsug-discuss] mmrepquota and group names in GPFS 4.2.2.x

2017-01-19 Thread Buterbaugh, Kevin L
... Kevin On Jan 19, 2017, at 2:45 AM, Olaf Weiser <olaf.wei...@de.ibm.com<mailto:olaf.wei...@de.ibm.com>> wrote: have you checked, where th fsmgr runs as you have nodes with different code levels mmlsmgr From: "Buterbaugh, Kevin L" <kevin.

Re: [gpfsug-discuss] mmrepquota and group names in GPFS 4.2.2.x

2017-01-19 Thread Buterbaugh, Kevin L
too much time in a version-mismatch issue.. finish the rolling migration, especially RHEL .. and then we continue meanwhile - I'll try to find a way for me here to set up a 4.2.2 cluster cheers From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kev

Re: [gpfsug-discuss] quota on secondary groups for a user?

2016-08-04 Thread Buterbaugh, Kevin L
this clears the waters a bit. I still have to solve my puzzle. Thanks everyone for the feedback. Jaime Quoting "Jaime Pinto" <pi...@scinet.utoronto.ca<mailto:pi...@scinet.utoronto.ca>>: Quoting "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mail

Re: [gpfsug-discuss] big difference between output of 'mmlsquota' and 'du'?

2016-09-12 Thread Buterbaugh, Kevin L
Hi Alex, While the numbers don’t match exactly, they’re close enough to prompt me to ask if data replication is possibly set to two? Thanks… Kevin On Sep 12, 2016, at 2:03 PM, Alex Chekholko > wrote: Hi, For a fileset with a quota on it, we
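
A rough check of whether replication explains a roughly 2x gap between du and mmlsquota (filesystem name and path are placeholders):

    mmlsfs gpfs0 -r -R                    # default and maximum number of data replicas
    mmlsattr /gpfs0/somedir/somefile      # replication factors actually applied to a file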

Re: [gpfsug-discuss] Blocksize

2016-09-24 Thread Buterbaugh, Kevin L
Hi Sven, I am confused by your statement that the metadata block size should be 1 MB and am very interested in learning the rationale behind this as I am currently looking at all aspects of our current GPFS configuration and the possibility of making major changes. If you have a filesystem

[gpfsug-discuss] Fwd: Blocksize

2016-09-29 Thread Buterbaugh, Kevin L
Resending from the right e-mail address... Begin forwarded message: From: gpfsug-discuss-ow...@spectrumscale.org Subject: Re: [gpfsug-discuss] Blocksize Date: September 29, 2016 at 10:00:36 AM CDT To:

Re: [gpfsug-discuss] Blocksize

2016-09-27 Thread Buterbaugh, Kevin L
cy is something that can be answered conclusively though. yuri "Buterbaugh, Kevin L" ---09/24/2016 07:19:09 AM---Hi Sven, I am confused by your statement that the metadata block size should be 1 MB and am very int From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<

Re: [gpfsug-discuss] Tiers

2016-12-15 Thread Buterbaugh, Kevin L
system? From: <gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>> on behalf of "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> Reply-To: gpfsug main discussion list <gpf

Re: [gpfsug-discuss] Tiers

2016-12-15 Thread Buterbaugh, Kevin L
Hi Mark, We’re a “traditional” university HPC center with a very untraditional policy on our scratch filesystem … we don’t purge it and we sell quota there. Ultimately, a lot of that disk space is taken up by stuff that, let’s just say, isn’t exactly in active use. So what we’ve done, for

Re: [gpfsug-discuss] Upgrading kernel on RHEL

2016-11-29 Thread Buterbaugh, Kevin L
Hi Richard, I would echo the previous comment about having a test cluster where you at least do some basic functionality testing. Also, as I’m sure you’re well aware, a kernel upgrade - whether or not you’re upgrading GPFS versions - is an especially good idea on RHEL systems right now thanks

Re: [gpfsug-discuss] replicating ACLs across GPFS's?

2017-01-05 Thread Buterbaugh, Kevin L
Hi Jaime, IBM developed a patch for rsync that can replicate ACL’s … we’ve used it and it works great … can’t remember where we downloaded it from, though. Maybe someone else on the list who *isn’t* having a senior moment can point you to it… Kevin > On Jan 5, 2017, at 3:53 PM, Jaime Pinto
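
If the patched rsync can’t be located, a per-file fallback with the standard GPFS tools looks roughly like this (paths are placeholders):

    mmgetacl /gpfs0/source/file > /tmp/file.acl
    mmputacl -i /tmp/file.acl /gpfs1/dest/file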

Re: [gpfsug-discuss] Tiers

2016-12-19 Thread Buterbaugh, Kevin L
y to move data from SSD to HDD (and vice versa)? Do you nightly move large/old files to HDD or wait until the fast tier hit some capacity limit? Do you use QOS to limit the migration from SSD to HDD i.e. try not to kill the file system with migration work? Thanks, Brian Marshall
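
A sketch of the kind of policy rule this question is about; pool names, the thresholds, and the 90-day atime cutoff are all made up for illustration, and the --qos option on mmapplypolicy should be checked against your release.

    RULE 'ssd_to_hdd' MIGRATE FROM POOL 'ssd' THRESHOLD(80,60)
         WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
         TO POOL 'hdd'
         WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90

    mmapplypolicy gpfs0 -P tier.pol -I yes --qos maintenance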

Re: [gpfsug-discuss] translating /dev device into nsd name

2016-12-19 Thread Buterbaugh, Kevin L
server order. -- Stephen On Dec 19, 2016, at 10:58 AM, Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> wrote: Hi Ken, Umm, wouldn’t that make that server the primary NSD server for all those NSDs? Granted, you run the mmcrnsd command fr

Re: [gpfsug-discuss] translating /dev device into nsd name

2016-12-19 Thread Buterbaugh, Kevin L
Hi Ken, Umm, wouldn’t that make that server the primary NSD server for all those NSDs? Granted, you run the mmcrnsd command from one arbitrarily chosen server, but as long as you have the proper device name for the NSD from the NSD server you want to be primary for it, I’ve never had a
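
For later readers: the mapping between local device names and NSD names can be listed directly, roughly as below.

    mmlsnsd -m     # NSD name to local device name, per NSD server
    mmlsnsd -X     # extended view, including device type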

Re: [gpfsug-discuss] mmlsdisk performance impact

2016-12-20 Thread Buterbaugh, Kevin L
Hi Brian, If I’m not mistaken, once you run the mmlsdisk command on one client any other client running it will produce the exact same output. Therefore, what we do is run it once, output that to a file, and propagate that file to any node that needs it. HTHAL… Kevin On Dec 20, 2016, at

Re: [gpfsug-discuss] fix mmrepquota report format during grace periods

2017-03-28 Thread Buterbaugh, Kevin L
Ugh! Of course, I’m wrong … mmlsquota does support the “-Y” option … it’s just not documented. Why not? Kevin > On Mar 28, 2017, at 10:00 AM, Buterbaugh, Kevin L > <kevin.buterba...@vanderbilt.edu> wrote: > > Hi Bob, Jaime, and GPFS team, > > That’s great for m

Re: [gpfsug-discuss] fix mmrepquota report format during grace periods

2017-03-28 Thread Buterbaugh, Kevin L
Hi Bob, Jaime, and GPFS team, That’s great for mmrepquota, but mmlsquota does not have a similar option AFAICT. That has really caused me grief … for example, I’ve got a Perl script that takes mmlsquota output for a user and does two things: 1) converts it into something easier for them to

Re: [gpfsug-discuss] -Y option for many commands, precious few officially!

2017-03-28 Thread Buterbaugh, Kevin L
Agreed. From what has been said on this thread about “-Y” being unsupported and the command output could change from release to release … well, that’s a “known unknown” that can be dealt with. But the fact that “-Y” was completely undocumented (at least as far as mmrepquota / mmlsquota are
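
For anyone scripting against the colon-delimited -Y output: each command emits a HEADER row naming its fields, so it’s safer to discover the columns first than to hard-code positions (field names and order vary by command and release). A rough sketch:

    mmrepquota -u -Y gpfs0 | head -1 | tr ':' '\n' | cat -n    # numbered list of field names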

Re: [gpfsug-discuss] mmcrfs issue

2017-03-15 Thread Buterbaugh, Kevin L
>> https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_nsddevices.htm >> GPFS has a limited set of device search spec

[gpfsug-discuss] Can't delete filesystem

2017-04-05 Thread Buterbaugh, Kevin L
Hi All, First off, I can open a PMR on this if I need to… I am trying to delete a GPFS filesystem but mmdelfs is telling me that the filesystem is still mounted on 14 nodes and therefore can’t be deleted. 10 of those nodes are my 10 GPFS servers and they have an “internal mount” still
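
A rough way to see exactly which nodes still hold a mount (including the servers’ internal mounts); gpfs5 and the node names are placeholders.

    mmlsmount gpfs5 -L                    # every node that has the filesystem mounted
    mmumount gpfs5 -N testnsd1,testnsd2   # unmount it on specific nodes before mmdelfs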

Re: [gpfsug-discuss] mmapplypolicy didn't migrate everything it should have - why not?

2017-04-17 Thread Buterbaugh, Kevin L
you give the cluster-wide space reckoning protocols time to see the changes? mmdf is usually "behind" by some non-neglible amount of time. What else is going on? If you're moving or deleting or creating data by other means while mmapplypolicy is running -- it doesn't "know" abou

[gpfsug-discuss] RAID config for SSD's used for data

2017-04-19 Thread Buterbaugh, Kevin L
Hi All, We currently have what I believe is a fairly typical setup … metadata for our GPFS filesystems is the only thing in the system pool and it’s on SSD, while data is on spinning disk (RAID 6 LUNs). Everything connected via 8 Gb FC SAN. 8 NSD servers. Roughly 1 PB usable space. Now

[gpfsug-discuss] mmapplypolicy didn't migrate everything it should have - why not?

2017-04-16 Thread Buterbaugh, Kevin L
Hi All, First off, I can open a PMR for this if I need to. Second, I am far from an mmapplypolicy guru. With that out of the way … I have an mmapplypolicy job that didn’t migrate anywhere close to what it could / should have. From the log file I have it create, here is the part where it

Re: [gpfsug-discuss] mmapplypolicy didn't migrate everything it should have - why not?

2017-04-18 Thread Buterbaugh, Kevin L
%) 128.8G ( 0%) Ideas? Or is it time for me to open a PMR? Thanks… Kevin On Apr 17, 2017, at 4:16 PM, Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> wrote: Hi Marc, Alex, all, Thank you for the responses. To answer Alex’s que

Re: [gpfsug-discuss] mmapplypolicy didn't migrate everything it should have - why not?

2017-04-18 Thread Buterbaugh, Kevin L
Hi Marc, Two things: 1. I have a PMR open now. 2. You *may* have identified the problem … I’m still checking … but files with hard links may be our problem. I wrote a simple Perl script to iterate over the log file I had mmapplypolicy create. Here’s the code (don’t laugh, I’m a
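
A shell-only way to spot the hard-linked files that inflate a migration log, for anyone who doesn’t want to write the Perl; assumes GNU find and a placeholder path.

    find /gpfs23 -type f -links +1 -printf '%i %n %p\n' | sort -n | less    # inode, link count, path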

Re: [gpfsug-discuss] mmapplypolicy didn't migrate everything it should have - why not?

2017-04-19 Thread Buterbaugh, Kevin L
Hi All, I think we *may* be able to wrap this saga up… ;-) Dave - in regards to your question, all I know is that the tail end of the log file is “normal” for all the successful pool migrations I’ve done in the past few years. It looks like the hard links were the problem. We have one group

[gpfsug-discuss] mmcrfs issue

2017-03-10 Thread Buterbaugh, Kevin L
Hi All, We are testing out some flash storage. I created a couple of NSDs successfully (?): root@nec:~/gpfs# mmlsnsd -F File system   Disk name   NSD servers --- (free disk) nsd1 nec (free disk) nsd2

Re: [gpfsug-discuss] mmcrfs issue

2017-03-10 Thread Buterbaugh, Kevin L
allowWriteAffinity=no %pool: pool=gpfsdata blockSize=1M usage=dataOnly layoutMap=scatter allowWriteAffinity=no > On Mar 10, 2017, at 2:54 PM, valdis.kletni...@vt.edu wrote: > > On Fri, 10 Mar 2017 20:43:37 +, "Buterbaugh, Kevin L" said: > >> So I tried to create a

Re: [gpfsug-discuss] mmcrfs issue

2017-03-13 Thread Buterbaugh, Kevin L
> yourself using the user exit script at /var/mmfs/etc/nsddevices. >> From: gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org> [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Buterbaugh,
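
A minimal sketch of the /var/mmfs/etc/nsddevices user exit being discussed; it is modeled on the shipped sample at /usr/lpp/mmfs/samples/nsddevices.sample, which should be consulted for the exact output format and required exit status. The NVMe device pattern is just an example.

    #!/bin/ksh
    # emit "deviceName deviceType" pairs (relative to /dev) for devices GPFS should consider
    for dev in /dev/nvme?n1 ; do
        echo "${dev#/dev/} generic"
    done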

Re: [gpfsug-discuss] mmapplypolicy didn't migrate everything it should have - why not?

2017-04-17 Thread Buterbaugh, Kevin L
cified. Let's see where those steps take us... -- marc of Spectrum Scale (né GPFS) From:"Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> To:gpfsug main discussion list <gpfsug-discuss@spectrumscale.org<mailt

Re: [gpfsug-discuss] Quota and hardlimit enforcement

2017-07-31 Thread Buterbaugh, Kevin L
Jaime, That’s heavily workload dependent. We run a traditional HPC cluster and have a 7 day grace on home and 14 days on scratch. By setting the soft and hard limits appropriately we’ve slammed the door on many a runaway user / group / fileset. YMMV… Kevin On Jul 31, 2017, at 3:03 PM,

[gpfsug-discuss] patched rsync question

2017-05-10 Thread Buterbaugh, Kevin L
Hi All, We are using the patched version of rsync: rsync version 3.0.9 protocol version 30 Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others. Web site: http://rsync.samba.org/ Capabilities: 64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,

[gpfsug-discuss] Replication settings when running mmapplypolicy

2017-06-23 Thread Buterbaugh, Kevin L
Hi All, I haven’t been able to find this explicitly documented, so I’m just wanting to confirm that the behavior that I’m expecting is what GPFS is going to do in this scenario… I have a filesystem with data replication set to two. I’m creating a capacity type pool for it right now which
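
A spot-check after a policy-driven migration, to confirm the replication the file actually ended up with (names and path are placeholders):

    mmlsfs gpfs0 -r                      # default number of data replicas for the filesystem
    mmlsattr -L /gpfs0/path/to/file      # per-file replication factors and storage pool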

[gpfsug-discuss] SAN problem ... multipathd ... mmunlinkfileset ... ???

2017-06-15 Thread Buterbaugh, Kevin L
Hi All, I’ve got some very weird problems going on here (and I do have a PMR open with IBM). On Monday I attempted to unlink a fileset, something that I’ve done many times with no issues. This time, however, it hung up the filesystem. I was able to clear things up by shutting down GPFS on

Re: [gpfsug-discuss] SAN problem ... multipathd ... mmunlinkfileset ... ???

2017-06-15 Thread Buterbaugh, Kevin L
. Yes, correlation is not causation. And sometimes coincidences do happen. I’ll monitor to see if this is one of those occasions. Thanks… Kevin > On Jun 15, 2017, at 3:50 PM, Edward Wahl <ew...@osc.edu> wrote: > > On Thu, 15 Jun 2017 20:00:47 +0000 > "Buterbaugh,

Re: [gpfsug-discuss] Well, this is the pits...

2017-05-04 Thread Buterbaugh, Kevin L
e, if "number of nodes participating in the mmrestripefs" is 6 then adjust "mmchconfig pitWorkerThreadsPerNode=5 -N ". GPFS would need to be restarted for this parameter to take effect on the participating_nodes (verify with mmfsadm dump config | grep pitWorkerThreadsPe

[gpfsug-discuss] Well, this is the pits...

2017-05-04 Thread Buterbaugh, Kevin L
Hi All, Another one of those, “I can open a PMR if I need to” type questions… We are in the process of combining two large GPFS filesystems into one new filesystem (for various reasons I won’t get into here). Therefore, I’m doing a lot of mmrestripe’s, mmdeldisk’s, and mmadddisk’s. Yesterday
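
The tuning referenced in the reply above, in command form; the node list is a placeholder, and GPFS must be restarted on those nodes for the change to take effect.

    mmchconfig pitWorkerThreadsPerNode=5 -N nsd01,nsd02,nsd03
    mmfsadm dump config | grep -i pitWorker    # verify on each participating node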

[gpfsug-discuss] CCR cluster down for the count?

2017-09-19 Thread Buterbaugh, Kevin L
Hi All, We have a small test cluster that is CCR enabled. It only had/has 3 NSD servers (testnsd1, 2, and 3) and maybe 3-6 clients. testnsd3 died a while back. I did nothing about it at the time because it was due to be life-cycled as soon as I finished a couple of higher priority projects.

Re: [gpfsug-discuss] CCR cluster down for the count?

2017-09-21 Thread Buterbaugh, Kevin L
ed mmstartup to see if it teases out any more info from the error? Ed On Wed, 20 Sep 2017 16:27:48 + "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> wrote: Hi Ed, Thanks for the suggestion … that’s basically what I

Re: [gpfsug-discuss] CCR cluster down for the count?

2017-09-20 Thread Buterbaugh, Kevin L
recent 4.2 release nodes. Bob Oesterlin Sr Principal Storage Engineer, Nuance From: <gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>> on behalf of "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba.

[gpfsug-discuss] Permissions issue in GPFS 4.2.3-4?

2017-08-30 Thread Buterbaugh, Kevin L
Hi All, We have a script that takes the output of mmlsfs and mmlsquota and formats a user’s GPFS quota usage into something a little “nicer” than what mmlsquota displays (and doesn’t display 50 irrelevant lines of output for filesets they don’t have access to). After upgrading to 4.2.3-4 over

[gpfsug-discuss] GPFS 4.2.3.4 question

2017-08-26 Thread Buterbaugh, Kevin L
Hi All, Does anybody know if GPFS 4.2.3.4, which came out today, contains all the patches that are in GPFS 4.2.3.3 efix3? If anybody does, and can respond, I’d greatly appreciate it. Our cluster is in a very, very bad state right now and we may need to just take it down and bring it back up.

Re: [gpfsug-discuss] GPFS 4.2.3.4 question

2017-08-27 Thread Buterbaugh, Kevin L
bm.com> From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> To:gpfsug main discussion list <gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>> Date:08/26/2017 03:40 PM Subject

Re: [gpfsug-discuss] GPFS 4.2.3.4 question

2017-08-30 Thread Buterbaugh, Kevin L
> [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: 30 August 2017 14:55 To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>> Subject: Re: [gpfsug-discuss] GPFS 4.2.3.4 question Hi Richard,

Re: [gpfsug-discuss] GPFS 4.2.3.4 question

2017-08-29 Thread Buterbaugh, Kevin L
1 sto...@us.ibm.com<mailto:sto...@us.ibm.com> From:"Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> To:gpfsug main discussion list <gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscal

[gpfsug-discuss] Rainy days and Mondays and GPFS lying to me always get me down...

2017-10-23 Thread Buterbaugh, Kevin L
Hi All, And I’m not really down, but it is a rainy Monday morning here and GPFS did give me a scare in the last hour, so I thought that was a funny subject line. So I have a >1 PB filesystem with 3 pools: 1) the system pool, which contains metadata only, 2) the data pool, which is where all

Re: [gpfsug-discuss] 5.0 features?

2017-11-29 Thread Buterbaugh, Kevin L
Simon is correct … I’d love to be able to support a larger block size for my users who have sane workflows while still not wasting a ton of space for the biomedical folks…. ;-) A question … will the new, much improved, much faster mmrestripefs that was touted at SC17 require a filesystem that

Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread Buterbaugh, Kevin L
Hi All, Well, actually a year ago we started the process of doing pretty much what Richard describes below … the exception being that we rsync’d data over to the new filesystem group by group. It was no fun but it worked. And now GPFS (and it will always be GPFS … it will never be Spectrum

[gpfsug-discuss] mmbackup log file size after GPFS 4.2.3.5 upgrade

2017-12-14 Thread Buterbaugh, Kevin L
Hi All, 26 mmbackupDors-20171023.log 26 mmbackupDors-20171024.log 26 mmbackupDors-20171025.log 26 mmbackupDors-20171026.log 2922752 mmbackupDors-20171027.log 137 mmbackupDors-20171028.log 59328 mmbackupDors-20171029.log 2748095

Re: [gpfsug-discuss] FW: Spectrum Scale 5.0 now available on Fix Central

2017-12-18 Thread Buterbaugh, Kevin L
Hi All, GPFS 5.0 was announced on Friday … and today: IBM Spectrum Scale : IBM Spectrum Scale: NFS operations may fail with

Re: [gpfsug-discuss] Password to GUI forgotten

2017-12-18 Thread Buterbaugh, Kevin L
g-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>> on behalf of "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org<m

Re: [gpfsug-discuss] Password to GUI forgotten

2017-12-18 Thread Buterbaugh, Kevin L
scuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>> on behalf of Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> Sent: Wednesday, December 6, 2017 10:41:12 PM To: gpfsug main discussion list Subject

[gpfsug-discuss] Password to GUI forgotten

2017-12-06 Thread Buterbaugh, Kevin L
Hi All, So this is embarrassing to admit but I was playing around with setting up the GPFS GUI on our test cluster earlier this fall. However, I was gone pretty much the entire month of November for a combination of vacation and SC17 and the vacation was so relaxing that I’ve forgotten the

Re: [gpfsug-discuss] Password to GUI forgotten

2017-12-06 Thread Buterbaugh, Kevin L
From: <gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>> on behalf of "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> Reply-To: gpfsug main discussion list <gpfsug-discuss@spec

Re: [gpfsug-discuss] Password to GUI forgotten

2017-12-06 Thread Buterbaugh, Kevin L
n be changed via command line using chuser. /usr/lpp/mmfs/gui/cli/chuser Usage is as follows (where userID = admin) chuser userID {-p | -l | -a | -d | -g | --expirePassword} [-o ] Josh K On Dec 6, 2017, at 4:56 PM, Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu<ma
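
A concrete form of the usage quoted above, run on the GUI node (NewPassword is obviously a placeholder; "admin" is the account named in the quoted example):

    /usr/lpp/mmfs/gui/cli/chuser admin -p NewPassword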

[gpfsug-discuss] Not recommended, but why not?

2018-05-04 Thread Buterbaugh, Kevin L
Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers … but I’ve not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can

Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-04 Thread Buterbaugh, Kevin L
ems Lab Services Phone: 55-19-2132-4317 E-mail: ano...@br.ibm.com<mailto:ano...@br.ibm.com> - Original message ----- From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbi

Re: [gpfsug-discuss] Not recommended, but why not?

2018-05-07 Thread Buterbaugh, Kevin L
the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. sven On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> wrote: Hi All,

Re: [gpfsug-discuss] Node list error

2018-05-10 Thread Buterbaugh, Kevin L
What does `mmlsnodeclass -N ` give you? -B From:gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org> [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: Tuesday, May 08, 2018 1:24 PM To: gp

Re: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out

2018-05-11 Thread Buterbaugh, Kevin L
On the other hand, we are very excited by this (from the README): File systems: Traditional NSD nodes and servers can use checksums NSD clients and servers that are configured with IBM Spectrum Scale can use checksums to verify data integrity and detect network

[gpfsug-discuss] Node list error

2018-05-08 Thread Buterbaugh, Kevin L
Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 2018-05-08_12:16:39.719-0500: [I] Calling

Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7

2018-05-15 Thread Buterbaugh, Kevin L
All, I have to kind of agree with Andrew … it seems that there is a broad range of takes on kernel upgrades … everything from “install the latest kernel the day it comes out” to “stick with this kernel, we know it works.” Related to that, let me throw out this question … what about those who

[gpfsug-discuss] Capacity pool filling

2018-06-07 Thread Buterbaugh, Kevin L
Hi All, First off, I’m on day 8 of dealing with two different mini-catastrophes at work and am therefore very sleep deprived and possibly missing something obvious … with that disclaimer out of the way… We have a filesystem with 3 pools: 1) system (metadata only), 2) gpfs23data (the default
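
A rough way to keep an eye on how full each pool is while chasing this (pool names follow the thread; note that mmdf can lag the true state by a little while):

    mmdf gpfs23 -P gpfs23capacity
    mmdf gpfs23 -P gpfs23data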

Re: [gpfsug-discuss] Capacity pool filling

2018-06-07 Thread Buterbaugh, Kevin L
; I think the restore is bringing back a lot of material with atime > 90, so > it is passing through gpfs23data and going directly to gpfs23capacity. > > I also think you may not have stopped the crontab script as you believe you > did. > > Jaime > > Quoting

Re: [gpfsug-discuss] Capacity pool filling

2018-06-07 Thread Buterbaugh, Kevin L
7, 2018 at 8:17 AM -0600, "Buterbaugh, Kevin L" mailto:kevin.buterba...@vanderbilt.edu>> wrote: Hi All, First off, I’m on day 8 of dealing with two different mini-catastrophes at work and am therefore very sleep deprived and possibly missing something obvious … with tha

Re: [gpfsug-discuss] Capacity pool filling

2018-06-07 Thread Buterbaugh, Kevin L
Hi Uwe, Thanks for your response. So our restore software lays down the metadata first, then the data. While it has no specific knowledge of the extended attributes, it does back them up and restore them. So the only explanation that makes sense to me is that since the inode for the file

Re: [gpfsug-discuss] High I/O wait times

2018-07-03 Thread Buterbaugh, Kevin L
0-8821 sto...@us.ibm.com From:"Buterbaugh, Kevin L" To:gpfsug main discussion list Date:07/03/2018 03:49 PM Subject:[gpfsug-discuss] High I/O wait times Sent by:gpfsug-discuss-boun...@spectrumscale.org Hi all, We a

Re: [gpfsug-discuss] High I/O wait times

2018-07-03 Thread Buterbaugh, Kevin L
> From: "Buterbaugh, Kevin L" mailto:kevin.buterba...@vanderbilt.edu>> To:gpfsug main discussion list mailto:gpfsug-discuss@spectrumscale.org>> Date:07/03/2018 05:41 PM Subject:Re: [gpfsug-discuss] High I/O wait times Sent by: gpfsug

Re: [gpfsug-discuss] Password to GUI forgotten

2018-01-05 Thread Buterbaugh, Kevin L
PFS) team. From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> To:"Hanley, Jesse A." <hanle...@ornl.gov<mailto:hanle...@ornl.gov>> Cc:gpfsug main discussion list <gpfsug-discuss

Re: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS

2018-01-08 Thread Buterbaugh, Kevin L
Scale (GPFS) team. "Buterbaugh, Kevin L" ---01/04/2018 01:11:59 PM---Happy New Year everyone, I’m sure that everyone is aware of Meltdown and Spectre by now … we, like m From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.ed

Re: [gpfsug-discuss] GPFS best practises : end user standpoint

2018-01-17 Thread Buterbaugh, Kevin L
than.buzz...@strath.ac.uk<mailto:jonathan.buzz...@strath.ac.uk>> wrote: On Tue, 2018-01-16 at 16:35 +, Buterbaugh, Kevin L wrote: [SNIP] I am quite sure someone storing 1PB has to pay more than someone storing 1TB, so why should someone storing 20 million files not have to pay more than so

Re: [gpfsug-discuss] GPFS best practises : end user standpoint

2018-01-16 Thread Buterbaugh, Kevin L
Hi Jonathan, Comments / questions inline. Thanks! Kevin > On Jan 16, 2018, at 10:08 AM, Jonathan Buzzard > wrote: > > On Tue, 2018-01-16 at 15:47 +, Carl Zetie wrote: >> Maybe this would make for a good session at a future user group >> meeting -- perhaps

[gpfsug-discuss] mmchdisk suspend / stop

2018-02-08 Thread Buterbaugh, Kevin L
Hi All, We are in a bit of a difficult situation right now with one of our non-IBM hardware vendors (I know, I know, I KNOW - buy IBM hardware! ) and are looking for some advice on how to deal with this unfortunate situation. We have a non-IBM FC storage array with dual-“redundant”
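
For later readers, the two disk states being weighed here, in command form; the filesystem and NSD names are placeholders.

    mmchdisk gpfs23 suspend -d "nsd10;nsd11"   # no new block allocation, existing data still served
    mmchdisk gpfs23 stop -d "nsd10;nsd11"      # stop all I/O to these disks entirely
    mmchdisk gpfs23 resume -d "nsd10;nsd11"    # undo a suspend (use "start" after a stop)
    mmlsdisk gpfs23                            # confirm availability/status afterwards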

Re: [gpfsug-discuss] hdisk suspend / stop (Buterbaugh, Kevin L)

2018-02-08 Thread Buterbaugh, Kevin L
Thu, 8 Feb 2018 15:59:44 + > From: "Buterbaugh, Kevin L" > <kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu>> > To: gpfsug main discussion list > <gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>

Re: [gpfsug-discuss] mmchdisk suspend / stop

2018-02-09 Thread Buterbaugh, Kevin L
Hi All, Since several people have made this same suggestion, let me respond to that. We did ask the vendor - twice - to do that. Their response boils down to, “No, the older version has bugs and we won’t send you a controller with firmware that we know has bugs in it.” We have not had a

Re: [gpfsug-discuss] mmchdisk suspend / stop

2018-02-13 Thread Buterbaugh, Kevin L
were unaware that “major version” firmware upgrades could not be done live on our storage, but we’ve got a plan to work around this this time. Kevin > On Feb 13, 2018, at 7:43 AM, Jonathan Buzzard <jonathan.buzz...@strath.ac.uk> > wrote: > > On Fri, 2018-02-09 at 15:07 +, B

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-02 Thread Buterbaugh, Kevin L
Hi All, Thanks for all the responses on this, although I have the sneaking suspicion that the most significant thing that is going to come out of this thread is the knowledge that Sven has left IBM for DDN. ;-) or :-( or :-O depending on your perspective. Anyway … we have done some testing
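
To check what a GPFS 5 filesystem actually ended up with, roughly (the attribute labels in mmlsfs output vary slightly by release):

    mmlsfs gpfs5 -B                   # block size
    mmlsfs gpfs5 | grep -i subblock   # subblock size / subblocks per full block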

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-02 Thread Buterbaugh, Kevin L
kevin.buterba...@vanderbilt.edu<mailto:kevin.buterba...@vanderbilt.edu> - (615)875-9633 On Aug 2, 2018, at 3:31 PM, Buterbaugh, Kevin L mailto:kevin.buterba...@vanderbilt.edu>> wrote: Hi All, Thanks for all the responses on this, although I have the sneaking suspicion that the most signi

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-03 Thread Buterbaugh, Kevin L
arguments to mmcrfs. My apologies… Kevin On Aug 3, 2018, at 1:01 AM, Olaf Weiser mailto:olaf.wei...@de.ibm.com>> wrote: Can u share your stanza file ? Von meinem iPhone gesendet Am 02.08.2018 um 23:15 schrieb Buterbaugh, Kevin L mailto:kevin.buterba...@vanderbilt.edu>>: OK, so h

Re: [gpfsug-discuss] Power9 / GPFS

2018-07-27 Thread Buterbaugh, Kevin L
Hi Simon, Have you tried running it with the “—silent” flag, too? Kevin — Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education kevin.buterba...@vanderbilt.edu - (615)875-9633 On Jul
