So I have an ESS today and it's nearing end of life (just our own
timeline/depreciation etc) and I will be purchasing a new ESS. I'm working
through the logistics of this. Here is my thinking so far:
This is just a big Data Lake and not an HPC environment
Option A
Purchase new ESS and set it
I have a test cluster I set up months ago and then did nothing with. Now I need
it again, but for the life of me I can't remember the admin password to the GUI.
Is there an easy way to reset it under the covers? I would hate to uninstall
everything and start over. I can certainly admin
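If it's only the GUI admin account that's lost (not cluster access), the GUI ships its own user-management CLI separate from the mm* commands, and resetting the password there should beat reinstalling. A sketch, assuming the default admin account and the 4.2.x GUI layout (verify path and syntax against your release):

```shell
# On the GUI node; these are GUI CLI tools, not cluster mm* commands.
cd /usr/lpp/mmfs/gui/cli
./lsuser                       # list GUI users to confirm the account name
./chuser admin -p NewPassw0rd  # set a new password for 'admin'
```

No reinstall or GUI service rebuild should be needed after this.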
Is it possible (albeit not advisable) to mirror LUNs that are NSDs to another
storage array at another site, basically for DR purposes? Once it's mirrored to
a new cluster elsewhere, what would be the steps to get the filesystem back up
and running? I know that AFM-DR is meant for this but in
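For what it's worth, storage-level replication of NSD LUNs can work as a crude DR mechanism, since the filesystem descriptors live on the NSDs themselves; the catch is getting the filesystem definition into the DR cluster. A hedged sketch using mmexportfs/mmimportfs (the filesystem name gpfs_fs and file paths are placeholders, and NSD server names in the export file must be reconciled with the DR cluster's nodes):

```shell
# On the production cluster: export the filesystem definition.
# Repeat whenever the configuration changes (or keep it in sync
# with mmfsctl syncFSconfig).
mmexportfs gpfs_fs -o /root/gpfs_fs.exp
# ...ship gpfs_fs.exp to the DR site out of band...

# At the DR site, after breaking the mirror and presenting the
# replicated LUNs to the standby cluster's NSD servers:
mmimportfs gpfs_fs -i /root/gpfs_fs.exp
mmmount gpfs_fs -a
```

The big caveat is consistency: the mirror must be crash-consistent across every LUN in the filesystem at the same instant, or the import will at best come up needing log recovery. That is exactly the gap AFM-DR is designed to close.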
I've noticed this in my test cluster, in both 4.2.3.4 and 5.0.0.0: in the GUI
on the monitoring screen, with the default view, the NSD Server Throughput
graph shows "Performance Collector did not return any data". I've seen that for
other items before (SMB, for example) but never for NSD.
I have a Windows 10 machine that is part of my local domain. I have a separate
Spectrum Scale test cluster that has local LDAP (not part of my AD domain) and
CES (NFS/SMB) running. I cannot get my local workstation to connect to an SMB
share at all. When I get the logon prompt I'm using IBM
TS(creation_time) || '|' ||
char(misc_attributes,1) || '|'
)
-- cut here --
Bob Oesterlin
Sr Principal Storage Engineer, Nuance
Does someone already have a policy that can extract typical file audit items
(user_id, last written, opened/accessed, modified, deleted, etc.)? Or am I
barking up the wrong tree; is there a better way to get this type of data from
a Spectrum Scale filesystem?
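For reference, a minimal LIST rule along those lines, using attribute names from the ILM policy language (adjust the SHOW clause to taste). One caveat: a policy scan only sees files that still exist, so deletions need file audit logging or lightweight events instead:

```
/* policy.rules: dump uid, access/modify times, and size, pipe-delimited */
RULE 'audit' LIST 'allfiles'
  SHOW( VARCHAR(USER_ID)           || '|' ||
        VARCHAR(ACCESS_TIME)       || '|' ||
        VARCHAR(MODIFICATION_TIME) || '|' ||
        VARCHAR(FILE_SIZE) )
```

Run it with something like `mmapplypolicy gpfs_fs -P policy.rules -f /tmp/audit -I defer`, which writes the list to a file under that prefix rather than executing any action.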
Mark
This message (including
I'd keep looking at the systems with access to those LUNs and see what
commands/operations could have been run.
I have a client who has had an issue where all of the NSD disks disappeared in
the cluster recently. Not sure if it's due to a back-end disk issue or a reboot
that did it. But in their PMR they were told that all that data is lost now and
that the disk headers didn't appear as GPFS disk
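Before accepting "data is lost", it may be worth checking what the devices themselves say. A sketch (the device name /dev/sdX is a placeholder, and readdescraw is an unsupported diagnostic, so treat its output with care):

```shell
# Does the cluster still map devices to NSDs anywhere?
mmlsnsd -X                         # "(not found)" entries point at visibility problems
# Read the on-disk NSD descriptor straight off a device (read-only):
mmfsadm test readdescraw /dev/sdX
```

If the descriptors really are gone, a common culprit is a node that re-labeled or partitioned the LUNs on reboot, which matches the symptom described.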
Ilan, you must create some type of authentication mechanism for CES to work
properly first. If you want a quick and dirty way that would just use your
local /etc/passwd try this.
/usr/lpp/mmfs/bin/mmuserauth service create --data-access-method file --type userdefined
Mark
I’m looking to improve performance of the SMB stack. My workload unfortunately
has smallish files, but in total it will still be a large amount of data. I’m
wondering if LROC/HAWC would be one way to speed things up. Is there a
precedent for using these with protocol nodes in a cluster? Anyone else
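On the LROC half of that question: LROC is defined per node as an NSD on a local flash device with usage=localCache, so it can sit directly on a protocol node. A sketch of the stanza (the node name ces1 and the device are placeholders):

```
# lroc.stanza -- local read-only cache on protocol node ces1
%nsd: device=/dev/sdx
  nsd=ces1_lroc1
  servers=ces1
  usage=localCache
```

Then `mmcrnsd -F lroc.stanza` creates it and the node starts caching into it. Whether it helps an SMB small-file workload depends on whether reads repeat; HAWC targets small synchronous writes, which is a different bottleneck.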
For what it’s worth, I have been running 4.2.3 DM for a few weeks in a test lab
and didn’t have the gpfs.license.dm package installed and everything worked
fine (GUI, CES, etc, etc). Here’s what rpm says about itself
[root@node1 ~]# rpm -qpl gpfs.license.dm-4.2.3-0.x86_64.rpm
/usr/lpp/mmfs
I have a customer who is struggling (they already have a PMR open and it’s
being actively worked on now). I’m simply seeking understanding of potential
places to look. They have an ESS with a few CES nodes in front. Clients
connect via SMB to the CES nodes. One fileset has about 300k
[gpfsug-discuss] Perfmon and GUI
No worries Mark. We don’t use NFS here (yet) so I can’t help there.
Glad I could help.
Richard
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Mark Bush
Sent: 25 April 2017 15:29
To: gpfsug main discussion list
From: Mark Bush <mark.b...@siriuscom.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: Tuesday, April 25, 2017 at 9:13 AM
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] Perfmon and GUI
Richard
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Mark Bush
Sent: 25 April 2017 14:28
To: gpfsug-discuss@spectrumscale.org
Subject: [gpfsug-discuss] Perfmon and GUI
Does anyone know why, in the GUI, when I go to look at things like nodes,
select a protocol node, and then pick NFS or SMB, it shows the boxes where a
graph is supposed to be, but with a red circled X and the message “Performance
collector did not return any data”?
I’ve added the things from the link
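When a GUI panel shows that message, it usually means the collector has no data for that metric rather than that the GUI itself is broken. A few checks I'd try first (node roles assumed):

```shell
# Are the sensor and collector daemons up?
systemctl status pmsensors     # on each monitored node
systemctl status pmcollector   # on the collector / GUI node

# Is the perfmon configuration sane, and is data actually arriving?
mmperfmon config show
mmperfmon query cpu_user       # any known metric; empty output = no data flow
```

Note that the NFS/SMB panels additionally need the protocol sensors (NFSIO, SMBStats, etc.) enabled in the sensor configuration, not just the base node sensors.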
trs" shows file allocation size as zero if data prefetch has not yet completed
on them.
~Venkat (vpuvv...@in.ibm.com)
From: Mark Bush <mark.b...@siriuscom.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: 04/06/2017 07:24 AM
Subject: