We have on our DSS-G …
Have you looked at:
https://access.redhat.com/solutions/238533
?
Simon
From: on behalf of "Sobey, Richard
A"
Reply-To: "gpfsug-discuss@spectrumscale.org"
Date: Monday, 11 June 2018 at 11:46
To: "gpfsug-discuss@spectrumscale.org"
Subject: [gpfsug-discuss] RHEL
So … I have another question on support.
We’ve just ordered some Power 9 nodes, now my understanding is that with 7.4,
they require the -ALT kernel
(https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liaam/liaamdistros.htm)
which is 4.x based. I don’t see any reference in the Spectrum
b64 at the beginning of
LD_LIBRARY_PATH?
(Not to get started on using LD_LIBRARY_PATH in the first place…)
— Peter
> On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support)
wrote:
>
> Hi All,
>
> We have a file-set which is an
p.s.
I wasn’t sure if afmDirLookupRefreshInterval and afmFileLookupRefreshInterval
would be the right thing if it’s a file/directory that doesn’t exist?
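For reference, this is the sort of thing I mean, as a sketch only (fs and
fileset names are placeholders, and I believe the fileset may need to be
unlinked to change these):
  mmchfileset gpfs01 swroot -p afmDirLookupRefreshInterval=60
  mmchfileset gpfs01 swroot -p afmFileLookupRefreshInterval=60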
Simon
From: on behalf of "Simon Thompson
(IT Research Support)"
Reply-To: "gpfsug-discuss@spectrumscale.org"
Date: We
Hi All,
We have a file-set which is an AFM fileset and contains installed software.
We’ve been experiencing some performance issues with workloads when this is
running and think this is down to LD_LIBRARY_PATH being set to the software
installed in the AFM cache, e.g.
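(purely illustrative, the real path being an application tree in the cache
fileset:)
  export LD_LIBRARY_PATH=/gpfs/cache/apps/gcc/5.2.0/lib64:$LD_LIBRARY_PATH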
May 2018 21:37
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862
On 25/05/18 21:06, Simon Thompson (IT Research Support) wrote:
> Hi Richard,
>
> Ours run on 7.4 without issue. We had one upgrade to 7.5 packages
> (didn't reboot int
Hi Richard,
Ours run on 7.4 without issue. We had one upgrade to 7.5 packages (didn't
reboot into new kernel) and it broke, reverting it back to a 7.4 release fixed
it, so when support comes along, do it with care!
Simon
From:
You can change them using the normal SMB commands, from the appropriate bin
directory; whether this is supported is another matter.
We have one parameter set this way but I forgot which.
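If it helps, this is the sort of thing I mean, though I stress this is a
sketch and not the supported route (the parameter is just an example):
  /usr/lpp/mmfs/bin/net conf setparm global 'strict locking' 'no'
Where an option is actually exposed, I think mmsmb config change is the
supported way to set it.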
Simon
From: gpfsug-discuss-boun...@spectrumscale.org
I wondered if it came from the object RPMs maybe… I haven’t actually checked,
but I recall it was mentioned that 5.0.1 was bumping to the Pike Swift stack
(I think!), and that typically requires newer RPMs if using RDO packages, so
maybe it came in by that route?
Simon
From:
than 2 remote file systems.
Regards,
Lohit
On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support)
<s.j.thomp...@bham.ac.uk>, wrote:
You have been able to do this for some time, though I think it's only just
supported.
We've been exporting remote mounts since CES was added.
At some
Our charging model for disk storage assumes that a percentage of it is really
HSM’d, though in practice we aren’t heavily doing this.
My (personal) view on tape really is that anything on tape is FoC; that way
people can play games to recall/keep it hot if they want, but it eats their
FoC
Hi,
We are looking at providing an AFM cache of a home which has a number of cloned
files. From the docs:
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1ins_afmandafmdrlimitations.htm
· We can see that “The mmclone command is not
You have been able to do this for some time, though I think it's only just
supported.
We've been exporting remote mounts since CES was added.
At some point we've had two storage clusters supplying data and at least 3
remote file-systems exported over NFS and SMB.
One thing to watch, be
Hi all,
We'd like to move some data from a non replicated pool to another pool, but
keep replication at 1 (the fs default is 2).
When using an ILM policy, is the default to keep the current replication or use
the fs default?
I.e. just wondering if I need to include a "REPLICATE(1)" clause.
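If it does turn out to be needed, I assume the rule would look something like
this minimal sketch (rule and pool names are placeholders):
  RULE 'moveData' MIGRATE FROM POOL 'fastpool' TO POOL 'slowpool' REPLICATE(1)
which we'd dry-run first with mmapplypolicy gpfs01 -P move.pol -I test.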
My very unconsidered and unsupported suggestion would be to edit mmfsfuncs on
your test cluster and see if it’s actually implemented further in the code
Simon
From: on behalf of
"luke.raimb...@googlemail.com"
Reply-To:
Sorry, it was a typo from my side.
For the talks that are missing, we are chasing copies of the slides that we
can release.
Simon
From: on behalf of
"renar.grunenb...@huk-coburg.de"
Reply-To:
gpfsug-discuss-boun...@spectrumscale.org> on behalf of "Simon Thompson
(IT Research Support)" <s.j.thomp...@bham.ac.uk>
Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: Friday, April 13, 2018 at 3:06 PM
To: "gpfsug-discuss@spectrum
Yeah, that was my thought too, given Bob said you can update the server list
for an NSD device in 5.0. I also thought that bringing up a second NIC and
changing the name etc. could bring a whole world of danger from having split
routing and rp_filter (been there, had the weirdness, RDMA traffic
It’s now just two weeks until the UK meeting and we are down to our last few
places available. If you were planning on attending, please register now!
Simon
From: on behalf of
"ch...@spectrumscale.org"
Reply-To:
John,
I think we might need the decrypt key ...
Simon
On 26/03/2018, 13:29, "gpfsug-discuss-boun...@spectrumscale.org on behalf of
john.hea...@asml.com" wrote:
Fbeel Tnergu. Pnaabg nqq nalguvta hfrshy urer.
Not always true.
1. Use them with socket licenses, as HAWC or LROC is OK on a client.
2. Have Data Management Edition and capacity-license the amount of storage.
Simon
From: gpfsug-discuss-boun...@spectrumscale.org
I would look at using LROC and possibly using HAWC ...
Note you need to be a bit careful with HAWC client side and failure group
placement.
Simon
On 14/03/2018, 09:28, "gpfsug-discuss-boun...@spectrumscale.org on behalf of
xhejt...@ics.muni.cz"
I can’t comment on mmfind vs perl, but have you looked at trying “tsfindinode” ?
Simon
From: on behalf of "Buterbaugh,
Kevin L"
Reply-To: "gpfsug-discuss@spectrumscale.org"
Date:
We are looking at setting a value for tscCmdPortRange so that we can apply
firewalls to a small number of GPFS nodes in one of our clusters.
The docs don’t give an indication of the number of ports required to be in
the range. Could anyone make a suggestion on this?
It doesn’t appear
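For concreteness, what we are planning to set is something like the following,
where the width of the range is purely a guess on my part:
  mmchconfig tscCmdPortRange=30000-30100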
Is it an AFM cache? We see this sort of behaviour occasionally where the cache
has an "old" view of the directory. Doing an ls, it evidently goes back to home
but by then you already have weird stuff. The next ls is usually fine.
Simon
From:
If you mean adding storage shelves to increase capacity on an ESS, then no, I
don't believe it is supported. I think it is supported on the Lenovo DSS-G
models, though you have to have a separate DA for each shelf increment, so the
performance may differ between an upgraded vs complete
This is assuming you directly have “IBM” licenses (as opposed to OEM licenses
where the route is different … or where the licenses are held by your VAR
rather than you …)
You need to have an IBM account which is attached to a Passport Advantage
(PA) site that has current support for the
Maybe it would have been a good idea to make this clear in the “What’s new in
5.0” slide decks used at SC. I don’t recall it being there. And the lack of
forward public notification on this is not great, particularly for those not in
NYC. Sure most of my clusters are on EL7 now, but I still
Stuart,
Have you looked at using filesets instead, and using fileset quotas to
achieve this?
This is what we do and the max number of filesets (currently) isn't an issue
for us.
Simon
From: gpfsug-discuss-boun...@spectrumscale.org
The FAQ at:
https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux
lists support for (e.g.) Ubuntu 16.04.2 with kernel 4.4.0-62, so it would
likely work with a build-your-own kernel, but that doesn’t mean it is
**supported**
Simon
On 04/12/2017, 09:52,
Um no, you are talking GPFS protocol between cluster nodes still in
multicluster. Contact nodes are where the remote cluster goes to start with,
but after that it's just normal node to node gpfs traffic (not just the contact
nodes).
At least that is my understanding.
If you want traffic
You can in place upgrade.
I think what people are referring to is likely things like the new sub block
sizing for **new** filesystems.
Simon
From: gpfsug-discuss-boun...@spectrumscale.org
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of
We were looking at adding some callbacks to notify us when file-sets go
over their inode limit by implementing it as a soft inode quota.
In the docs:
https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1adm_mmaddcallback.htm#mmaddcallback__Table1
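What we had in mind is roughly the following sketch, assuming
softQuotaExceeded fires for fileset inode soft limits; the script path is
ours, and the %parms need checking against Table 1:
  mmaddcallback filesetInodeAlert --command /usr/local/sbin/inode_alert.sh \
    --event softQuotaExceeded --parms "%eventName %fsName %filesetName"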
Just picking up this old thread, but...
October updates:
https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux
7.4 is now listed as supported with min scale version of 4.1.1.17 or
4.2.3.4
(incidentally 4.2.3.5 looks to have been released today).
Simon
On
The slides from the Manchester meeting are at:
http://files.gpfsug.org/presentations/2017/Manchester/09_licensing-update.pdf
We moved all of our socket licenses to per TB DME earlier this year, and
then also have DME per drive for our Lenovo DSS-G system, which for
various reasons is in a
Yes I get we should only be doing this if we think we have a problem.
And the answer is, right now, we're not entirely clear.
We have a couple of issues our users are reporting to us, and it's not clear
to us whether they are related, an FS problem, or ACLs getting in the way.
We do have users who are
I'm just wondering if anyone could share any views on checking a
file-system for errors.
For example, we could use mmfsck in online and offline mode. Does online
mode detect (but not fix) errors that would be found in offline mode?
And then where does mmrestripefs -c fit into this?
"-c
I think you really want a PMR for this. There are some files you could
potentially edit and copy around, but given it's cluster configuration, I
wouldn't be doing this on a cluster I cared about without explicit instruction
from IBM support.
So I suggest logging a ticket with IBM.
Simon
From:
rds-projects-2017 file IDs [5389550.5389550.-1.-1,R] name remote
>error 5
Traces are needed to verify IO errors. Also try disabling the parallel IO and
see if replication speed improves.
mmchfileset device fileset -p afmParallelWriteThreshold=disable
~Venkat (vpuvv...@in.ibm.com)
e "-i/-I" options?
>
>-Aaron
>
>On 10/10/17 8:30 AM, Simon Thompson (IT Research Support) wrote:
>> Yes, but obviously only when you recycle mmfsd on the node after adding
>>it
>> to the node class, e.g. page pool cannot be changed online.
>>
>>
Use mmchconfig and change the defaults, and then have a node class for
"not the defaults"?
Apply settings to a node class and add all new clients to the node class?
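As a sketch (the class name and values are placeholders):
  mmcrnodeclass computeNodes -N client01,client02
  mmchconfig pagepool=4G,maxFilesToCache=128000 -N computeNodes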
Note there was some version of Scale where node classes were enumerated
when the config was set for the node class, but in (4.2.3
Hi All,
We're having fun (ok not fun ...) with AFM.
We have a file-set where the queue length isn't shortening, watching it
over 5 sec periods, the queue length increases by ~600-1000 items, and the
numExec goes up by about 15k.
The queues are steadily rising and we've seen them over 100
fix packs on top of that (which I am
>in fact doing now).
>
>John H
>
>
>
>
>
>-----Original Message-----
>From: gpfsug-discuss-boun...@spectrumscale.org
>[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Simon
>Thompson (IT Research Support)
>Sent: Thu
Aren't listed as tested
Sorry ...
4.2.3.4 we have used with 7.4 as well, efix2 includes a fix for an AFM
issue we have.
Simon
On 28/09/2017, 15:36, "kenneth.waege...@ugent.be"
<kenneth.waege...@ugent.be> wrote:
>
>
>On 28/09/17 16:23, Simon Thompson (IT Research S
The 7.4 kernels are listed as having been tested by IBM.
Having said that, we have clients running the 7.4 kernel and it's OK, but we
are on 4.2.3.4 efix2, so bump versions...
Simon
On 28/09/2017, 15:18, "gpfsug-discuss-boun...@spectrumscale.org on behalf
of Jeffrey R. Lang"
Filesets in filesets are fine. BUT if you use scoped backups with TSM... Er
Spectrum Protect, then there are restrictions on creating an IFS inside an IFS
...
Simon
From:
>
on behalf of
modes does not support preallocation of blocks
(ex. using fallocate()). This feature is supported only in AFM DR.
~Venkat (vpuvv...@in.ibm.com)
From:"Simon Thompson (IT Research Support)"
<s.j.thomp...@bham.ac.uk<mailto:s.j.thomp...@bham.ac.u
n of blocks
(ex. using fallocate()). This feature is supported only in AFM DR.
~Venkat (vpuvv...@in.ibm.com)
From: "Simon Thompson (IT Research Support)"
<s.j.thomp...@bham.ac.uk>
To: gpfsug main d
OK so I checked, and if I run directly on the "AFM" FS in a different "non
AFM" directory, it works fine, so it's something AFM related ...
Simon
On 23/08/2017, 11:11, "gpfsug-discuss-boun...@spectrumscale.org on behalf
of Simon Thompson (IT Research Suppo
We're using an AFM cache from our HPC nodes to access data in another GPFS
cluster, mostly this seems to be working fine, but we've just come across
an interesting problem with a user using gfortran from the GCC 5.2.0
toolset.
When linking their code, they get a "no space left on device" error
https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.2/com.ibm.spectrum.scale.v4r22.doc/bl1adv_cnfssetup.htm
>
>
>
>
> On Sun, Aug 6, 2017 at 12:10 PM, Simon Thompson (IT Research Support)
> <s.j.thomp...@bham.ac.uk> wrote:
>> Under the hood, the NFS services are provided by IBM supplied Ganesha rpms.
>> It's then
Under the hood, the NFS services are provided by IBM-supplied Ganesha rpms.
It's then fully supported by IBM, e.g. the GPFS VFS layer to handle locking,
ACLs, quota etc...
Note it's different from using the cnfs support in Spectrum Scale which uses
Kernel NFS AFAIK. Using user space Ganesha
So just following up on my questions from January.
We tried to do 2. I.e. Restore to a new file-system with different block sizes.
It got part way through creating the file-sets on the new SOBAR file-system and
then GPFS asserts and crashes... We weren't actually intentionally trying to
move
Hi all,
Members of this group may be particularly interested in the role "Research
Storage Systems Senior Specialist/Specialist"...
As part of the University of Birmingham's investment in our ability to
support outstanding research by providing technical computing facilities,
we are expanding
IBM code comes from either IBM Passport Advantage (where you sign in with
a corporate account that lists your product associations), or from IBM Fix
Central (google it). Fix Central is supposed to be for service updates.
Given the lack of experience, you may want to look at the install toolkit
AFAIK. Always.
We have had the service eat itself BTW by having different code releases and
trying this.
Yes, it's a PITA that we have to get a change approval for it (so we don't do
it as often as we should)...
The upgrade process upgrades the SMB registry, we have also seen the CTDB lock
There used to be issues with the CX-3 cards and specific ports for if you
wanted to use IB and Eth, but that went away in later firmwares, as did a whole
load of bits with it being slow to detect media type, so see if you are running
an up to date Mellanox firmware (assuming it's a VPI card).
, but in the case where you want to remove
a node from service but keep it running for testing it’s not ideal.
I think you can set the IP address balancing policy to none which might do what
we want.
From:
gpfsug-discuss-boun...@spectrumscale.org
mmces node suspend -N
Is what you want. This will move the address and stop it being assigned one,
otherwise the rebalance will occur. I think you can change the way it balances,
but the default is to distribute.
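As a sketch (the node name is a placeholder, and check that "none" is a valid
policy on your release):
  mmces node suspend -N ces01
  mmces address policy none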
Simon
From:
We've seen exactly this behaviour.
Removing and readding the lroc nsd device worked for us.
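For us that was roughly the following, as a sketch only (NSD, node and device
names are placeholders, so check the LROC docs before copying this):
  mmdelnsd node1_lroc
  # lroc.stanza contains:
  #   %nsd: device=/dev/sdx nsd=node1_lroc servers=node1 usage=localCache
  mmcrnsd -F lroc.stanza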
Simon
From: gpfsug-discuss-boun...@spectrumscale.org
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of d...@milk-vfx.com
[d...@milk-vfx.com]
Sent: 05 June
Hi All,
I forgot that I never circulated, as part of the RCUK Working Group on
Cloud, we produced a report on using Scale with Cloud/Undercloud ...
You can download the report from:
https://cloud.ac.uk/reports/spectrumscale/
We had some input from various IBM people whilst writing, and bear
"gpfsug-discuss-boun...@spectrumscale.org on behalf of
Simon Thompson (IT Research Support)" <s.j.thomp...@bham.ac.uk> wrote:
Hi All,
Normally we never use the install toolkit, but deploy GPFS from a config
Hi All,
Normally we never use the install toolkit, but deploy GPFS from a config
management tool.
I see there are now RPMs such as gpfs.license.dm, are these actually
required to be installed? Everything seems to work well without them, so
just interested. Maybe the GUI uses them?
Thanks
Simon
Just following up on some discussions we had at the UG this week. I
mentioned a few weeks back that we were having issues with failover of
NFS, and we figured a work around to our clients for this so that failover
works great now (plus there is some code fixes coming down the line as
well to
to be restarted.
You need to also turn HAWC on as well; I assume you just left this out of the
email, as just changing the log size doesn't turn it on :-)
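From memory that is something like the following (the fs name and threshold
value are placeholders):
  mmchfs gpfs01 --write-cache-threshold 64K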
On Thu, May 4, 2017 at 6:15 AM Simon Thompson (IT Research Support)
<s.j.thomp...@bham.ac.uk> wrote:
Hi,
I hav
Hi All,
Just to note that we need to send final numbers of the venue today for lunches
etc, so if you are planning to attend, please register NOW! (otherwise you
might not get lunch/entry to the evening event)
Thanks
Simon
From:
discussion list
Subject: Re: [gpfsug-discuss] NFS issues
> On 2017 Apr 26 Wed, at 16:20, Simon Thompson (IT Research Support)
> <s.j.thomp...@bham.ac.uk> wrote:
>
> Nope, the clients are all L3 connected, so not an arp issue.
...not on the client, but the server-facing L3
>From: gpfsug-discuss-boun...@spectrumscale.org
>[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Simon
>Thompson (IT Research Support)
>Sent: Wednesday, 26 April 2017 3:31 AM
>To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
>Subject
to netstat, but these connections were not visible from
netstat on NFS-server side.
Unfortunately I don't remember what the fix was..
-jf
Tue 25 Apr 2017 at 16:06, Simon Thompson (IT Research Support)
<s.j.thomp...@bham.ac.uk> wrote:
Hi,
>From wha
see more debug information with tcpdump.
>
>
>Kind regards,
>
>Jaap Jan Ouwehand
>ICT Specialist (Storage & Linux)
>VUmc - ICT
>E: jj.ouweh...@vumc.nl
>W: www.vumc.com
>
>
>
>-----Original message-----
>From: gpfsug-discuss-boun...@spectru
Hi,
We have recently started deploying NFS in addition to our existing SMB
exports on our protocol nodes.
We use a RR DNS name that points to 4 VIPs for SMB services and failover
seems to work fine with SMB clients. We figured we could use the same name
and IPs and run Ganesha on the protocol