I'm in Lab Services at IBM - just joining and happy to help any way I can.
Kevin D. Johnson, MBA, MAFM
Spectrum Computing, Senior Managing Consultant
IBM Certified Deployment Professional - Spectrum Scale V4.1.1
IBM Certified Deployment Professional - Cloud Object Storage V3.8
720.349.6199 -
All,
If I set up a filesystem to have data replication of 2 (two copies of data),
does the data get replicated at the NSD server or at the client? I.e., does
the client send two copies over the network, or does the NSD server get a
single copy and then replicate it across the storage NSDs?
I couldn't find a definitive answer in the documentation.
The NSD client handles the replication and will, as you stated, write one copy
to one NSD (using the primary server for this NSD) and one to a different NSD
in a different GPFS failure group (using quite likely, but not necessarily, a
different NSD server that is the primary server for this second NSD).
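For completeness, a minimal sketch of how replication of 2 is set at filesystem
creation time (device name and stanza file are made up for the example):

  # create a filesystem with default and maximum data/metadata replicas of 2;
  # disks.stanza would be an NSD stanza file with NSDs in two failure groups
  mmcrfs gpfs1 -F disks.stanza -m 2 -M 2 -r 2 -R 2

  # verify the replication settings afterwards
  mmlsfs gpfs1 -m -M -r -R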
Does anyone know if/when we might see GPFS Native RAID opened up for the
masses on non-IBM hardware? It's hard to answer the question of "why
can't GPFS do this? Lustre can" with regard to Lustre's integration with
ZFS and support for RAID on commodity hardware.
-Aaron
--
Aaron Knister
NASA
Thanks Christopher. I've tried GPFS on zvols a couple of times, and the
write throughput I get is terrible because of the required sync=always
parameter. Perhaps a couple of SSDs could help get the number up, though.
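For anyone trying the same thing, a rough sketch of the setup I mean (pool and
device names made up):

  # create a zvol to back a GPFS NSD; GPFS needs synchronous write semantics
  zfs create -V 500G -o sync=always tank/gpfs-nsd0

  # add a mirrored SSD log device (SLOG) so the sync writes land on flash
  # instead of spinning disk; this is where I'd hope to get the number up
  zpool add tank log mirror /dev/sdx /dev/sdy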
-Aaron
On 8/30/16 12:47 PM, Christopher Maestas wrote:
Interestingly enough, Spectrum Scale can run on zvols. Check out:
http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf
-cdm
On Aug 30, 2016, 9:17:05 AM, aaron.s.knis...@nasa.gov wrote:
From: aaron.s.knis...@nasa.gov
To: gpfsug-discuss@spectrumscale.org
Thanks. This confirms the numbers that I am seeing.
Brian
On Tue, Aug 30, 2016 at 2:50 PM, Laurence Horrocks-Barlow <
laure...@qsplace.co.uk> wrote:
> It's the client that does all the synchronous replication; this way the
> cluster is able to scale, as the clients do the leg work (so to speak).
RHEL 6.8 (kernel 2.6.32-642) requires GPFS 4.1.1.8 or 4.2.1. You can either go back to RHEL 6.7 for GPFS 3.5 or bump it up to RHEL 7.0/7.1.
See Table 13, here:
http://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html?view=kc#linuxq
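To check what you're currently on before deciding (package names can vary a bit
by install):

  # running kernel
  uname -r

  # installed GPFS packages
  rpm -qa | grep -i gpfs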
Kevin D. Johnson, MBA, MAFM
Spectrum Computing, Senior Managing Consultant
Hello,
On Mon, Aug 29, 2016 at 09:20:46AM +0200, Frank Kraemer wrote:
> Find the paper here:
>
> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Petascale%20Data%20Protection
Thank you for the paper, I appreciate it.
However, I wonder
In the message dated: Tue, 30 Aug 2016 22:39:18 +0200,
The pithy ruminations from Lukas Hejtmanek on
<[gpfsug-discuss] GPFS 3.5.0 on RHEL 6.8> were:
=> Hello,
GPFS 3.5.0-[23..30] works for me under [CentOS|ScientificLinux] 6.8,
but only with kernel 2.6.32-573 and lower.
I've found kernel bugs in
Thanks for reading the paper. I agree that the restore of a large number of
files is a challenge today. Restore is the focus area for future
enhancements to the integration between IBM Spectrum Scale and IBM
Spectrum Protect. If something becomes available that helps to improve the
restore
there are multiple dependencies here. as a rule of thumb, the performance of
an MD scan, i.e. the total amount of IOPS you need to scan your MD, is highly
dependent on the metadata blocksize, the inode size (assuming the default 4K),
and the total number of inodes ;-) so the time it takes is hard to answer in
the abstract.
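as a rough back-of-the-envelope illustration of that rule of thumb (all numbers
invented for the example):

  100M inodes x 4 KiB/inode            = ~400 GiB of inode data to read
  400 GiB / 256 KiB metadata blocksize = ~1.6M block reads
  1.6M reads / 50,000 IOPS             = ~32 seconds of pure read time, best case

real scans take longer than that floor, since the reads aren't perfectly
sequential and the policy engine does more than just read inodes.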
so let's start with some simple questions:

when you say mmbackup takes ages, what version of GPFS code are you running?
how do you execute the mmbackup command? exact parameters would be useful.
what HW are you using for the metadata disks?
how much capacity (df -h) and how many inodes (df -i)?
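for the second question, I mean something along these lines (filesystem path,
TSM server name, and node class below are hypothetical):

  mmbackup /gpfs/fs1 -t incremental -s /tmp --tsm-servers TSMSERVER1 -N backupnodes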
Just want to add on to one of the points Sven touched on regarding metadata HW.
We have a modest SSD infrastructure for our metadata disks, and we can scan 500M
inodes in parallel in about 5 hours, if my memory serves me right (and I believe
we could go faster if we really wanted to). I think
Hi all,
It's Tuesday morning and that means question time :)
So from
http://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adv.doc/bl1adv_cesnetworkconfig.htm,
I've extracted the following:
How to use an alias
To use an alias address for CES, you need to provide
You only need a static address for your ifcfg-ethX on all nodes, and can
then have CES manage multiple floating addresses in that subnet.
Also, it doesn't matter much what your interfaces are named (ethX, vlanX,
bondX, ethX.5); GPFS will just find the interface that covers the floating
address in its subnet.
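A minimal sketch with made-up addresses:

  # /etc/sysconfig/network-scripts/ifcfg-eth0 on each CES node: one static address
  DEVICE=eth0
  BOOTPROTO=static
  IPADDR=192.0.2.10
  NETMASK=255.255.255.0
  ONBOOT=yes

  # then let CES manage the floating addresses in the same subnet
  mmces address add --ces-ip 192.0.2.100,192.0.2.101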
Ace thanks jf.
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Jan-Frode
Myklebust
Sent: 30 August 2016 10:55
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] CES network aliases