Re: [gpfsug-discuss] *New* IBM Spectrum Protect Whitepaper "Petascale Data Protection" (Dominic Mueller-Wicke)

2016-08-30 Thread Dominic Mueller-Wicke01
Thanks for reading the paper. I agree that restoring a large number of files
is a challenge today. Restore is the focus area for future enhancements to the
integration between IBM Spectrum Scale and IBM Spectrum Protect. If something
becomes available that improves the restore capabilities, the paper will be
updated with that information.

Greetings, Dominic.




From:   gpfsug-discuss-requ...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Date:   31.08.2016 01:25
Subject: gpfsug-discuss Digest, Vol 55, Issue 55
Sent by: gpfsug-discuss-boun...@spectrumscale.org



Send gpfsug-discuss mailing list submissions to
 gpfsug-discuss@spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit
 http://gpfsug.org/mailman/listinfo/gpfsug-discuss
or, via email, send a message with subject or body 'help' to
 gpfsug-discuss-requ...@spectrumscale.org

You can reach the person managing the list at
 gpfsug-discuss-ow...@spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."
Today's Topics:

   1. Maximum value for data replication?
  (Simon Thompson (Research Computing - IT Services))
   2. greetings (Kevin D Johnson)
   3. GPFS 3.5.0 on RHEL 6.8 (Lukas Hejtmanek)
   4. Re: GPFS 3.5.0 on RHEL 6.8 (Kevin D Johnson)
   5. Re: GPFS 3.5.0 on RHEL 6.8 (mark.berg...@uphs.upenn.edu)
   6. Re: *New* IBM Spectrum Protect Whitepaper "Petascale Data
  Protection" (Lukas Hejtmanek)
   7. Re: *New* IBM Spectrum Protect Whitepaper "Petascale Data
  Protection" (Sven Oehme)

- Message from "Simon Thompson (Research Computing - IT Services)"
 on Tue, 30 Aug 2016 19:09:05 + -

  To: "gpfsug-discuss@spectrumscale.org"
  

 Subject: [gpfsug-discuss] Maximum value for data replication?  


Is there a maximum value for data replication in Spectrum Scale?

I have a number of NSD servers which have local storage, and I'd like each
node to have a full copy of all the data in the file system. Say this value
is 4: can I set replication to 4 for data and metadata and have each server
hold a full copy?

These are protocol nodes and multi-cluster mount another file system (yes, I
know, not supported), and the cesroot is in the remote file system. On
several occasions where GPFS has wibbled a bit, this has caused issues with
CES locks, so I was thinking of moving the cesroot to a local file system
which is replicated on the local SSDs in the protocol nodes. I.e. it's a
generally quiet file system as it only holds the CES cluster config.

I assume that if I stop protocols, rsync the data and then change to the new
CES root, I should be able to get this working?

Thanks

Simon
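
A rough sketch of the move Simon describes, using the standard CES commands.
The exact sequence (and whether CES must be fully quiesced while cesSharedRoot
is changed) is an assumption to verify against the documentation, and all
paths are placeholders:

    # quiesce protocol services on all CES nodes
    mmces service stop NFS -a
    mmces service stop SMB -a

    # copy the current CES shared root to the new, locally replicated filesystem
    rsync -a /gpfs/remotefs/ces/ /gpfs/localfs/ces/

    # point the cluster at the new shared root and bring services back
    mmchconfig cesSharedRoot=/gpfs/localfs/ces
    mmces service start NFS -a
    mmces service start SMB -a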

- Message from "Kevin D Johnson"  on Tue, 30 Aug
2016 19:43:39 + -
   
  To: gpfsug-discuss@spectrumscale.org 
   
 Subject: [gpfsug-discuss] greetings   
   

I'm in Lab Services at IBM - just joining and happy to help any way I can.

Kevin D. Johnson, MBA, MAFM
Spectrum Computing, Senior Managing Consultant

IBM Certified Deployment Professional - Spectrum Scale V4.1.1
IBM Certified Deployment Professional - Cloud Object Storage V3.8
720.349.6199 - kevin...@us.ibm.com


- Message from Lukas Hejtmanek on Tue, 30 Aug 2016 22:39:18 +0200 -

  To: gpfsug-discuss@spectrumscale.org
  Subject: [gpfsug-discuss] GPFS 3.5.0 on RHEL 6.8


Hello,

does it work for anyone? As of kernel 2.6.32-642, GPFS 3.5.0 (including the
latest patch 32) does start but does not mount any file system. The internal
mount command gets stuck.

--
Lukáš Hejtmánek


- Message from "Kevin D Johnson"  on Tue, 30 Aug
2016 20:51:39 + -
   
  To: gpfsug-discuss@spectrumscale.org 
   
 Subject: Re: [gpfsug-discuss] GPFS 3.5.0 on   
  RHEL 6.8 
   

RHEL 6.8 (kernel 2.6.32-642) requires GPFS 4.1.1.8 or 4.2.1. You can either go
back to RHEL 6.7 to stay on GPFS 3.5, or bump the OS up to 7.0/7.1.

See Table 13, here:
http://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html?view=kc#linuxq


Kevin D. Johnson, MBA, MAFM
Spectrum Computing, Senior Managing Consultant

Re: [gpfsug-discuss] *New* IBM Spectrum Protect Whitepaper "Petascale Data Protection"

2016-08-30 Thread Olaf Weiser

there are multiple dependencies here; the performance of the metadata scan is
related to several factors. As a rule of thumb, the total number of IOPS you
need to scan your metadata depends heavily on the metadata block size, the
inode size (assuming the default of 4K), and the total number of inodes ;-).
The time it takes to answer those IOs depends on your backend(s), on the
parallelism and the nodes' hardware resources, and finally on the network
connectivity (latency, bandwidth).

To give some direction: we even have clusters using regular (old, spinning)
drives that are able to scan more than 200 million files in under 15 minutes.

From: "Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP]"
To: gpfsug main discussion list
Date: 08/31/2016 06:01 AM
Subject: Re: [gpfsug-discuss] *New* IBM Spectrum Protect Whitepaper "Petascale Data Protection"
Sent by: gpfsug-discuss-boun...@spectrumscale.org

(The quoted messages from Aaron Knister, Sven Oehme and Lukas Hejtmanek appear
in full later in this digest.)
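
To get a feel for how fast a given metadata setup can drive such a scan, one
commonly used approach is a list-only policy run. A minimal sketch, assuming a
filesystem named gpfs0, a node class nsdNodes and a scratch directory, all of
which are placeholders rather than values from this thread:

    # a trivial policy that lists every file (scan-only, nothing is executed)
    printf "RULE EXTERNAL LIST 'allfiles' EXEC ''\nRULE 'listall' LIST 'allfiles'\n" > /tmp/listall.pol

    # drive the inode scan in parallel across the chosen nodes and time it
    time mmapplypolicy gpfs0 -P /tmp/listall.pol -I defer -N nsdNodes -g /gpfs/gpfs0/tmp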

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] *New* IBM Spectrum Protect Whitepaper "Petascale Data Protection"

2016-08-30 Thread Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP]
Just want to add on to one of the points Sven touched on regarding metadata HW. 
We have a modest SSD infrastructure for our metadata disks and we can scan 500M 
inodes in parallel in about 5 hours if my memory serves me right (and I believe 
we could go faster if we really wanted to). I think having solid metadata disks 
(no pun intended) will really help with scan times.


From: Sven Oehme
Sent: 8/30/16, 7:25 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] *New* IBM Spectrum Protect Whitepaper "Petascale 
Data Protection"
so lets start with some simple questions.

when you say mmbackup takes ages, what version of gpfs code are you running ?
how do you execute the mmbackup command ? exact parameters would be useful .
what HW are you using for the metadata disks ?
how much capacity (df -h) and how many inodes (df -i) do you have in the 
filesystem you try to backup ?

sven


On Tue, Aug 30, 2016 at 3:02 PM, Lukas Hejtmanek 
> wrote:
Hello,

On Mon, Aug 29, 2016 at 09:20:46AM +0200, Frank Kraemer wrote:
> Find the paper here:
>
> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Petascale%20Data%20Protection

thank you for the paper, I appreciate it.

However, I wonder whether it could be extended a little. As it has the title
Petascale Data Protection, I think that in Peta scale, you have to deal with
millions (well rather hundreds of millions) of files you store in and this is
something where TSM does not scale well.

Could you give some hints:

On the backup site:
mmbackup takes ages for:
a) scan (try to scan 500M files even in parallel)
b) backup - what if 10 % of files get changed - backup process can be blocked
several days as mmbackup cannot run in several instances on the same file
system, so you have to wait until one run of mmbackup finishes. How long could
it take at petascale?

On the restore site:
how can I restore e.g. 40 millions of file efficiently? dsmc restore '/path/*'
runs into serious troubles after say 20M files (maybe wrong internal
structures used), however, scanning 1000 more files takes several minutes
resulting the dsmc restore never reaches that 40M files.

using filelists the situation is even worse. I run dsmc restore -filelist
with a filelist consisting of 2.4M files. Running for *two* days without
restoring even a single file. dsmc is consuming 100 % CPU.

So any hints addressing these issues with really large number of files would
be even more appreciated.

--
Lukáš Hejtmánek
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] *New* IBM Spectrum Protect Whitepaper "Petascale Data Protection"

2016-08-30 Thread Sven Oehme
so let's start with some simple questions.

when you say mmbackup takes ages, what version of GPFS code are you running?
how do you execute the mmbackup command? exact parameters would be useful.
what HW are you using for the metadata disks?
how much capacity (df -h) and how many inodes (df -i) do you have in the
filesystem you try to back up?

sven
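
For reference, a minimal sketch of how the information Sven asks for could be
gathered; the filesystem name gpfs0 and its mount point are placeholders:

    # GPFS / Spectrum Scale code level on this node
    mmdiag --version

    # capacity and inode usage of the filesystem being backed up
    df -h /gpfs/gpfs0
    df -i /gpfs/gpfs0

    # which disks hold metadata, and in which storage pools
    mmlsdisk gpfs0 -L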


On Tue, Aug 30, 2016 at 3:02 PM, Lukas Hejtmanek 
wrote:

> Hello,
>
> On Mon, Aug 29, 2016 at 09:20:46AM +0200, Frank Kraemer wrote:
> > Find the paper here:
> >
> > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/
> Tivoli%20Storage%20Manager/page/Petascale%20Data%20Protection
>
> thank you for the paper, I appreciate it.
>
> However, I wonder whether it could be extended a little. As it has the
> title
> Petascale Data Protection, I think that in Peta scale, you have to deal
> with
> millions (well rather hundreds of millions) of files you store in and this
> is
> something where TSM does not scale well.
>
> Could you give some hints:
>
> On the backup site:
> mmbackup takes ages for:
> a) scan (try to scan 500M files even in parallel)
> b) backup - what if 10 % of files get changed - backup process can be
> blocked
> several days as mmbackup cannot run in several instances on the same file
> system, so you have to wait until one run of mmbackup finishes. How long
> could
> it take at petascale?
>
> On the restore site:
> how can I restore e.g. 40 millions of file efficiently? dsmc restore
> '/path/*'
> runs into serious troubles after say 20M files (maybe wrong internal
> structures used), however, scanning 1000 more files takes several minutes
> resulting the dsmc restore never reaches that 40M files.
>
> using filelists the situation is even worse. I run dsmc restore -filelist
> with a filelist consisting of 2.4M files. Running for *two* days without
> restoring even a single file. dsmc is consuming 100 % CPU.
>
> So any hints addressing these issues with really large number of files
> would
> be even more appreciated.
>
> --
> Lukáš Hejtmánek
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] *New* IBM Spectrum Protect Whitepaper "Petascale Data Protection"

2016-08-30 Thread Lukas Hejtmanek
Hello,

On Mon, Aug 29, 2016 at 09:20:46AM +0200, Frank Kraemer wrote:
> Find the paper here:
> 
> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Petascale%20Data%20Protection

Thank you for the paper, I appreciate it.

However, I wonder whether it could be extended a little. As it has the title
Petascale Data Protection, I think that at peta scale you have to deal with
millions (well, rather hundreds of millions) of files stored in the file
system, and this is something where TSM does not scale well.

Could you give some hints:

On the backup side:
mmbackup takes ages for:
a) the scan (try to scan 500M files, even in parallel)
b) the backup - what if 10 % of files get changed? The backup process can be
blocked for several days, as mmbackup cannot run in several instances on the
same file system, so you have to wait until one run of mmbackup finishes. How
long could it take at petascale?

On the restore side:
how can I restore e.g. 40 million files efficiently? dsmc restore '/path/*'
runs into serious trouble after, say, 20M files (maybe wrong internal
structures are used); scanning 1000 more files then takes several minutes,
with the result that the dsmc restore never reaches those 40M files.

Using filelists the situation is even worse. I ran dsmc restore -filelist
with a filelist consisting of 2.4M files. It ran for *two* days without
restoring even a single file, with dsmc consuming 100 % CPU.

So any hints addressing these issues with really large numbers of files would
be even more appreciated.
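
One workaround that is sometimes tried for the filelist case, sketched here
purely as an illustration (it is not something the paper or this thread
prescribes, and how much it helps depends on how many concurrent sessions the
TSM server allows), is to split the filelist and run several dsmc sessions in
parallel; the paths below are examples:

    # split one big filelist into 100k-entry chunks
    split -l 100000 /tmp/restore.filelist /tmp/restore.part.

    # run a few restore sessions concurrently, one per chunk
    for f in /tmp/restore.part.*; do
        dsmc restore -filelist="$f" &
    done
    wait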

-- 
Lukáš Hejtmánek
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS 3.5.0 on RHEL 6.8

2016-08-30 Thread mark . bergman
In the message dated: Tue, 30 Aug 2016 22:39:18 +0200,
The pithy ruminations from Lukas Hejtmanek on 
<[gpfsug-discuss] GPFS 3.5.0 on RHEL 6.8> were:
=> Hello,  

GPFS 3.5.0.[23..3-0] works for me under [CentOS|ScientificLinux] 6.8,
but only with kernel 2.6.32-573 and lower.

I've found kernel bugs in blk_cloned_rq_check_limits() in later kernel
revs that caused multipath errors, resulting in GPFS being unable to
find all NSDs and mount the filesystem.

I am not updating to a newer kernel until I'm certain this is resolved.

I opened a bug with CentOS:

https://bugs.centos.org/view.php?id=10997

and began an extended discussion with the (RH & SUSE) developers of that
chunk of kernel code. I don't know if an upstream bug has been opened
by RH, but see:

https://patchwork.kernel.org/patch/9140337/
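
For anyone who wants to hold the kernel back in the meantime, one simple
option on a yum-managed CentOS/RHEL 6.x box is an exclude rule; remove it once
a fixed kernel is available. This is a generic workaround, not part of the bug
discussion above:

    # prevent yum from pulling in newer kernel packages for now
    echo "exclude=kernel*" >> /etc/yum.conf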
=> 
=> does it work for anyone? As of kernel 2.6.32-642, GPFS 3.5.0 (including the
=> latest patch 32) does start but does not mount and file system. The internal
=> mount cmd gets stucked. 
=> 
=> -- 
=> Lukáš Hejtmánek  


-- 
Mark Bergman   voice: 215-746-4061  
 
mark.berg...@uphs.upenn.edu  fax: 215-614-0266
http://www.cbica.upenn.edu/
IT Technical Director, Center for Biomedical Image Computing and Analytics
Department of Radiology University of Pennsylvania
  PGP Key: http://www.cbica.upenn.edu/sbia/bergman 
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS 3.5.0 on RHEL 6.8

2016-08-30 Thread Kevin D Johnson
RHEL 6.8 (kernel 2.6.32-642) requires GPFS 4.1.1.8 or 4.2.1. You can either go back to RHEL 6.7 to stay on GPFS 3.5, or bump the OS up to 7.0/7.1.
See Table 13, here:
http://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html?view=kc#linuxq 
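
A quick way to confirm what is actually running before consulting the FAQ
table; the package names are the usual RHEL-family ones and may differ on
other distributions:

    # running kernel
    uname -r

    # installed GPFS packages and the daemon's reported code level
    rpm -qa | grep -i gpfs
    mmdiag --version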
Kevin D. Johnson, MBA, MAFM
Spectrum Computing, Senior Managing Consultant
IBM Certified Deployment Professional - Spectrum Scale V4.1.1
IBM Certified Deployment Professional - Cloud Object Storage V3.8
720.349.6199 - kevin...@us.ibm.com
 
 
 
- Original message -
From: Lukas Hejtmanek
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Cc:
Subject: [gpfsug-discuss] GPFS 3.5.0 on RHEL 6.8
Date: Tue, Aug 30, 2016 4:39 PM

Hello,

does it work for anyone? As of kernel 2.6.32-642, GPFS 3.5.0 (including the
latest patch 32) does start but does not mount any file system. The internal
mount command gets stuck.

--
Lukáš Hejtmánek
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] greetings

2016-08-30 Thread Kevin D Johnson
I'm in Lab Services at IBM - just joining and happy to help any way I can.
 
Kevin D. Johnson, MBA, MAFM
Spectrum Computing, Senior Managing Consultant
IBM Certified Deployment Professional - Spectrum Scale V4.1.1
IBM Certified Deployment Professional - Cloud Object Storage V3.8
720.349.6199 - kevin...@us.ibm.com
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Data Replication

2016-08-30 Thread Brian Marshall
Thanks.   This confirms the numbers that I am seeing.

Brian

On Tue, Aug 30, 2016 at 2:50 PM, Laurence Horrocks-Barlow <
laure...@qsplace.co.uk> wrote:

> It's the client that does all the synchronous replication; this way the
> cluster is able to scale, as the clients do the leg work (so to speak).
>
> The somewhat "exception" is if a GPFS NSD server (or a client with direct
> NSD access) uses a server-based protocol such as SMB; in this case the SMB
> server will do the replication, as the SMB client doesn't know about GPFS or
> its replication; essentially the SMB server is the GPFS client.
>
> -- Lauz
>
> On 30 August 2016 17:03:38 CEST, Bryan Banister 
> wrote:
>
>> The NSD Client handles the replication and will, as you stated, write one
>> copy to one NSD (using the primary server for this NSD) and one to a
>> different NSD in a different GPFS failure group (using quite likely, but
>> not necessarily, a different NSD server that is the primary server for this
>> alternate NSD).
>>
>> Cheers,
>>
>> -Bryan
>>
>>
>>
>> *From:* gpfsug-discuss-boun...@spectrumscale.org [mailto:gpfsug-discuss-
>> boun...@spectrumscale.org] *On Behalf Of *Brian Marshall
>> *Sent:* Tuesday, August 30, 2016 9:59 AM
>> *To:* gpfsug main discussion list
>> *Subject:* [gpfsug-discuss] Data Replication
>>
>>
>>
>> All,
>>
>>
>>
>> If I setup a filesystem to have data replication of 2 (2 copies of data),
>> does the data get replicated at the NSD Server or at the client?  i.e. Does
>> the client send 2 copies over the network or does the NSD Server get a
>> single copy and then replicate on storage NSDs?
>>
>>
>>
>> I couldn't find a place in the docs that talked about this specific point.
>>
>>
>>
>> Thank you,
>>
>> Brian Marshall
>>
>>
>> --
>>
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>>
>>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] gpfs native raid

2016-08-30 Thread Aaron Knister
Thanks Christopher. I've tried GPFS on zvols a couple of times and the
write throughput I get is terrible because of the required sync=always
parameter. Perhaps a couple of SSDs could help get the number up, though.
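
For context, a rough sketch of the kind of zvol-backed NSD setup being
discussed; the pool layout, device names and sizes are invented for
illustration, and the LANL presentation linked below is the actual reference:

    # build a pool and a zvol to be used as a GPFS NSD
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg
    zfs create -V 10T -o volblocksize=128k tank/nsd01

    # the setup requires synchronous semantics on the zvol
    zfs set sync=always tank/nsd01

    # a fast SSD/NVMe log device can absorb much of the resulting sync-write penalty
    zpool add tank log nvme0n1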


-Aaron

On 8/30/16 12:47 PM, Christopher Maestas wrote:

Interestingly enough, Spectrum Scale can run on zvols. Check out:

http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf

-cdm


On Aug 30, 2016, 9:17:05 AM, aaron.s.knis...@nasa.gov wrote:

From: aaron.s.knis...@nasa.gov
To: gpfsug-discuss@spectrumscale.org
Cc:
Date: Aug 30, 2016 9:17:05 AM
Subject: [gpfsug-discuss] gpfs native raid

Does anyone know if/when we might see gpfs native raid opened up for the
masses on non-IBM hardware? It's hard to answer the question of "why
can't GPFS do this? Lustre can" in regards to Lustre's integration with
ZFS and support for RAID on commodity hardware.
-Aaron
--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776 
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] gpfs native raid

2016-08-30 Thread Christopher Maestas
Interestingly enough, Spectrum Scale can run on zvols. Check out:

http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf

-cdm

On Aug 30, 2016, 9:17:05 AM, aaron.s.knis...@nasa.gov wrote:
  
  From: aaron.s.knis...@nasa.gov
  To: gpfsug-discuss@spectrumscale.org
  Cc: 
  Date: Aug 30, 2016 9:17:05 AM
  Subject: [gpfsug-discuss] gpfs native raid
  
  
 Does anyone know if/when we might see gpfs native raid opened up for the 
   masses on non-IBM hardware? It's hard to answer the question of "why 
   can't GPFS do this? Lustre can" in regards to Lustre's integration with 
   ZFS and support for RAID on commodity hardware.
   -Aaron
   -- 
   Aaron Knister
   NASA Center for Climate Simulation (Code 606.2)
   Goddard Space Flight Center
   (301) 286-2776
   ___
   gpfsug-discuss mailing list
   gpfsug-discuss at spectrumscale.org
   http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] gpfs native raid

2016-08-30 Thread Aaron Knister
Does anyone know if/when we might see gpfs native raid opened up for the 
masses on non-IBM hardware? It's hard to answer the question of "why 
can't GPFS do this? Lustre can" in regards to Lustre's integration with 
ZFS and support for RAID on commodity hardware.


-Aaron

--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Data Replication

2016-08-30 Thread Bryan Banister
The NSD Client handles the replication and will, as you stated, write one copy 
to one NSD (using the primary server for this NSD) and one to a different NSD 
in a different GPFS failure group (using quite likely, but not necessarily, a 
different NSD server that is the primary server for this alternate NSD).
Cheers,
-Bryan
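
To see those settings from the client side, a small sketch; the filesystem
name fs0 and the file path are placeholders:

    # filesystem defaults and maxima for data/metadata replicas
    mmlsfs fs0 -r -R -m -M

    # replication actually in effect for a particular file
    mmlsattr -L /gpfs/fs0/some/file

    # raise the default data replication and re-replicate existing files
    mmchfs fs0 -r 2
    mmrestripefs fs0 -R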

From: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Brian Marshall
Sent: Tuesday, August 30, 2016 9:59 AM
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Data Replication

All,

If I setup a filesystem to have data replication of 2 (2 copies of data), does 
the data get replicated at the NSD Server or at the client?  i.e. Does the 
client send 2 copies over the network or does the NSD Server get a single copy 
and then replicate on storage NSDs?

I couldn't find a place in the docs that talked about this specific point.

Thank you,
Brian Marshall



Note: This email is for the confidential use of the named addressee(s) only and 
may contain proprietary, confidential or privileged information. If you are not 
the intended recipient, you are hereby notified that any review, dissemination 
or copying of this email is strictly prohibited, and to please notify the 
sender immediately and destroy this email and any attachments. Email 
transmission cannot be guaranteed to be secure or error-free. The Company, 
therefore, does not make any guarantees as to the completeness or accuracy of 
this email or any attachments. This email is for informational purposes only 
and does not constitute a recommendation, offer, request or solicitation of any 
kind to buy, sell, subscribe, redeem or perform any type of transaction of a 
financial product.
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Data Replication

2016-08-30 Thread Brian Marshall
All,

If I setup a filesystem to have data replication of 2 (2 copies of data),
does the data get replicated at the NSD Server or at the client?  i.e. Does
the client send 2 copies over the network or does the NSD Server get a
single copy and then replicate on storage NSDs?

I couldn't find a place in the docs that talked about this specific point.

Thank you,
Brian Marshall
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] CES network aliases

2016-08-30 Thread Sobey, Richard A
Ace thanks jf.

From: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Jan-Frode 
Myklebust
Sent: 30 August 2016 10:55
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] CES network aliases

You only need a static address for your ifcfg-ethX on all nodes, and can then 
have CES manage multiple floating addresses in that subnet.

Also, it doesn't matter much what your interfaces are named (ethX, vlanX, 
bondX, ethX.5), GPFS will just find the interface that covers the floating 
address in its subnet, and add the alias there.


-jf
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] CES network aliases

2016-08-30 Thread Jan-Frode Myklebust
You only need a static address for your ifcfg-ethX on all nodes, and can
then have CES manage multiple floating addresses in that subnet.

Also, it doesn't matter much what your interfaces are named (ethX, vlanX,
bondX, ethX.5), GPFS will just find the interface that covers the floating
address in its subnet, and add the alias there.
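
As a small illustration of that behaviour; the addresses and the interface
name are placeholders:

    # add a floating CES address to the pool and see where it was assigned
    mmces address add --ces-ip 10.1.1.100
    mmces address list

    # on the node currently hosting it, the address appears as an alias
    ip addr show eth1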


-jf
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] CES network aliases

2016-08-30 Thread Sobey, Richard A
Hi all,

It's Tuesday morning and that means question time :)

So from 
http://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adv.doc/bl1adv_cesnetworkconfig.htm,
 I've extracted the following:

How to use an alias
To use an alias address for CES, you need to provide a static IP address that 
is not already defined as an alias in the /etc/sysconfig/network-scripts 
directory.
Before you enable the node as a CES node, configure the network adapters for 
each subnet that are represented in the CES address pool:

  1.  Define a static IP address for the device in
      /etc/sysconfig/network-scripts/ifcfg-eth0:

      DEVICE=eth1
      BOOTPROTO=none
      IPADDR=10.1.1.10
      NETMASK=255.255.255.0
      ONBOOT=yes
      GATEWAY=10.1.1.1
      TYPE=Ethernet

  2.  Ensure that there are no aliases defined in the network-scripts
      directory for this interface:

      # ls -l /etc/sysconfig/network-scripts/ifcfg-eth1:*
      ls: /etc/sysconfig/network-scripts/ifcfg-eth1:*: No such file or directory
After the node is enabled as a CES node, no further action is required. CES 
addresses are added as aliases to the already configured adapters.
Now, does this mean for every floating (CES) IP address I need a separate 
ifcfg-ethX on each node? At the moment I simply have an ifcfg-X file 
representing each physical network adapter, and then the CES IPs defined. I can 
see IP addresses being added during failover to the primary interface, but now 
I've read I potentially need to create a separate file. What's the right way to 
move forward? If I need separate files, I presume the listed IP is a CES IP 
(not system) and does it also matter what X is in ifcfg-ethX?

Many thanks
Richard
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss