[gpfsug-discuss] Question on FPO node - NSD recovery

2016-02-01 Thread Oesterlin, Robert
When a node that’s part of an FPO file system (local disks) is rebooted, the 
NSDs come up as “down” until I manually start them. GPFS starts on the node, 
but the NSDs stay down.

Is this the expected behavior or is there a config setting I missed somewhere?
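For reference, the usual manual fix and the FPO auto-recovery knob look roughly like the commands below. This is a hedged sketch, not a definitive answer: the file system name 'gpfs1' is a placeholder, and the commands only run on an actual Spectrum Scale cluster, so verify against the documentation for your level.

```shell
# Hedged sketch -- Spectrum Scale admin commands; 'gpfs1' is a hypothetical
# file system name, and this is not runnable outside a GPFS cluster.

# Bring the downed NSDs back up manually:
mmchdisk gpfs1 start -a

# FPO auto-recovery after node/disk failures is governed by this cluster
# setting (check the FPO documentation for your release before enabling):
mmchconfig restripeOnDiskFailure=yes -N all
```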

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] EXTERNAL: Next meet the devs - 24th Feb 2016

2016-02-01 Thread Paul.Tomlinson
Hi Simon,
   We would like to send Mark Roberts (HPC) from AWE if any 
places are available. If there are any places, I'm sure he will be willing to provide a 
list of topics that interest us.

Best Regards
Paul Tomlinson
High Performance Computing
Direct: 0118 985 8060 or 0118 982 4147
Mobile 07920783365 VPN: 88864
AWE, Aldermaston, Reading, RG7 4PR

From: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of "Spectrum scale 
UG Chair (Simon Thompson)"
Sent: 19 January 2016 17:14
To: gpfsug-discuss@spectrumscale.org
Subject: EXTERNAL: [gpfsug-discuss] Next meet the devs - 24th Feb 2016

Dear All,

We are planning the next 'Meet the Devs' event for Wednesday 24th February 2016,
11am-3:30pm. The event will be held in central Oxford. The agenda promises to 
be hands-on and will give you the opportunity to speak face to face with the 
developers of Spectrum Scale.

Guideline agenda:
* TBC - please provide input on what you'd like to see!

Lunch and refreshments will be provided.

Please can you let me know by email if you are interested in attending by 
Wednesday 17th February.

Thanks and we hope to see you there. Thanks to Andy at OERC for offering to 
host.

Simon




The information in this email and in any attachment(s) is
commercial in confidence. If you are not the named addressee(s)
or
if you receive this email in error then any distribution, copying or
use of this communication or the information in it is strictly
prohibited. Please notify us immediately by email at
admin.internet(at)awe.co.uk, and then delete this message from
your computer. While attachments are virus checked, AWE plc
does not accept any liability in respect of any virus which is not
detected.

AWE Plc
Registered in England and Wales
Registration No 02763902
AWE, Aldermaston, Reading, RG7 4PR


[gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0

2016-02-01 Thread Frank Kraemer


IBM Elastic Storage Server (ESS) Advanced Edition ver 4.0 solution is
composed of various components tested together for compatibility and
correctness. Key components are RHEL, IBM Spectrum Scale RAID V4.2.0-1, and
Power System Firmware.

Update Name: ESS_ADV_BASEIMAGE-4.0.0-power-Linux
Fix ID: ESS_ADV_BASEIMAGE-4.0.0-power-Linux
Publication Date: 29 January 2016

Summary of changes in ESS ver 4.0

  a) ESS core
 - IBM Spectrum Scale RAID V4.2.0-1
 - Updated GUI

  b) Support of Red Hat Enterprise Linux 7.1
 - No changes from 3.0.x or 3.5.x

  c) Support of MLNX_OFED_LINUX-3.1-1.0.6.1
 - Updated from 3.x.y

  d) Install Toolkit
 - Updated Install Toolkit

  e) Updated firmware rpm
 - IP RAID Adapter FW
 - Host Adapter FW
 - Enclosure and drive FW

Download: (612 MB)
http://www.ibm.com/support/fixcentral/swg/quickorder?product=ibm/StorageSoftware/IBM+Spectrum+Scale+RAID=fixid=ESS_ADV_BASEIMAGE-4.0.0-power-Linux

README:
http://www-01.ibm.com/support/docview.wss?rs=0=isg42500

Deployment and Administration Guides are available in IBM Knowledge Center.
http://www.ibm.com/support/knowledgecenter/SSYSP8/sts_welcome.html

- Elastic Storage Server: Quick Deployment Guide
- Deploying the Elastic Storage Server
- IBM Spectrum Scale RAID: Administration

Frank Kraemer
IBM Consulting IT Specialist  / Client Technical Architect
Hechtsheimer Str. 2, 55131 Mainz
mailto:kraem...@de.ibm.com
voice: +49171-3043699
IBM Germany


Re: [gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0

2016-02-01 Thread Kenneth Liu

And ISKLM to manage the encryption keys.


Kenneth Liu

Software Defined Infrastructure --
Spectrum Storage, Cleversafe & Platform Computing Sales

Address: 4000 Executive Parkway
         San Ramon, CA 94583
Mobile #: (510) 584-7657
Email: l...@us.ibm.com




From:   "Frank Kraemer" 
To: gpfsug-disc...@gpfsug.org
Date:   02/01/2016 09:30 AM
Subject:[gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage
Server (ESS) Advanced Edition v4.0
Sent by:gpfsug-discuss-boun...@spectrumscale.org



IBM Elastic Storage Server (ESS) Advanced Edition ver 4.0 solution is
composed of various components tested together for compatibility and
correctness. Key components are RHEL, IBM Spectrum Scale RAID V4.2.0-1, and
Power System Firmware.

Update Name: ESS_ADV_BASEIMAGE-4.0.0-power-Linux
Fix ID: ESS_ADV_BASEIMAGE-4.0.0-power-Linux
Publication Date: 29 January 2016

Summary of changes in ESS ver 4.0

a) ESS core
- IBM Spectrum Scale RAID V4.2.0-1
- Updated GUI

b) Support of Red Hat Enterprise Linux 7.1
- No changes from 3.0.x or 3.5.x

c) Support of MLNX_OFED_LINUX-3.1-1.0.6.1
- Updated from 3.x.y

d) Install Toolkit
- Updated Install Toolkit

e) Updated firmware rpm
- IP RAID Adapter FW
- Host Adapter FW
- Enclosure and drive FW

Download: (612 MB)
http://www.ibm.com/support/fixcentral/swg/quickorder?product=ibm/StorageSoftware/IBM+Spectrum+Scale+RAID=fixid=ESS_ADV_BASEIMAGE-4.0.0-power-Linux

README:
http://www-01.ibm.com/support/docview.wss?rs=0=isg42500

Deployment and Administration Guides are available in IBM Knowledge Center.
http://www.ibm.com/support/knowledgecenter/SSYSP8/sts_welcome.html

- Elastic Storage Server: Quick Deployment Guide
- Deploying the Elastic Storage Server
- IBM Spectrum Scale RAID: Administration

Frank Kraemer
IBM Consulting IT Specialist / Client Technical Architect
Hechtsheimer Str. 2, 55131 Mainz
mailto:kraem...@de.ibm.com
voice: +49171-3043699
IBM Germany




Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1)

2016-02-01 Thread Wahl, Edward
Along the same vein, I've patched rsync to maintain source atimes in Linux for 
large transitions such as this. Along with the standard "patches" mod for 
destination atimes it is quite useful. It works in 3.0.8 and 3.0.9; I've not 
yet ported it to 3.1.x.
https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff

Ed Wahl
OSC


From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Orlando Richards 
[orlando.richa...@ed.ac.uk]
Sent: Monday, February 01, 2016 4:25 AM
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance 
(GPFS4.1)

For what it's worth - there's a patch for rsync which IBM provided a
while back that will copy NFSv4 ACLs (maybe other stuff?). I put it up
on the gpfsug github here:

   https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync



On 29/01/16 22:36, Sven Oehme wrote:
> Doug,
>
> This won't really work if you make use of ACLs or use special GPFS
> extended attributes or set quotas, filesets, etc.,
> so unfortunately the answer is that you need to use a combination of things.
> There is work going on to make some of this simpler (e.g. for ACLs),
> but it's a longer road to get there. So until then you need to think
> about multiple aspects.
>
> 1. you need to get the data across and there are various ways to do this.
>
> a) AFM is the simplest of all: because it understands the GPFS internals,
> it takes care of ACLs, extended attributes, and the like, and it also
> operates in parallel and can prefetch data, so it is an efficient way
> to do this. But, as already pointed out, it doesn't transfer quota or
> fileset information.
>
> b) you can use rsync or any other pipe-based copy program. The
> downside is that they are typically single threaded and take a file-by-file
> approach, which is very metadata intensive on both the source and target
> side and causes a lot of IOs on both sides.
>
> c) you can use the policy engine to create a list of files to transfer,
> which at least addresses the single-threaded scan part; then partition the
> data and run multiple instances of cp or rsync in parallel. This still
> doesn't fix the ACL/EA issues, but the data gets there faster.
>
> 2. you need to get the ACL/EA information over too. There are several
> command line options to dump the data and restore it; they
> suffer the same problems as the data transfers, which is why using AFM is
> the best way of doing this if you rely on ACL/EA information.
>
> 3. transfer quota / fileset info. There are several ways to do this,
> but all require some level of scripting.
>
> if you have TSM/HSM you could also transfer the data using SOBAR; it's
> described in the advanced admin book.
>
> sven
>
>
> On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug
>  > wrote:
>
> I have found that a tar pipe is much faster than rsync for this sort
> of thing. The fastest of these is ‘star’ (schily tar). On average it
> is about 2x-5x faster than rsync for doing this. After one pass with
> this, you can use rsync for a subsequent or last-pass sync.
>
>
>
> e.g.
>
> $ cd /export/gpfs1/foo
>
> $ star -c H=xtar | (cd /export/gpfs2/foo; star -xp)
>
>
>
> This also will not preserve filesets and quotas, though. You should
> be able to automate that with a little bit of awk, perl, or whatnot.
>
>
>
>
>
> *From:*gpfsug-discuss-boun...@spectrumscale.org
> 
> [mailto:gpfsug-discuss-boun...@spectrumscale.org
> ] *On Behalf Of
> *Damir Krstic
> *Sent:* Friday, January 29, 2016 2:32 PM
> *To:* gpfsug main discussion list
> *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS
> appliance (GPFS4.1)
>
>
>
> We have recently purchased ESS appliance from IBM (GL6) with 1.5PT
> of storage. We are in planning stages of implementation. We would
> like to migrate data from our existing GPFS installation (around
> 300TB) to the new solution.
>
>
>
> We were planning on adding ESS to our existing GPFS cluster, adding
> its disks, and then deleting our old disks and having the data
> migrated this way. However, the existing block size on our projects
> file system is 1M, and in order to extract as much performance out of
> ESS we would like its file system created with a larger block size.
> Besides rsync, do you have any suggestions for how to do this without
> downtime and in the fastest way possible?
>
>
>
> I have looked at AFM, but it does not seem to migrate quotas and
> filesets, so it may not be an optimal solution.
>
>
> 
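Sven's point 1c above (policy-engine file list plus parallel copy instances) can be sketched with standard tools. The paths, the file list, and the 4-way parallelism below are illustrative assumptions, and `echo` stands in for the real rsync invocation; in practice the list would come from an mmapplypolicy LIST rule.

```shell
# Build a small stand-in file list (in practice, output of a policy scan).
printf '%s\n' dirA/file1 dirA/file2 dirB/file3 dirB/file4 > /tmp/filelist

# Fan the list out to up to 4 concurrent transfers. Replace 'echo rsync'
# with the real 'rsync -aHAX' call; /src and /dst are hypothetical mounts.
# Prints one rsync command line per listed file (sorted for readability).
xargs -P 4 -I{} echo rsync -aHAX /src/{} /dst/{} < /tmp/filelist | sort
```

The `-P 4` flag is what buys the parallelism; tune it to however many concurrent streams the source and target metadata servers can tolerate.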

Re: [gpfsug-discuss] what's on a 'dataOnly' disk?

2016-02-01 Thread Marc A Kaplan
Just to add...

Spectrum Scale is no different from most other file systems in this 
respect. It assumes the disk system and network systems will detect I/O 
errors, including data corruption.
And they usually will ... but there are, as you've discovered, scenarios 
where they cannot.




Re: [gpfsug-discuss] NYC GPFS job opportunity

2016-02-01 Thread Zachary Giles
errr.. Yeah, that was to the mailing list. Sorry everyone. Please
erase from your brains if possible.  I promise I've used email before.
Lesson: don't do emails before coffee
I believe a valid punishment is falling on your own sword. I'll be back...

-Zach