The licensing model was my last point — if the OP uses FPO just to create data 
resiliency, they increase their cost (or curtail their access).

I was really asking if there was a real, technical positive for using FPO in 
this example, as I could only come up with equivalences and negatives.

-- 
Stephen



> On Nov 30, 2016, at 10:55 PM, Ken Hill <k...@us.ibm.com> wrote:
> 
> Hello Stephen,
> 
> There are three licensing models for Spectrum Scale | GPFS:
> 
> Server
> FPO
> Client
> 
> I think the thing you might be missing is the associated cost per function. 
> 
> Regards,
> 
> Ken Hill
> Technical Sales Specialist | Software Defined Solution Sales
> IBM Systems
> Phone: 1-540-207-7270
> E-mail: k...@us.ibm.com
> 
> 2300 Dulles Station Blvd
> Herndon, VA 20171-6133
> United States
> 
> 
> From:        Stephen Ulmer <ul...@ulmer.org>
> To:        gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Date:        11/30/2016 09:46 PM
> Subject:        Re: [gpfsug-discuss] Strategies - servers with local SAS disks
> Sent by:        gpfsug-discuss-boun...@spectrumscale.org
> 
> I don’t understand what FPO provides here that mirroring doesn’t:
> You can still use failure domains — one for each node.
> Both still have redundancy for the data; you can lose a disk or a node.
> The data has to be re-striped in the event of a disk failure — no matter what.
> (A sketch of the mirrored setup I mean follows.)
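> 
> For example, here's roughly the plain-replication setup I have in mind (a
> sketch only -- node names, devices, and the file system name are made up,
> and stanza details may vary by release):
> 
>    # One failure group per server; two-way replication, no FPO
>    %nsd: device=/dev/sdb nsd=node1_sdb servers=node1 usage=dataAndMetadata failureGroup=1
>    %nsd: device=/dev/sdb nsd=node2_sdb servers=node2 usage=dataAndMetadata failureGroup=2
> 
>    mmcrnsd -F disks.stanza
>    # -m/-M and -r/-R set to 2: two copies of metadata and data,
>    # so you can lose a disk or a whole node
>    mmcrfs fs1 -F disks.stanza -m 2 -M 2 -r 2 -R 2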
> 
> Also, the FPO license doesn’t allow regular clients to access the data -- 
> only server and FPO nodes.
> 
> What am I missing?
> 
> Liberty,
> 
> -- 
> Stephen
> 
> 
> 
> On Nov 30, 2016, at 3:51 PM, Andrew Beattie <abeat...@au1.ibm.com> wrote:
> 
> Bob,
>  
> If you're not going to use integrated RAID controllers in the servers, then FPO 
> would seem to be the most resilient scenario.
> Yes, it has its own overheads, but with that many drives to manage, a JBOD 
> architecture with manual restriping doesn't sound like fun.
>  
> If you are going down the path of integrated RAID controllers, then some form 
> of distributed RAID is probably the best scenario -- RAID 6, obviously.
>  
> How many nodes are you planning on building?  The more nodes, the more value 
> FPO is likely to bring, as you can be more specific about how data is written 
> to the nodes (see the sketch below).
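> 
> By "more specific" I mean the FPO placement controls on the storage pool --
> write affinity and block grouping. Roughly like this (a from-memory sketch,
> not exact syntax; the pool name and values are purely illustrative):
> 
>    # Pool stanza with FPO-style placement enabled
>    %pool: pool=datapool blockSize=2M layoutMap=cluster
>      allowWriteAffinity=yes    # keep one replica on the node doing the writing
>      writeAffinityDepth=1
>      blockGroupFactor=128      # lay out blocks in large contiguous chunks per disk
> 
> With placement like that, an analytics-style workload can read mostly local
> data instead of pulling every block over the network.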
>  
> Andrew Beattie
> Software Defined Storage  - IT Specialist
> Phone: 614-2133-7927
> E-mail: abeat...@au1.ibm.com
>  
>  
> ----- Original message -----
> From: "Oesterlin, Robert" <robert.oester...@nuance.com 
> <mailto:robert.oester...@nuance.com>>
> Sent by: gpfsug-discuss-boun...@spectrumscale.org 
> <mailto:gpfsug-discuss-boun...@spectrumscale.org>
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org 
> <mailto:gpfsug-discuss@spectrumscale.org>>
> Cc:
> Subject: [gpfsug-discuss] Strategies - servers with local SAS disks
> Date: Thu, Dec 1, 2016 12:34 AM
> Looking for feedback/strategies in setting up several GPFS servers with local 
> SAS disks. They would all be part of the same file system. The systems are all 
> similar in configuration: 70 x 4 TB drives each.
> 
>  
> Options I’m considering:
> 
>  
> - Create RAID arrays of the disks on each server (worried about the RAID 
> rebuild time when a drive fails with 4, 6, 8TB drives)
> 
> - No RAID with 2 replicas, single drive per NSD. When a drive fails, recreate 
> the NSD – but then I need to fix up the data replication via restripe (rough 
> recovery flow sketched after this list)
> 
> - FPO – with multiple failure groups - letting the system manage replica 
> placement and having GPFS do the restripe automatically on disk failure
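> 
> For option 2, the recovery flow on a drive failure would look roughly like 
> this (a sketch; the file system and NSD names are placeholders):
> 
>    # Remove the dead NSD, then recreate it on the replacement drive
>    mmdeldisk fs1 node3_sdq
>    mmcrnsd -F newdisk.stanza
>    mmadddisk fs1 -F newdisk.stanza
>    # Re-protect anything that lost a replica -- this is the slow part
>    mmrestripefs fs1 -r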
> 
>  
> Comments or other ideas welcome.
> 
>  
> Bob Oesterlin
> Sr Principal Storage Engineer, Nuance
> 507-269-0413
> 
>  
>  

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
