I don't understand what FPO provides here that mirroring (replication) doesn't:
- You can still use failure domains -- one for each node (sketch below).
- Both still give you redundancy for the data; you can lose a disk or a node.
- The data has to be re-striped after a disk failure either way.
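
For concreteness, the layout I mean is a plain replicated file system with one
failure group per node -- a minimal NSD stanza sketch (device paths, NSD names,
and server hostnames are hypothetical):

   %nsd: device=/dev/sdb nsd=n1_sdb servers=node1 usage=dataAndMetadata failureGroup=1
   %nsd: device=/dev/sdb nsd=n2_sdb servers=node2 usage=dataAndMetadata failureGroup=2

Create the file system with two replicas (mmcrfs ... -m 2 -r 2) and GPFS keeps
each replica in a different failure group, so you can lose a disk or a node;
restoring full replication afterward is "mmrestripefs <fsname> -r" either way.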

Also, the FPO license doesn't allow regular clients to access the data -- only
server and FPO nodes.

What am I missing?

Liberty,

-- 
Stephen



> On Nov 30, 2016, at 3:51 PM, Andrew Beattie <[email protected]> wrote:
> 
> Bob,
>  
> If you're not going to use integrated RAID controllers in the servers, then FPO 
> would seem to be the most resilient scenario.
> Yes, it has its own overheads, but with that many drives to manage, a JBOD 
> architecture and manual restriping doesn't sound like fun.
>  
> If you are going down the path of integrated RAID controllers, then some form 
> of distributed RAID is probably the best scenario -- RAID 6, obviously.
>  
> How many nodes are you planning on building? The more nodes, the more value 
> FPO is likely to bring, since you can be more specific about how the data is 
> written across the nodes.
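> To be concrete about that control: you would mark the data pool as an FPO pool 
> in the stanza file, roughly like this (pool name and values are illustrative, 
> not a recommendation):
> 
>    %pool: pool=fpodata blockSize=2M layoutMap=cluster allowWriteAffinity=yes writeAffinityDepth=1 blockGroupFactor=128
> 
> allowWriteAffinity=yes is what makes the pool FPO; writeAffinityDepth=1 puts 
> the first replica on the node that writes the data, and blockGroupFactor lays 
> blocks out in larger contiguous chunks per disk.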
>  
> Andrew Beattie
> Software Defined Storage  - IT Specialist
> Phone: 614-2133-7927
> E-mail: [email protected]
> 
> ----- Original message -----
> From: "Oesterlin, Robert" <[email protected]>
> Sent by: [email protected]
> To: gpfsug main discussion list <[email protected]>
> Cc:
> Subject: [gpfsug-discuss] Strategies - servers with local SAS disks
> Date: Thu, Dec 1, 2016 12:34 AM
>  
> Looking for feedback/strategies on setting up several GPFS servers with local 
> SAS disks. They would all be part of the same file system. The systems are all 
> similar in configuration: 70 x 4 TB drives each.
> 
> Options I’m considering:
> 
> - Create RAID arrays of the disks on each server (worried about the RAID 
> rebuild time when a drive fails with 4, 6, or 8 TB drives)
> 
> - No RAID, two replicas, a single drive per NSD. When a drive fails, recreate 
> the NSD -- but then I need to fix up the data replication via restripe (sketch 
> below the list)
> 
> - FPO with multiple failure groups, letting the system manage replica 
> placement and then having GPFS do the restripe automatically on disk failure
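> 
> For the second option, the fix-up step would be roughly the following (file 
> system and NSD names are made up):
> 
>    mmdeldisk fs1 n1_sdb         # remove the failed disk; data survives on the second replica
>    mmcrnsd -F new.stanza        # define a replacement NSD on the new drive
>    mmadddisk fs1 -F new.stanza  # add it back into the file system
>    mmrestripefs fs1 -r          # restore full replication for ill-replicated files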
> 
> Comments or other ideas welcome.
> 
> Bob Oesterlin
> Sr Principal Storage Engineer, Nuance
> 507-269-0413
> 

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
