Aaron, Thanks for jumping onboard. It's nice to see others confirming this.
Sometimes I feel alone on this topic.
It should also be possible to use ZFS with ZVOLs presented as block
devices as a backing store for NSDs. I'm not claiming it's stable, nor a
good idea, nor performant, but it should be possible.
Thanks Zach, I was about to echo similar sentiments and you saved me a
ton of typing :)
Bob, I know this doesn't help you today since I'm pretty sure it's not
yet available, but if one scours the interwebs they can find mention of
something called Mestor.
There's very, very limited information about it.
The licensing model was my last point — if the OP uses FPO just to create data
resiliency they increase their cost (or curtail their access).
I was really asking if there was a real, technical positive for using FPO in
this example, as I could only come up with equivalences and negatives.
--
S
Just remember that replication protects data availability, not
integrity. GPFS still requires the underlying block device to return good
data.
If you're using it on plain disks (SAS or SSD) and a drive returns
corrupt data, GPFS won't know any better and will just deliver it to the client.
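If you do run replicated on plain disks, one partial mitigation (a sketch,
with an invented file system name): mmrestripefs can compare the replicas of
each block and repair mismatches. It can't tell which copy was the good one,
so it surfaces divergence rather than guaranteeing integrity:

    # Compare data/metadata replicas and fix inconsistencies found
    mmrestripefs gpfs0 -c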
F
Hello Stephen,
There are three licensing models for Spectrum Scale | GPFS:
Server
FPO
Client
I think the thing you might be missing is the associated cost per
function.
Regards,
Ken Hill
Technical Sales Specialist | Software Defined Solution Sales
IBM Systems
Phone: 1-540-207-7270
I don’t understand what FPO provides here that mirroring doesn’t (sketch of the mirroring setup below):
You can still use failure domains — one for each node.
Both still have redundancy for the data; you can lose a disk or a node.
The data has to be re-striped in the event of a disk failure — no matter what.
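For concreteness, the mirroring setup I mean is just one failure group per
node plus default replication of two, so GPFS never places both copies on the
same node (all names invented):

    %nsd: device=/dev/sdb nsd=node1_d0 servers=node1 usage=dataAndMetadata failureGroup=1
    %nsd: device=/dev/sdb nsd=node2_d0 servers=node2 usage=dataAndMetadata failureGroup=2

    # -m/-r set default metadata/data replicas, -M/-R the maximums
    mmcrfs gpfs0 -F disks.stanza -m 2 -M 2 -r 2 -R 2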
Also, the FPO license does
Bob,
If you're not going to use integrated RAID controllers in the servers, then FPO would seem to be the most resilient scenario.
Yes, it has its own overheads, but with that many drives to manage, a JBOD architecture and manual restriping doesn't sound like fun.
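To spell out what that manual restriping looks like after a dead disk, the
dance is roughly this (file system and NSD names invented):

    mmchdisk gpfs0 suspend -d bad_nsd   # stop allocating to the failed NSD
    mmrestripefs gpfs0 -r               # migrate data off and restore replication
    mmdeldisk gpfs0 bad_nsd             # then remove it from the file system

Multiply that by 70 drives per server and the appeal of a RAID controller is obvious.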
If you are going down the path of
I once set up a small system with just a few SSDs in two NSD servers,
providing a scratch file system in a computing cluster.
No RAID, two replicas.
It works, as long as the admins do not do silly things (like rebooting servers
in sequence without checking that the disks are up in between).
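That check is cheap enough to script into any rolling-reboot procedure
(file system name invented):

    mmlsdisk scratch -e          # list only disks that are not up and ready
    mmchdisk scratch start -a    # bring recovered disks back online first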
Going for
Looking for feedback/strategies on setting up several GPFS servers with local
SAS disks. They would all be part of the same file system. The systems are all
similar in configuration: 70 4TB drives each.
Options I’m considering:
- Create RAID arrays of the disks on each server (worried about the RAID
rebuild times)
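A rough sketch of what that RAID option could look like on one box, purely
illustrative (seven 8+2 RAID6 groups would consume all 70 drives; device and
server names invented):

    # Each hardware RAID6 volume is exported as one NSD;
    # one failure group per server
    %nsd: device=/dev/mapper/r6vol0 nsd=srv1_r6_0 servers=server1 usage=dataAndMetadata failureGroup=10
    %nsd: device=/dev/mapper/r6vol1 nsd=srv1_r6_1 servers=server1 usage=dataAndMetadata failureGroup=10
    # ...and so on for the remaining arrays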