@JAB
Same here: passing the same LVM LVs through to multiple KVM instances works a
treat for testing.
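For reference, a sketch of that kind of passthrough with libvirt; the VM names (nsd1, nsd2) and the volume group/LV path are hypothetical, and the flags assume a reasonably recent virsh:

```shell
# Attach the same LV to two guests as a shared virtio disk.
# Host page cache is disabled so both guests see each other's writes.
virsh attach-disk nsd1 /dev/vg_gpfs/disk0 vdb \
      --targetbus virtio --mode shareable --cache none --config
virsh attach-disk nsd2 /dev/vg_gpfs/disk0 vdb \
      --targetbus virtio --mode shareable --cache none --config
```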
-- Lauz
On 13 June 2016 22:05:20 EEST, Jonathan Buzzard wrote:
>On 13/06/16 18:53, Marc A Kaplan wrote:
>> How do you set the size of a ZFS file that is simulating a
To specify the size of the disks GPFS uses, one can use zvols. One can then
turn on the ZFS setting sync=always to perform safe writes; since I'm using
SATA cards there is no BBU. In our testing, setting sync=always caused a
20%-30% decrease in overall write throughput.
I do not have numbers of
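For reference, the zvol side of that setup might look like the following; the pool and dataset names are hypothetical:

```shell
# Create a fixed-size zvol to act as a GPFS NSD; -V sets the volume size.
zfs create -V 200G tank/gpfs/nsd0
# Force synchronous semantics on every write (no BBU on the SATA HBAs).
zfs set sync=always tank/gpfs/nsd0
# The zvol then appears as a block device under /dev/zvol/.
ls -l /dev/zvol/tank/gpfs/nsd0
```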
Like Marc, I also have questions related to performance.
Assuming we let ZFS take care of the underlying software RAID, what
would be the difference between GPFS and Lustre, for instance, for the
"parallel serving" at scale part of the file system? What would keep
GPFS from performing or
How do you set the size of a ZFS file that is simulating a GPFS disk? How
do you "tell" GPFS about that?
How efficient is this layering, compared to just giving GPFS direct access
to the same kind of LUNs that ZFS is using?
Hmmm... to partially answer my question, I do something similar, but
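On the "how do you tell GPFS" part: GPFS learns about NSDs from a stanza file passed to mmcrnsd. A minimal sketch that builds such a file; the NSD name, device path, and server names here are hypothetical:

```python
def nsd_stanza(name, device, servers, usage="dataAndMetadata", failure_group=1):
    """Build one NSD stanza for an mmcrnsd input file."""
    return (
        "%nsd:\n"
        f"  nsd={name}\n"
        f"  device={device}\n"
        f"  servers={','.join(servers)}\n"
        f"  usage={usage}\n"
        f"  failureGroup={failure_group}\n"
    )

# A zvol shows up as /dev/zdN on Linux; two NSD servers for redundancy.
stanza = nsd_stanza("nsd0", "/dev/zd0", ["nsdserver1", "nsdserver2"])
print(stanza)
```

The resulting text would be written to a file and passed to GPFS with `mmcrnsd -F stanzas.txt`.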
Jaime,
See
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_nsddevices.htm.
An example I have for adding /dev/nvme* devices:
* GPFS doesn't know that /dev/nvme* are valid block devices; use a user
exit script to let it know about them
cp
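For anyone following along, that user exit lives at /var/mmfs/etc/nsddevices (GPFS ships a sample under /usr/lpp/mmfs/samples/); the knowledge center page linked above describes it. A sketch of the relevant part, assuming the nvme namespaces follow the usual /dev/nvmeXn1 naming:

```shell
# /var/mmfs/etc/nsddevices (sketch): tell GPFS that the nvme namespaces
# are usable block devices. Output format is "<device> <driver type>".
for dev in /dev/nvme*n1; do
    [ -b "$dev" ] && echo "${dev##/dev/} generic"
done
# return 0 => GPFS uses only the devices listed above;
# return nonzero => GPFS also runs its built-in discovery.
return 0
```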
hi chris,
do you have any form of HA for the zfs block devices/JBOD (e.g. when an NSD
server reboots/breaks/...)? or do you rely on replication within GPFS?
stijn
On 06/13/2016 06:19 PM, Hoffman, Christopher P wrote:
> Hi Jaime,
>
> What in particular would you like explained more? I'd be more than
Hi Jaime,
What in particular would you like explained more? I'd be more than happy to
discuss things further.
Chris
From: gpfsug-discuss-boun...@spectrumscale.org
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto