Re: [gpfsug-discuss] GPFS on ZFS?

2016-06-14 Thread Yuri L Volobuev

GPFS proper (as opposed to GNR) isn't particularly picky about block
devices.  Any block device that GPFS can see, with help from an nsddevices
user exit if necessary, is fair game, for those willing to blaze new
trails.  This applies to "real" devices, e.g. disk partitions or hardware
RAID LUNs, and "virtual" ones, like software RAID devices.  The device has
to be capable of accepting IO requests of GPFS block size, but aside from
that, the Linux kernel does a pretty good job of abstracting the realities
of the low-level implementation from the higher-level block device API.  The
basic problem with software RAID approaches is the lack of efficient HA.
Since a given device is only visible to one node, if a node goes down, it
takes the NSDs with it (as opposed to the more traditional twin-tailed disk
model, where another NSD server can take over).  So one would have to rely
on GPFS data/metadata replication to get HA, and that is costly, in terms of
disk utilization efficiency and data write cost.  This is still an attractive
model for some use cases, but it's not quite a one-to-one replacement for
something like GNR for general use.
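
As a rough illustration of that replication-based approach, here is a
minimal sketch (node names, device paths, the stanza file path, and the file
system name are all hypothetical) of two singly-attached software RAID
devices placed in different failure groups so GPFS keeps one replica on each
server:

# Hypothetical NSD stanza file: one local md device per NSD server, each in
# its own failure group so the two replicas land on different servers.
cat > /tmp/nsd.stanza <<'EOF'
%nsd: nsd=nsd01_md0 device=/dev/md0 servers=nsd01 usage=dataAndMetadata failureGroup=1
%nsd: nsd=nsd02_md0 device=/dev/md0 servers=nsd02 usage=dataAndMetadata failureGroup=2
EOF

mmcrnsd -F /tmp/nsd.stanza

# Two data and two metadata replicas, so losing one node (and the NSDs only
# it can see) does not take the file system offline.
mmcrfs gpfs1 -F /tmp/nsd.stanza -m 2 -M 2 -r 2 -R 2 -T /gpfs1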

yuri



From:   "Jaime Pinto" <pi...@scinet.utoronto.ca>
To: "gpfsug main discussion list"
<gpfsug-discuss@spectrumscale.org>,
Date:   06/13/2016 09:11 AM
Subject:Re: [gpfsug-discuss] GPFS on ZFS?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



I just came across this presentation on "GPFS with underlying ZFS
block devices", by Christopher Hoffman, Los Alamos National Lab,
although some of the
implementation remains obscure.

http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf

It would be great to have more details, in particular the possibility
of straight use of GPFS on ZFS, instead of the 'archive' use case as
described in the presentation.

Thanks
Jaime




Quoting "Jaime Pinto" <pi...@scinet.utoronto.ca>:

> Since we can not get GNR outside ESS/GSS appliances, is anybody using
> ZFS for software raid on commodity storage?
>
> Thanks
> Jaime
>
>




  
   TELL US ABOUT YOUR SUCCESS STORIES
  http://www.scinethpc.ca/testimonials
  
---
Jaime Pinto
SciNet HPC Consortium  - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 416-978-2755
C: 416-505-1477


This message was sent using IMP at SciNet Consortium, University of
Toronto.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS on ZFS! ... ?

2016-06-13 Thread Laurence Horrocks-Barlow
@JAB

Same here: passing the same LVM LVs through to multiple KVM instances works a
treat for testing.
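
If it helps anyone set up a similar test rig, here is a minimal sketch
(volume group, LV, and guest names are hypothetical) of carving out an LV
and handing it to a KVM guest as an extra disk:

# Create a test LV on an SSD-backed volume group (names are hypothetical).
lvcreate -L 100G -n gpfs_test_nsd1 vg_ssd

# Attach it to a running KVM guest as an additional virtio disk.
virsh attach-disk gpfs-node1 /dev/vg_ssd/gpfs_test_nsd1 vdb --persistent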

-- Lauz

On 13 June 2016 22:05:20 EEST, Jonathan Buzzard  wrote:
>On 13/06/16 18:53, Marc A Kaplan wrote:
>> How do you set the size of a ZFS file that is simulating a GPFS disk?
>> How do you "tell" GPFS about that?
>>
>> How efficient is this layering, compared to just giving GPFS direct
>> access to the same kind of LUNs that ZFS is using?
>>
>> Hmmm... to partially answer my question, I do something similar, but
>> strictly for testing non-performance critical GPFS functions.
>> On any file system one can:
>>
>>   dd if=/dev/zero of=/fakedisks/d3 count=1 bs=1M seek=3000  # create a
>> fake 3GB disk for GPFS
>>
>> Then use a GPFS nsd configuration record like this:
>>
>> %nsd: nsd=d3  device=/fakedisks/d3  usage=dataOnly pool=xtra
>>   servers=bog-xxx
>>
>> Which starts out as sparse and the filesystem will dynamically "grow" as
>> GPFS writes to it...
>>
>> But I have no idea how well this will work for a critical "production"
>> system...
>>
>
>For "testing" purposes I just create a logical volume and map it
>through 
>to my bunch of GPFS KVM instances as a disk. Works a treat and SSD's
>are 
>silly money these days so for testing performance is just fine. There 
>was a 960GB SanDisk on offer for 160GBP last month.
>
>JAB.
>
>-- 
>Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk
>Fife, United Kingdom.
>___
>gpfsug-discuss mailing list
>gpfsug-discuss at spectrumscale.org
>http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS on ZFS! ... ?

2016-06-13 Thread Hoffman, Christopher P
To specify the size of the disks GPFS uses, one can use zvols. Then one can turn
on the ZFS setting sync=always to perform safe writes; since I'm using SATA
cards there is no BBU. In our testing, turning on sync=always creates a 20%-30%
decrease in overall write throughput.

I do not have numbers for this setup vs. hardware RAID6.
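
For anyone wanting to try the same thing, a minimal sketch (pool name, zvol
name, size, and volblocksize value are hypothetical) of creating a zvol for
GPFS and forcing synchronous writes:

# Fixed-size zvol to present to GPFS as an NSD; pick a volblocksize that
# lines up with the GPFS file system block size (values are hypothetical).
zfs create -V 10T -o volblocksize=128k tank/gpfs_nsd01

# Commit every write to stable storage before acknowledging it, since there
# is no battery-backed cache in front of the SATA controllers.
zfs set sync=always tank/gpfs_nsd01

# On Linux the zvol shows up as /dev/zdN, with a /dev/zvol/... symlink.
ls -l /dev/zvol/tank/gpfs_nsd01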

Thanks,
Chris

From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto 
[pi...@scinet.utoronto.ca]
Sent: Monday, June 13, 2016 12:02
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] GPFS on ZFS! ... ?

Like Marc, I also have questions related to performance.

Assuming we let ZFS take care of the underlying software raid, what
would be the difference between GPFS and Lustre, for instance, for the
"parallel serving" at scale part of the file system? What would keep
GPFS from performing or functioning just as well?

Thanks
Jaime

Quoting "Marc A Kaplan" <makap...@us.ibm.com>:

> How do you set the size of a ZFS file that is simulating a GPFS disk?  How
> do you "tell" GPFS about that?
>
> How efficient is this layering, compared to just giving GPFS direct access
> to the same kind of LUNs that ZFS is using?
>
> Hmmm... to partially answer my question, I do something similar, but
> strictly for testing non-performance critical GPFS functions.
> On any file system one can:
>
>   dd if=/dev/zero of=/fakedisks/d3 count=1 bs=1M seek=3000  # create a
> fake 3GB disk for GPFS
>
> Then use a GPFS nsd configuration record like this:
>
> %nsd: nsd=d3  device=/fakedisks/d3  usage=dataOnly pool=xtra
> servers=bog-xxx
>
> Which starts out as sparse and the filesystem will dynamically "grow" as
> GPFS writes to it...
>
> But I have no idea how well this will work for a critical "production"
> system...
>
> tx, marc kaplan.
>
>
>
> From:   "Allen, Benjamin S." <bsal...@alcf.anl.gov>
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Date:   06/13/2016 12:34 PM
> Subject:Re: [gpfsug-discuss] GPFS on ZFS?
> Sent by:gpfsug-discuss-boun...@spectrumscale.org
>
>
>
> Jaime,
>
> See
> https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_nsddevices.htm
> An example I have for adding /dev/nvme* devices:
>
> * GPFS doesn't know that /dev/nvme* are valid block devices; use a
> user exit script to let it know about them.
>
> cp /usr/lpp/mmfs/samples/nsddevices.sample /var/mmfs/etc/nsddevices
>
> * Edit /var/mmfs/etc/nsddevices, and add to linux section:
>
> if [[ $osName = Linux ]]
> then
>   : # Add function to discover disks in the Linux environment.
>   for dev in $( cat /proc/partitions | grep nvme | awk '{print $4}' )
>   do
>     echo $dev generic
>   done
> fi
>
> * Copy edited nsddevices to the rest of the nodes at the same path
> for host in n01 n02 n03 n04; do
>   scp /var/mmfs/etc/nsddevices ${host}:/var/mmfs/etc/nsddevices
> done
>
>
> Ben
>
>> On Jun 13, 2016, at 11:26 AM, Jaime Pinto <pi...@scinet.utoronto.ca>
> wrote:
>>
>> Hi Chris
>>
>> As I understand, GPFS likes to 'see' the block devices, even on a
> hardware raid solution such as DDN's.
>>
>> How is that accomplished when you use ZFS for software raid?
>> On page 4, I see this info, and I'm trying to interpret it:
>>
>> General Configuration
>> ...
>> * zvols
>> * nsddevices
>>  - echo "zdX generic"
>>
>>
>> Thanks
>> Jaime
>>
>> Quoting "Hoffman, Christopher P" <cphof...@lanl.gov>:
>>
>>> Hi Jaime,
>>>
>>> What in particular would you like explained more? I'd be more than
> happy to discuss things further.
>>>
>>> Chris
>>> 
>>> From: gpfsug-discuss-boun...@spectrumscale.org
> [gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto
> [pi...@scinet.utoronto.ca]
>>> Sent: Monday, June 13, 2016 10:11
>>> To: gpfsug main discussion list
>>> Subject: Re: [gpfsug-discuss] GPFS on ZFS?
>>>
>>> I just came across this presentation on "GPFS with underlying ZFS
>>> block devices", by Christopher Hoffman, Los Alamos National Lab,
>>> although some of the
>>> implementation remains obscure.
>>>
>>> http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf
>>>
>>> It would be great to have more details, in particular the possibility
>>> of straight use

Re: [gpfsug-discuss] GPFS on ZFS! ... ?

2016-06-13 Thread Jaime Pinto

Like Marc, I also have questions related to performance.

Assuming we let ZFS take care of the underlying software raid, what
would be the difference between GPFS and Lustre, for instance, for the
"parallel serving" at scale part of the file system? What would keep
GPFS from performing or functioning just as well?


Thanks
Jaime

Quoting "Marc A Kaplan" <makap...@us.ibm.com>:


How do you set the size of a ZFS file that is simulating a GPFS disk?  How
do "tell" GPFS about that?

How efficient is this layering, compared to just giving GPFS direct access
to the same kind of LUNs that ZFS is using?

Hmmm... to partially answer my question, I do something similar, but
strictly for testing non-performance critical GPFS functions.
On any file system one can:

  dd if=/dev/zero of=/fakedisks/d3 count=1 bs=1M seek=3000  # create a
fake 3GB disk for GPFS

Then use a GPFS nsd configuration record like this:

%nsd: nsd=d3  device=/fakedisks/d3  usage=dataOnly pool=xtra
servers=bog-xxx

Which starts out as sparse and the filesystem will dynamically "grow" as
GPFS writes to it...

But I have no idea how well this will work for a critical "production"
system...

tx, marc kaplan.



From:   "Allen, Benjamin S." <bsal...@alcf.anl.gov>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   06/13/2016 12:34 PM
Subject:Re: [gpfsug-discuss] GPFS on ZFS?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Jaime,

See
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_nsddevices.htm
An example I have for adding /dev/nvme* devices:

* GPFS doesn't know that /dev/nvme* are valid block devices; use a
user exit script to let it know about them.

cp /usr/lpp/mmfs/samples/nsddevices.sample /var/mmfs/etc/nsddevices

* Edit /var/mmfs/etc/nsddevices, and add to linux section:

if [[ $osName = Linux ]]
then
  : # Add function to discover disks in the Linux environment.
  for dev in $( cat /proc/partitions | grep nvme | awk '{print $4}' )
  do
    echo $dev generic
  done
fi

* Copy edited nsddevices to the rest of the nodes at the same path
for host in n01 n02 n03 n04; do
  scp /var/mmfs/etc/nsddevices ${host}:/var/mmfs/etc/nsddevices
done


Ben


On Jun 13, 2016, at 11:26 AM, Jaime Pinto <pi...@scinet.utoronto.ca>

wrote:


Hi Chris

As I understand, GPFS likes to 'see' the block devices, even on a

hardware raid solution such as DDN's.


How is that accomplished when you use ZFS for software raid?
On page 4, I see this info, and I'm trying to interpret it:

General Configuration
...
* zvols
* nsddevices
 - echo "zdX generic"


Thanks
Jaime

Quoting "Hoffman, Christopher P" <cphof...@lanl.gov>:


Hi Jaime,

What in particular would you like explained more? I'd be more than

happy to discuss things further.


Chris

From: gpfsug-discuss-boun...@spectrumscale.org

[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto
[pi...@scinet.utoronto.ca]

Sent: Monday, June 13, 2016 10:11
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS on ZFS?

I just came across this presentation on "GPFS with underlying ZFS
block devices", by Christopher Hoffman, Los Alamos National Lab,
although some of the
implementation remains obscure.

http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf

It would be great to have more details, in particular the possibility
of straight use of GPFS on ZFS, instead of the 'archive' use case as
described in the presentation.

Thanks
Jaime




Quoting "Jaime Pinto" <pi...@scinet.utoronto.ca>:


Since we can not get GNR outside ESS/GSS appliances, is anybody using
ZFS for software raid on commodity storage?

Thanks
Jaime







 
  TELL US ABOUT YOUR SUCCESS STORIES
 http://www.scinethpc.ca/testimonials
 
---
Jaime Pinto
SciNet HPC Consortium  - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 416-978-2755
C: 416-505-1477


This message was sent using IMP at SciNet Consortium, University of

Toronto.


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss









 TELL US ABOUT YOUR SUCCESS STORIES
http://www.scinethpc.ca/testimonials

---
Jaime Pinto
SciNet HPC Consortium  - Comp

Re: [gpfsug-discuss] GPFS on ZFS! ... ?

2016-06-13 Thread Marc A Kaplan
How do you set the size of a ZFS file that is simulating a GPFS disk?  How 
do "tell" GPFS about that?

How efficient is this layering, compared to just giving GPFS direct access 
to the same kind of LUNs that ZFS is using?

Hmmm... to partially answer my question, I do something similar, but 
strictly for testing non-performance critical GPFS functions.
On any file system one can:

  dd if=/dev/zero of=/fakedisks/d3 count=1 bs=1M seek=3000  # create a 
fake 3GB disk for GPFS

Then use a GPFS nsd configuration record like this:

%nsd: nsd=d3  device=/fakedisks/d3  usage=dataOnly pool=xtra 
servers=bog-xxx

Which starts out as sparse and the filesystem will dynamically "grow" as 
GPFS writes to it...

But I have no idea how well this will work for a critical "production" 
system...
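
To complete the recipe, a minimal sketch (the stanza file path and file
system name are hypothetical; the %nsd record is the one above) of
registering the fake disk and adding it to an existing file system:

# Register the fake disk as an NSD using the stanza file that contains the
# %nsd record above (path is hypothetical).
mmcrnsd -F /tmp/fakedisk.stanza

# Add it to an existing file system; it joins the "xtra" pool named in the
# stanza.
mmadddisk gpfs1 -F /tmp/fakedisk.stanza

# Verify GPFS sees the new NSD and its local device path.
mmlsnsd -X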

tx, marc kaplan.



From:   "Allen, Benjamin S." <bsal...@alcf.anl.gov>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   06/13/2016 12:34 PM
Subject:Re: [gpfsug-discuss] GPFS on ZFS?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Jaime,

See 
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_nsddevices.htm
An example I have for adding /dev/nvme* devices:

* GPFS doesn't know that /dev/nvme* are valid block devices; use a
user exit script to let it know about them.

cp /usr/lpp/mmfs/samples/nsddevices.sample /var/mmfs/etc/nsddevices

* Edit /var/mmfs/etc/nsddevices, and add to linux section:

if [[ $osName = Linux ]]
then
  : # Add function to discover disks in the Linux environment.
  for dev in $( cat /proc/partitions | grep nvme | awk '{print $4}' )
  do
    echo $dev generic
  done
fi

* Copy edited nsddevices to the rest of the nodes at the same path
for host in n01 n02 n03 n04; do
  scp /var/mmfs/etc/nsddevices ${host}:/var/mmfs/etc/nsddevices
done


Ben

> On Jun 13, 2016, at 11:26 AM, Jaime Pinto <pi...@scinet.utoronto.ca> 
wrote:
> 
> Hi Chris
> 
> As I understand, GPFS likes to 'see' the block devices, even on a 
hardware raid solution such as DDN's.
> 
> How is that accomplished when you use ZFS for software raid?
> On page 4 I see this info, and I'm trying to interpret it:
> 
> General Configuration
> ...
> * zvols
> * nsddevices
>  - echo "zdX generic"
> 
> 
> Thanks
> Jaime
> 
> Quoting "Hoffman, Christopher P" <cphof...@lanl.gov>:
> 
>> Hi Jaime,
>> 
>> What in particular would you like explained more? I'd be more than 
happy to discuss things further.
>> 
>> Chris
>> 
>> From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto 
[pi...@scinet.utoronto.ca]
>> Sent: Monday, June 13, 2016 10:11
>> To: gpfsug main discussion list
>> Subject: Re: [gpfsug-discuss] GPFS on ZFS?
>> 
>> I just came across this presentation on "GPFS with underlying ZFS
>> block devices", by Christopher Hoffman, Los Alamos National Lab,
>> although some of the
>> implementation remains obscure.
>> 
>> http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf
>> 
>> It would be great to have more details, in particular the possibility
>> of straight use of GPFS on ZFS, instead of the 'archive' use case as
>> described in the presentation.
>> 
>> Thanks
>> Jaime
>> 
>> 
>> 
>> 
>> Quoting "Jaime Pinto" <pi...@scinet.utoronto.ca>:
>> 
>>> Since we can not get GNR outside ESS/GSS appliances, is anybody using
>>> ZFS for software raid on commodity storage?
>>> 
>>> Thanks
>>> Jaime
>>> 
>>> 
>> 
>> 
>> 
>> 
>>  
>>   TELL US ABOUT YOUR SUCCESS STORIES
>>  http://www.scinethpc.ca/testimonials
>>  
>> ---
>> Jaime Pinto
>> SciNet HPC Consortium  - Compute/Calcul Canada
>> www.scinet.utoronto.ca - www.computecanada.org
>> University of Toronto
>> 256 McCaul Street, Room 235
>> Toronto, ON, M5T1W5
>> P: 416-978-2755
>> C: 416-505-1477
>> 
>> 
>> This message was sent using IMP at SciNet Consortium, University of 
Toronto.
>> 
>> ___
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> _

Re: [gpfsug-discuss] GPFS on ZFS?

2016-06-13 Thread Allen, Benjamin S.
Jaime,

See 
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_nsddevices.htm.
An example I have for adding /dev/nvme* devices:

* GPFS doesn't know that /dev/nvme* are valid block devices; use a user
exit script to let it know about them.

cp /usr/lpp/mmfs/samples/nsddevices.sample /var/mmfs/etc/nsddevices

* Edit /var/mmfs/etc/nsddevices, and add to linux section:

if [[ $osName = Linux ]]
then
  : # Add function to discover disks in the Linux environment.
  for dev in $( cat /proc/partitions | grep nvme | awk '{print $4}' )
  do
    echo $dev generic
  done
fi

* Copy edited nsddevices to the rest of the nodes at the same path
for host in n01 n02 n03 n04; do
  scp /var/mmfs/etc/nsddevices ${host}:/var/mmfs/etc/nsddevices
done
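
For the ZFS setup discussed in this thread, the same user exit can be
adapted; a minimal sketch (assuming the zvols show up as /dev/zd* on the NSD
servers, per the "zdX generic" hint in the LANL slides) of the Linux branch:

if [[ $osName = Linux ]]
then
  # Report ZFS zvol block devices (zd0, zd16, ...) to GPFS as generic disks.
  for dev in $( cat /proc/partitions | awk '$4 ~ /^zd/ {print $4}' )
  do
    echo $dev generic
  done
fi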


Ben

> On Jun 13, 2016, at 11:26 AM, Jaime Pinto <pi...@scinet.utoronto.ca> wrote:
> 
> Hi Chris
> 
> As I understand, GPFS likes to 'see' the block devices, even on a hardware 
> raid solution such as DDN's.
> 
> How is that accomplished when you use ZFS for software raid?
> On page 4 I see this info, and I'm trying to interpret it:
> 
> General Configuration
> ...
> * zvols
> * nsddevices
>  - echo "zdX generic"
> 
> 
> Thanks
> Jaime
> 
> Quoting "Hoffman, Christopher P" <cphof...@lanl.gov>:
> 
>> Hi Jaime,
>> 
>> What in particular would you like explained more? I'd be more than  happy to 
>> discuss things further.
>> 
>> Chris
>> 
>> From: gpfsug-discuss-boun...@spectrumscale.org  
>> [gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto  
>> [pi...@scinet.utoronto.ca]
>> Sent: Monday, June 13, 2016 10:11
>> To: gpfsug main discussion list
>> Subject: Re: [gpfsug-discuss] GPFS on ZFS?
>> 
>> I just came across this presentation on "GPFS with underlying ZFS
>> block devices", by Christopher Hoffman, Los Alamos National Lab,
>> although some of the
>> implementation remains obscure.
>> 
>> http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf
>> 
>> It would be great to have more details, in particular the possibility
>> of straight use of GPFS on ZFS, instead of the 'archive' use case as
>> described in the presentation.
>> 
>> Thanks
>> Jaime
>> 
>> 
>> 
>> 
>> Quoting "Jaime Pinto" <pi...@scinet.utoronto.ca>:
>> 
>>> Since we can not get GNR outside ESS/GSS appliances, is anybody using
>>> ZFS for software raid on commodity storage?
>>> 
>>> Thanks
>>> Jaime
>>> 
>>> 
>> 
>> 
>> 
>> 
>>  
>>   TELL US ABOUT YOUR SUCCESS STORIES
>>  http://www.scinethpc.ca/testimonials
>>  
>> ---
>> Jaime Pinto
>> SciNet HPC Consortium  - Compute/Calcul Canada
>> www.scinet.utoronto.ca - www.computecanada.org
>> University of Toronto
>> 256 McCaul Street, Room 235
>> Toronto, ON, M5T1W5
>> P: 416-978-2755
>> C: 416-505-1477
>> 
>> 
>> This message was sent using IMP at SciNet Consortium, University of Toronto.
>> 
>> ___
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> ___
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> 
> 
> 
> 
> 
> 
> 
> 
>  TELL US ABOUT YOUR SUCCESS STORIES
> http://www.scinethpc.ca/testimonials
> 
> ---
> Jaime Pinto
> SciNet HPC Consortium  - Compute/Calcul Canada
> www.scinet.utoronto.ca - www.computecanada.org
> University of Toronto
> 256 McCaul Street, Room 235
> Toronto, ON, M5T1W5
> P: 416-978-2755
> C: 416-505-1477
> 
> 
> This message was sent using IMP at SciNet Consortium, University of Toronto.
> 
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS on ZFS?

2016-06-13 Thread Stijn De Weirdt
hi chris,

do you have any form of HA for the ZFS block devices/JBOD (e.g. when an NSD
server reboots/breaks/...)? or do you rely on replication within GPFS?


stijn

On 06/13/2016 06:19 PM, Hoffman, Christopher P wrote:
> Hi Jaime,
> 
> What in particular would you like explained more? I'd be more than happy to 
> discuss things further.
> 
> Chris
> 
> From: gpfsug-discuss-boun...@spectrumscale.org 
> [gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto 
> [pi...@scinet.utoronto.ca]
> Sent: Monday, June 13, 2016 10:11
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] GPFS on ZFS?
> 
> I just came across this presentation on "GPFS with underlying ZFS
> block devices", by Christopher Hoffman, Los Alamos National Lab,
> although some of the
> implementation remains obscure.
> 
> http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf
> 
> It would be great to have more details, in particular the possibility
> of straight use of GPFS on ZFS, instead of the 'archive' use case as
> described in the presentation.
> 
> Thanks
> Jaime
> 
> 
> 
> 
> Quoting "Jaime Pinto" <pi...@scinet.utoronto.ca>:
> 
>> Since we can not get GNR outside ESS/GSS appliances, is anybody using
>> ZFS for software raid on commodity storage?
>>
>> Thanks
>> Jaime
>>
>>
> 
> 
> 
> 
>   
>TELL US ABOUT YOUR SUCCESS STORIES
>   http://www.scinethpc.ca/testimonials
>   
> ---
> Jaime Pinto
> SciNet HPC Consortium  - Compute/Calcul Canada
> www.scinet.utoronto.ca - www.computecanada.org
> University of Toronto
> 256 McCaul Street, Room 235
> Toronto, ON, M5T1W5
> P: 416-978-2755
> C: 416-505-1477
> 
> 
> This message was sent using IMP at SciNet Consortium, University of Toronto.
> 
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> 
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS on ZFS?

2016-06-13 Thread Hoffman, Christopher P
Hi Jaime,

What in particular would you like explained more? I'd be more than happy to 
discuss things further.

Chris

From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto 
[pi...@scinet.utoronto.ca]
Sent: Monday, June 13, 2016 10:11
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS on ZFS?

I just came across this presentation on "GPFS with underlying ZFS
block devices", by Christopher Hoffman, Los Alamos National Lab,
although some of the
implementation remains obscure.

http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf

It would be great to have more details, in particular the possibility
of straight use of GPFS on ZFS, instead of the 'archive' use case as
described in the presentation.

Thanks
Jaime




Quoting "Jaime Pinto" <pi...@scinet.utoronto.ca>:

> Since we can not get GNR outside ESS/GSS appliances, is anybody using
> ZFS for software raid on commodity storage?
>
> Thanks
> Jaime
>
>




  
   TELL US ABOUT YOUR SUCCESS STORIES
  http://www.scinethpc.ca/testimonials
  
---
Jaime Pinto
SciNet HPC Consortium  - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 416-978-2755
C: 416-505-1477


This message was sent using IMP at SciNet Consortium, University of Toronto.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] GPFS on ZFS?

2016-04-18 Thread Jaime Pinto
Since we cannot get GNR outside of ESS/GSS appliances, is anybody using
ZFS for software RAID on commodity storage?


Thanks
Jaime


---
Jaime Pinto
SciNet HPC Consortium  - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 416-978-2755
C: 416-505-1477


This message was sent using IMP at SciNet Consortium, University of Toronto.


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss