Re: [gpfsug-discuss] GPFS on ZFS! ... ?

2016-06-13 Thread Laurence Horrocks-Barlow
@JAB

Same here: passing the same LVM LVs through to multiple KVM instances works a
treat for testing.
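
(A rough sketch of one way to do that passthrough with libvirt/virsh; the guest
names, volume group and target device below are illustrative, not from the
thread:)

  # attach the same LVM LV to each GPFS test guest as a shared virtio disk
  for guest in gpfs-node1 gpfs-node2 gpfs-node3; do
    virsh attach-disk $guest /dev/vg_gpfs/nsd1 vdb \
      --mode shareable --cache none --persistent
  done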

-- Lauz

On 13 June 2016 22:05:20 EEST, Jonathan Buzzard  wrote:
>On 13/06/16 18:53, Marc A Kaplan wrote:
>> How do you set the size of a ZFS file that is simulating a GPFS disk?
>>   How do you "tell" GPFS about that?
>>
>> How efficient is this layering, compared to just giving GPFS direct
>> access to the same kind of LUNs that ZFS is using?
>>
>> Hmmm... to partially answer my question, I do something similar, but
>> strictly for testing non-performance critical GPFS functions.
>> On any file system one can:
>>
>>dd if=/dev/zero of=/fakedisks/d3 count=1 bs=1M seek=3000  # create
>a
>> fake 3GB disk for GPFS
>>
>> Then use a GPFS nsd configuration record like this:
>>
>> %nsd: nsd=d3  device=/fakedisks/d3  usage=dataOnly pool=xtra
>>   servers=bog-xxx
>>
>> Which starts out as sparse and the filesystem will dynamically "grow"
>as
>> GPFS writes to it...
>>
>> But I have no idea how well this will work for a critical
>"production"
>> system...
>>
>
>For "testing" purposes I just create a logical volume and map it
>through 
>to my bunch of GPFS KVM instances as a disk. Works a treat and SSD's
>are 
>silly money these days so for testing performance is just fine. There 
>was a 960GB SanDisk on offer for 160GBP last month.
>
>JAB.
>
>-- 
>Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk
>Fife, United Kingdom.
>___
>gpfsug-discuss mailing list
>gpfsug-discuss at spectrumscale.org
>http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS on ZFS! ... ?

2016-06-13 Thread Hoffman, Christopher P
To specify the size of the disks GPFS uses, one can use zvols. Then one can turn
on the ZFS setting sync=always to perform safe writes; since I'm using SATA
cards there is no BBU. In our testing, turning on sync=always causes a 20%-30%
decrease in overall write throughput.

I do not have numbers for this setup vs. hardware RAID6.
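
(A minimal sketch of the zvol approach described above; the pool name "tank",
the 2T size and the volblocksize are illustrative, not taken from the LANL
setup:)

  # create a zvol to act as one GPFS NSD; it shows up as a /dev/zd* block device
  zfs create -V 2T -o volblocksize=128k tank/nsd01
  # honour synchronous write semantics, since the SATA HBAs have no BBU
  zfs set sync=always tank/nsd01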

Thanks,
Chris

From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto 
[pi...@scinet.utoronto.ca]
Sent: Monday, June 13, 2016 12:02
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] GPFS on ZFS! ... ?

Like Marc, I also have questions related to performance.

Assuming we let ZFS take care of the underlying software RAID, what
would be the difference between GPFS and Lustre, for instance, for the
"parallel serving" at-scale part of the file system? What would keep
GPFS from performing or functioning just as well?

Thanks
Jaime

Quoting "Marc A Kaplan" :

> How do you set the size of a ZFS file that is simulating a GPFS disk?  How
> do "tell" GPFS about that?
>
> How efficient is this layering, compared to just giving GPFS direct access
> to the same kind of LUNs that ZFS is using?
>
> Hmmm... to partially answer my question, I do something similar, but
> strictly for testing non-performance critical GPFS functions.
> On any file system one can:
>
>   dd if=/dev/zero of=/fakedisks/d3 count=1 bs=1M seek=3000  # create a
> fake 3GB disk for GPFS
>
> Then use a GPFS nsd configuration record like this:
>
> %nsd: nsd=d3  device=/fakedisks/d3  usage=dataOnly pool=xtra
> servers=bog-xxx
>
> Which starts out as sparse and the filesystem will dynamically "grow" as
> GPFS writes to it...
>
> But I have no idea how well this will work for a critical "production"
> system...
>
> tx, marc kaplan.
>
>
>
> From:   "Allen, Benjamin S." 
> To: gpfsug main discussion list 
> Date:   06/13/2016 12:34 PM
> Subject:Re: [gpfsug-discuss] GPFS on ZFS?
> Sent by:gpfsug-discuss-boun...@spectrumscale.org
>
>
>
> Jaime,
>
> See
> https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_nsddevices.htm
> . An example I have for adding /dev/nvme* devices:
>
> * GPFS doesn't know that /dev/nvme* devices are valid block devices, so use a
> user exit script to let it know about them
>
> cp /usr/lpp/mmfs/samples/nsddevices.sample /var/mmfs/etc/nsddevices
>
> * Edit /var/mmfs/etc/nsddevices, and add to linux section:
>
> if [[ $osName = Linux ]]
> then
>   : # Add function to discover disks in the Linux environment.
>   for dev in $( cat /proc/partitions | grep nvme | awk '{print $4}' )
>   do
>     echo $dev generic
>   done
> fi
>
> * Copy edited nsddevices to the rest of the nodes at the same path
> for host in n01 n02 n03 n04; do
>   scp /var/mmfs/etc/nsddevices ${host}:/var/mmfs/etc/nsddevices
> done
>
>
> Ben
>
>> On Jun 13, 2016, at 11:26 AM, Jaime Pinto 
> wrote:
>>
>> Hi Chris
>>
>> As I understand, GPFS likes to 'see' the block devices, even on a
> hardware raid solution such as DDN's.
>>
>> How is that accomplished when you use ZFS for software raid?
>> On page 4, I see this info, and I'm trying to interpret it:
>>
>> General Configuration
>> ...
>> * zvols
>> * nsddevices
>>  - echo "zdX generic"
>>
>>
>> Thanks
>> Jaime
>>
>> Quoting "Hoffman, Christopher P" :
>>
>>> Hi Jaime,
>>>
>>> What in particular would you like explained more? I'd be more than
> happy to discuss things further.
>>>
>>> Chris
>>> 
>>> From: gpfsug-discuss-boun...@spectrumscale.org
> [gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto
> [pi...@scinet.utoronto.ca]
>>> Sent: Monday, June 13, 2016 10:11
>>> To: gpfsug main discussion list
>>> Subject: Re: [gpfsug-discuss] GPFS on ZFS?
>>>
>>> I just came across this presentation on "GPFS with underlying ZFS
>>> block devices", by Christopher Hoffman, Los Alamos National Lab,
>>> although some of the
>>> implementation remains obscure.
>>>
>>> http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf
>>>
>>> It would be great to have more details, in particular the possibility
>>> of straight use of GPFS on ZFS, instead of the 'archive' use case as
>>> described on the presentation.
>>>
>>> Thanks
>>> Jaime
>>>
>>>
>>>
>>>
>>> Quoting "Jaime Pinto" :
>>>
 Since we can not get GNR outside ESS/GSS appliances, is anybody using
 ZFS for software raid on commodity storage?

 Thanks
 Jaime


>>>
>>>
>>>
>>>
>>>  
>>>   TELL US ABOUT YOUR SUCCESS STORIES
>>>  http://www.scinethpc.ca/testimonials
>>>  
>>> ---
>>> Jaime Pinto
>>> SciNet HPC Consortium  - Compute/Calcul Canada
>>> 

Re: [gpfsug-discuss] GPFS on ZFS! ... ?

2016-06-13 Thread Jaime Pinto

Like Marc, I also have questions related to performance.

Assuming we let ZFS take care of the underlying software RAID, what
would be the difference between GPFS and Lustre, for instance, for the
"parallel serving" at-scale part of the file system? What would keep
GPFS from performing or functioning just as well?


Thanks
Jaime

Quoting "Marc A Kaplan" :


How do you set the size of a ZFS file that is simulating a GPFS disk?  How
do "tell" GPFS about that?

How efficient is this layering, compared to just giving GPFS direct access
to the same kind of LUNs that ZFS is using?

Hmmm... to partially answer my question, I do something similar, but
strictly for testing non-performance critical GPFS functions.
On any file system one can:

  dd if=/dev/zero of=/fakedisks/d3 count=1 bs=1M seek=3000  # create a
fake 3GB disk for GPFS

Then use a GPFS nsd configuration record like this:

%nsd: nsd=d3  device=/fakedisks/d3  usage=dataOnly pool=xtra
servers=bog-xxx

Which starts out as sparse and the filesystem will dynamically "grow" as
GPFS writes to it...

But I have no idea how well this will work for a critical "production"
system...

tx, marc kaplan.



From:   "Allen, Benjamin S." 
To: gpfsug main discussion list 
Date:   06/13/2016 12:34 PM
Subject:Re: [gpfsug-discuss] GPFS on ZFS?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Jaime,

See
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_nsddevices.htm
. An example I have for adding /dev/nvme* devices:

* GPFS doesn't know that /dev/nvme* devices are valid block devices, so use a
user exit script to let it know about them

cp /usr/lpp/mmfs/samples/nsddevices.sample /var/mmfs/etc/nsddevices

* Edit /var/mmfs/etc/nsddevices, and add to linux section:

if [[ $osName = Linux ]]
then
  : # Add function to discover disks in the Linux environment.
  for dev in $( cat /proc/partitions | grep nvme | awk '{print $4}' )
  do
    echo $dev generic
  done
fi

* Copy edited nsddevices to the rest of the nodes at the same path
for host in n01 n02 n03 n04; do
  scp /var/mmfs/etc/nsddevices ${host}:/var/mmfs/etc/nsddevices
done


Ben


On Jun 13, 2016, at 11:26 AM, Jaime Pinto 

wrote:


Hi Chris

As I understand, GPFS likes to 'see' the block devices, even on a

hardware raid solution such as DDN's.


How is that accomplished when you use ZFS for software raid?
On page 4, I see this info, and I'm trying to interpret it:

General Configuration
...
* zvols
* nsddevices
 - echo "zdX generic"


Thanks
Jaime

Quoting "Hoffman, Christopher P" :


Hi Jaime,

What in particular would you like explained more? I'd be more than

happy to discuss things further.


Chris

From: gpfsug-discuss-boun...@spectrumscale.org

[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto
[pi...@scinet.utoronto.ca]

Sent: Monday, June 13, 2016 10:11
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS on ZFS?

I just came across this presentation on "GPFS with underlying ZFS
block devices", by Christopher Hoffman, Los Alamos National Lab,
although some of the
implementation remains obscure.

http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf

It would be great to have more details, in particular the possibility
of straight use of GPFS on ZFS, instead of the 'archive' use case as
described on the presentation.

Thanks
Jaime




Quoting "Jaime Pinto" :


Since we can not get GNR outside ESS/GSS appliances, is anybody using
ZFS for software raid on commodity storage?

Thanks
Jaime







 
  TELL US ABOUT YOUR SUCCESS STORIES
 http://www.scinethpc.ca/testimonials
 
---
Jaime Pinto
SciNet HPC Consortium  - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 416-978-2755
C: 416-505-1477


This message was sent using IMP at SciNet Consortium, University of

Toronto.


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss









 TELL US ABOUT YOUR SUCCESS STORIES
http://www.scinethpc.ca/testimonials

---
Jaime Pinto
SciNet HPC Consortium  - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 

Re: [gpfsug-discuss] GPFS on ZFS! ... ?

2016-06-13 Thread Marc A Kaplan
How do you set the size of a ZFS file that is simulating a GPFS disk?  How 
do "tell" GPFS about that?

How efficient is this layering, compared to just giving GPFS direct access 
to the same kind of LUNs that ZFS is using?

Hmmm... to partially answer my question, I do something similar, but 
strictly for testing non-performance critical GPFS functions.
On any file system one can:

  dd if=/dev/zero of=/fakedisks/d3 count=1 bs=1M seek=3000  # create a 
fake 3GB disk for GPFS

Then use a GPFS nsd configuration record like this:

%nsd: nsd=d3  device=/fakedisks/d3  usage=dataOnly pool=xtra 
servers=bog-xxx

The file starts out sparse and will dynamically "grow" as 
GPFS writes to it...

But I have no idea how well this will work for a critical "production" 
system...
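
(For completeness, such a stanza file would normally be fed to mmcrnsd and then
to mmcrfs or mmadddisk; a minimal sketch, where /tmp/fake.stanza and the file
system name fs1 are hypothetical:)

  # register the fake disk as an NSD from the stanza file
  mmcrnsd -F /tmp/fake.stanza
  # add it to an existing file system (or use mmcrfs to create a new one)
  mmadddisk fs1 -F /tmp/fake.stanza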

tx, marc kaplan.



From:   "Allen, Benjamin S." 
To: gpfsug main discussion list 
Date:   06/13/2016 12:34 PM
Subject:Re: [gpfsug-discuss] GPFS on ZFS?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Jaime,

See 
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_nsddevices.htm
. An example I have for adding /dev/nvme* devices:

* GPFS doesn't know that /dev/nvme* devices are valid block devices, so use a
user exit script to let it know about them

cp /usr/lpp/mmfs/samples/nsddevices.sample /var/mmfs/etc/nsddevices

* Edit /var/mmfs/etc/nsddevices, and add to linux section:

if [[ $osName = Linux ]]
then
  : # Add function to discover disks in the Linux environment.
  for dev in $( cat /proc/partitions | grep nvme | awk '{print $4}' )
  do
    echo $dev generic
  done
fi

* Copy edited nsddevices to the rest of the nodes at the same path
for host in n01 n02 n03 n04; do
  scp /var/mmfs/etc/nsddevices ${host}:/var/mmfs/etc/nsddevices
done


Ben

> On Jun 13, 2016, at 11:26 AM, Jaime Pinto  
wrote:
> 
> Hi Chris
> 
> As I understand, GPFS likes to 'see' the block devices, even on a 
hardware raid solution such as DDN's.
> 
> How is that accomplished when you use ZFS for software raid?
> On page 4 I see this info, and I'm trying to interpret it:
> 
> General Configuration
> ...
> * zvols
> * nsddevices
>  - echo "zdX generic"
> 
> 
> Thanks
> Jaime
> 
> Quoting "Hoffman, Christopher P" :
> 
>> Hi Jaime,
>> 
>> What in particular would you like explained more? I'd be more than 
happy to discuss things further.
>> 
>> Chris
>> 
>> From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto 
[pi...@scinet.utoronto.ca]
>> Sent: Monday, June 13, 2016 10:11
>> To: gpfsug main discussion list
>> Subject: Re: [gpfsug-discuss] GPFS on ZFS?
>> 
>> I just came across this presentation on "GPFS with underlying ZFS
>> block devices", by Christopher Hoffman, Los Alamos National Lab,
>> although some of the
>> implementation remains obscure.
>> 
>> http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf
>> 
>> It would be great to have more details, in particular the possibility
>> of straight use of GPFS on ZFS, instead of the 'archive' use case as
>> described on the presentation.
>> 
>> Thanks
>> Jaime
>> 
>> 
>> 
>> 
>> Quoting "Jaime Pinto" :
>> 
>>> Since we can not get GNR outside ESS/GSS appliances, is anybody using
>>> ZFS for software raid on commodity storage?
>>> 
>>> Thanks
>>> Jaime
>>> 
>>> 
>> 
>> 
>> 
>> 
>>  
>>   TELL US ABOUT YOUR SUCCESS STORIES
>>  http://www.scinethpc.ca/testimonials
>>  
>> ---
>> Jaime Pinto
>> SciNet HPC Consortium  - Compute/Calcul Canada
>> www.scinet.utoronto.ca - www.computecanada.org
>> University of Toronto
>> 256 McCaul Street, Room 235
>> Toronto, ON, M5T1W5
>> P: 416-978-2755
>> C: 416-505-1477
>> 
>> 
>> This message was sent using IMP at SciNet Consortium, University of 
Toronto.
>> 
>> ___
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> ___
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> 
> 
> 
> 
> 
> 
> 
> 
>  TELL US ABOUT YOUR SUCCESS STORIES
> http://www.scinethpc.ca/testimonials
> 
> ---
> Jaime Pinto
> SciNet HPC Consortium  - Compute/Calcul Canada
> www.scinet.utoronto.ca - www.computecanada.org
> University of Toronto
> 256 McCaul Street, Room 235
> Toronto, ON, M5T1W5
> P: 416-978-2755
> C: 416-505-1477
> 
> 

Re: [gpfsug-discuss] GPFS on ZFS?

2016-06-13 Thread Allen, Benjamin S.
Jaime,

See 
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_nsddevices.htm.
An example I have for adding /dev/nvme* devices:

* GPFS doesn't know that /dev/nvme* devices are valid block devices, so use a
user exit script to let it know about them

cp /usr/lpp/mmfs/samples/nsddevices.sample /var/mmfs/etc/nsddevices

* Edit /var/mmfs/etc/nsddevices, and add to linux section:

if [[ $osName = Linux ]]
then
  : # Add function to discover disks in the Linux environment.
  for dev in $( cat /proc/partitions | grep nvme | awk '{print $4}' )
  do
    echo $dev generic
  done
fi

* Copy edited nsddevices to the rest of the nodes at the same path
for host in n01 n02 n03 n04; do
  scp /var/mmfs/etc/nsddevices ${host}:/var/mmfs/etc/nsddevices
done
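
(For zvols the same discovery pattern should work; a hedged sketch, assuming
ZFS-on-Linux exposes the zvols as /dev/zd* block devices, which is what the
LANL slides' 'echo "zdX generic"' line refers to:)

  if [[ $osName = Linux ]]
  then
    # let GPFS treat zvol block devices (zd0, zd16, ...) as generic NSDs
    for dev in $( cat /proc/partitions | grep zd | awk '{print $4}' )
    do
      echo $dev generic
    done
  fi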


Ben

> On Jun 13, 2016, at 11:26 AM, Jaime Pinto  wrote:
> 
> Hi Chris
> 
> As I understand, GPFS likes to 'see' the block devices, even on a hardware 
> raid solution such as DDN's.
> 
> How is that accomplished when you use ZFS for software raid?
> On page 4 I see this info, and I'm trying to interpret it:
> 
> General Configuration
> ...
> * zvols
> * nsddevices
>  - echo "zdX generic"
> 
> 
> Thanks
> Jaime
> 
> Quoting "Hoffman, Christopher P" :
> 
>> Hi Jaime,
>> 
>> What in particular would you like explained more? I'd be more than  happy to 
>> discuss things further.
>> 
>> Chris
>> 
>> From: gpfsug-discuss-boun...@spectrumscale.org  
>> [gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto  
>> [pi...@scinet.utoronto.ca]
>> Sent: Monday, June 13, 2016 10:11
>> To: gpfsug main discussion list
>> Subject: Re: [gpfsug-discuss] GPFS on ZFS?
>> 
>> I just came across this presentation on "GPFS with underlying ZFS
>> block devices", by Christopher Hoffman, Los Alamos National Lab,
>> although some of the
>> implementation remains obscure.
>> 
>> http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf
>> 
>> It would be great to have more details, in particular the possibility
>> of straight use of GPFS on ZFS, instead of the 'archive' use case as
>> described on the presentation.
>> 
>> Thanks
>> Jaime
>> 
>> 
>> 
>> 
>> Quoting "Jaime Pinto" :
>> 
>>> Since we can not get GNR outside ESS/GSS appliances, is anybody using
>>> ZFS for software raid on commodity storage?
>>> 
>>> Thanks
>>> Jaime
>>> 
>>> 
>> 
>> 
>> 
>> 
>>  
>>   TELL US ABOUT YOUR SUCCESS STORIES
>>  http://www.scinethpc.ca/testimonials
>>  
>> ---
>> Jaime Pinto
>> SciNet HPC Consortium  - Compute/Calcul Canada
>> www.scinet.utoronto.ca - www.computecanada.org
>> University of Toronto
>> 256 McCaul Street, Room 235
>> Toronto, ON, M5T1W5
>> P: 416-978-2755
>> C: 416-505-1477
>> 
>> 
>> This message was sent using IMP at SciNet Consortium, University of Toronto.
>> 
>> ___
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> ___
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> 
> 
> 
> 
> 
> 
> 
> 
>  TELL US ABOUT YOUR SUCCESS STORIES
> http://www.scinethpc.ca/testimonials
> 
> ---
> Jaime Pinto
> SciNet HPC Consortium  - Compute/Calcul Canada
> www.scinet.utoronto.ca - www.computecanada.org
> University of Toronto
> 256 McCaul Street, Room 235
> Toronto, ON, M5T1W5
> P: 416-978-2755
> C: 416-505-1477
> 
> 
> This message was sent using IMP at SciNet Consortium, University of Toronto.
> 
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS on ZFS?

2016-06-13 Thread Stijn De Weirdt
Hi Chris,

Do you have any form of HA for the ZFS block devices/JBOD (e.g. when an NSD
server reboots/breaks/...)? Or do you rely on replication within GPFS?
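
(For reference, the GPFS-side alternative would be two-way replication across
failure groups; a minimal sketch with illustrative NSD names, servers and
device paths, not taken from the thread:)

  %nsd: nsd=zvol01  device=/dev/zd0   servers=nsd1  usage=dataAndMetadata  failureGroup=1
  %nsd: nsd=zvol02  device=/dev/zd16  servers=nsd2  usage=dataAndMetadata  failureGroup=2

  # create the file system with two data and two metadata replicas
  mmcrfs gpfs1 -F nsd.stanza -m 2 -r 2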


stijn

On 06/13/2016 06:19 PM, Hoffman, Christopher P wrote:
> Hi Jaime,
> 
> What in particular would you like explained more? I'd be more than happy to 
> discuss things further.
> 
> Chris
> 
> From: gpfsug-discuss-boun...@spectrumscale.org 
> [gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto 
> [pi...@scinet.utoronto.ca]
> Sent: Monday, June 13, 2016 10:11
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] GPFS on ZFS?
> 
> I just came across this presentation on "GPFS with underlying ZFS
> block devices", by Christopher Hoffman, Los Alamos National Lab,
> although some of the
> implementation remains obscure.
> 
> http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf
> 
> It would be great to have more details, in particular the possibility
> of straight use of GPFS on ZFS, instead of the 'archive' use case as
> described on the presentation.
> 
> Thanks
> Jaime
> 
> 
> 
> 
> Quoting "Jaime Pinto" :
> 
>> Since we can not get GNR outside ESS/GSS appliances, is anybody using
>> ZFS for software raid on commodity storage?
>>
>> Thanks
>> Jaime
>>
>>
> 
> 
> 
> 
>   
>TELL US ABOUT YOUR SUCCESS STORIES
>   http://www.scinethpc.ca/testimonials
>   
> ---
> Jaime Pinto
> SciNet HPC Consortium  - Compute/Calcul Canada
> www.scinet.utoronto.ca - www.computecanada.org
> University of Toronto
> 256 McCaul Street, Room 235
> Toronto, ON, M5T1W5
> P: 416-978-2755
> C: 416-505-1477
> 
> 
> This message was sent using IMP at SciNet Consortium, University of Toronto.
> 
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> 
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS on ZFS?

2016-06-13 Thread Hoffman, Christopher P
Hi Jaime,

What in particular would you like explained more? I'd be more than happy to 
discuss things further.

Chris

From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Jaime Pinto 
[pi...@scinet.utoronto.ca]
Sent: Monday, June 13, 2016 10:11
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS on ZFS?

I just came across this presentation on "GPFS with underlying ZFS
block devices", by Christopher Hoffman, Los Alamos National Lab,
although some of the
implementation remains obscure.

http://files.gpfsug.org/presentations/2016/anl-june/LANL_GPFS_ZFS.pdf

It would be great to have more details, in particular the possibility
of straight use of GPFS on ZFS, instead of the 'archive' use case as
described on the presentation.

Thanks
Jaime




Quoting "Jaime Pinto" :

> Since we can not get GNR outside ESS/GSS appliances, is anybody using
> ZFS for software raid on commodity storage?
>
> Thanks
> Jaime
>
>




  
   TELL US ABOUT YOUR SUCCESS STORIES
  http://www.scinethpc.ca/testimonials
  
---
Jaime Pinto
SciNet HPC Consortium  - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 416-978-2755
C: 416-505-1477


This message was sent using IMP at SciNet Consortium, University of Toronto.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss