Re: [gpfsug-discuss] IO sizes

2022-02-23 Thread Andrew Beattie
Alex, Metadata will be 4KiB. Depending on the filesystem version you will also have subblocks to consider: V4 filesystems have 1/32 subblocks, V5 filesystems have 1/1024 subblocks (assuming metadata and data block size is the same). My first question would be: “Are you sure that Linux OS is
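The subblock arithmetic mentioned above can be sketched as follows. This is a simplified model based only on the ratios quoted in the message (a fixed 1/32 ratio for V4, and the 1/1024 ratio for V5); in practice V5 subblock sizes vary with the filesystem's smallest block size, so treat these numbers as illustrative, not an authoritative table.

```python
def subblock_size(block_size_bytes: int, version: int) -> int:
    """Return an illustrative subblock size in bytes for a given block size,
    using the simple per-version ratios quoted in the thread."""
    if version == 4:
        # V4: every block is divided into 32 subblocks
        return block_size_bytes // 32
    elif version == 5:
        # V5: up to 1024 subblocks per block (actual value depends on
        # the filesystem's smallest block size)
        return block_size_bytes // 1024
    raise ValueError("unsupported filesystem version")

MIB = 1024 * 1024
# A 4 MiB block on a V4 filesystem gives 128 KiB subblocks...
print(subblock_size(4 * MIB, 4))  # 131072
# ...while V5 at the 1/1024 ratio gives 4 KiB subblocks.
print(subblock_size(4 * MIB, 5))  # 4096
```

The practical upshot is that small files waste far less space on a V5 filesystem, since the minimum allocation unit shrinks dramatically for the same block size.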

Re: [gpfsug-discuss] snapshots causing filesystem quiesce

2022-02-02 Thread Andrew Beattie
much is allocated to the OS / Spectrum Scale Pagepool Regards Andrew Beattie Technical Specialist - Storage for Big Data & AI IBM Technology Group IBM Australia & New Zealand P. +61 421 337 927 E. abeat...@au1.ibm.com > On 2 Feb 2022, at 19:14, Talamo Ivano Giuseppe (PSI

Re: [gpfsug-discuss] WAS: alternative path; Now: RDMA

2021-12-12 Thread Andrew Beattie
ation of RDMA add enough value... possibly not.  Andrew Beattie Technical Sales - Storage for Big Data and AI IBM Systems - Storage IBM Australia & New Zealand Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: "Jonathan Buzzard" S

Re: [gpfsug-discuss] Serial number of [EG]SS nodes

2021-09-02 Thread Andrew Beattie
I’m not sure for the Lenovo, but the GUI hardware details tab has MTM and serial information for all the ESS nodes. Regards Andrew > On 3 Sep 2021, at 01:39, Hannappel, Juergen wrote: > > Hi, > on an ESS node with power cpu I can get the serial number from > /proc/device-tree/system-id >

Re: [gpfsug-discuss] Long IO waiters and IBM Storwize V5030

2021-05-28 Thread Andrew Beattie
the 5030. You could ask your local IBM Presales team to perform a StorM disk model of the expected performance using your current configuration to show you what your performance should look like. Regards Andrew Beattie Technical Sales - Storage for Data and AI IBM Australia and New Zealand > On 29

Re: [gpfsug-discuss] Spectrum Scale & S3

2021-05-21 Thread Andrew Beattie
David, You can use the CES protocol nodes to present a Swift object interface or S3 object interface for applications to access data from the filesystem / write data into the filesystem over the object protocol. Check the OpenStack Swift-S3 compatibility matrix to make sure that the

Re: [gpfsug-discuss] GPFS de duplication

2021-05-20 Thread Andrew Beattie
changes to the data, which is why you need a copy? I know of customers that do this regularly with block storage, such as the IBM FlashSystem product family in conjunction with IBM Spectrum Copy Data Management. But I don’t believe CDM supports file-based storage. Regards, Andrew Beattie

Re: [gpfsug-discuss] New RFE! Add option to gpfs.snap that takes case number and uploads snap data to IBM automatically

2021-04-09 Thread Andrew Beattie
Ryan, There are two parts to call home: the Spectrum Scale software call home and the IBM ESS hardware call home. The software call home should work regardless of hardware platform. However, it will not pull the detail of the hardware; you will need to upload hardware logs separately. Sent from

Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-03-01 Thread Andrew Beattie
Jonathan, You need to create vdisk sets, which will create multiple vdisks; you can then assign vdisk sets to your filesystem (assigning multiple vdisks at a time). Things to watch: free space calculations are more complex, as it’s building multiple vdisks under the covers using multiple raid

Re: [gpfsug-discuss] Contents of gpfsug-discuss Digest, Vol 107, Issue 13

2020-12-10 Thread Andrew Beattie
via SMB or NFS, all research users across the university who want to access one or more of their storage allocations do so via the SMB / NFS mount points from this specific storage cluster. Regards, Andrew Beattie File & Object Storage - Technical Lead IBM Australia & New Zealand Sent

Re: [gpfsug-discuss] AFM experiences?

2020-11-23 Thread Andrew Beattie
Rob, Talk to Jake Carroll from the University of Queensland; he has done a number of presentations at Scale User Groups on UQ’s MeDiCI data fabric, which is based on Spectrum Scale and makes very aggressive use of AFM. Their use of AFM is not only on campus, but also to remote storage clusters between

Re: [gpfsug-discuss] Services on DSS/ESS nodes

2020-10-03 Thread Andrew Beattie
issues onto yourself rather than it being the vendor’s problem. Sent from my iPhone > On 3 Oct 2020, at 20:06, Jonathan Buzzard wrote: > > On 02/10/2020 23:19, Andrew Beattie wrote: >> Jonathan, >> I suggest you get a formal statement from Lenovo as the DSS-G Platform &

Re: [gpfsug-discuss] Services on DSS/ESS nodes

2020-10-02 Thread Andrew Beattie
Jonathan, I suggest you get a formal statement from Lenovo, as the DSS-G platform is no longer an IBM platform. But for ESS-based platforms the answer would be: it is not supported to run anything on the IO servers other than GNR and the relevant Scale management services, due to the fact that

Re: [gpfsug-discuss] tsgskkm stuck---> what about AMD epyc support in GPFS?

2020-09-03 Thread Andrew Beattie
AMD. Sent from my iPhone > On 3 Sep 2020, at 17:44, Giovanni Bracco wrote: > > OK from client side, but I would like to know if the same is also for > NSD servers with AMD EPYC, do they operate with good performance > compared to Intel CPUs? > > Giovanni > >> On

Re: [gpfsug-discuss] tsgskkm stuck---> what about AMD epyc support in GPFS?

2020-09-02 Thread Andrew Beattie
)       Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: Giovanni Bracco Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list , Frederick Stock

Re: [gpfsug-discuss] Gpfs Standar and JBOD enclosures

2020-03-09 Thread Andrew Beattie
Iban, Spectrum Scale Native RAID will not be an option for your scenario. Scale Erasure Code Edition does not support JBOD enclosures at this point, and Scale Native RAID is only certified for specific hardware (IBM ESS or Lenovo GSS). This means you either need to use hardware raid

Re: [gpfsug-discuss] AFM Alternative?

2020-02-26 Thread Andrew Beattie
Why don’t you look at packaging your small files into larger files, which will be handled more effectively? There is no simple way to replicate / move billions of small files, but surely you can build your workflow to package the files up into a zip or tar format, which will simplify not only
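The "package small files into larger archives" approach suggested above can be sketched with Python's standard tarfile module. The batching function, file names, and batch contents below are hypothetical, purely to illustrate the idea of bundling many small files into one transferable object.

```python
import os
import tarfile
import tempfile
from pathlib import Path


def package_batch(files, archive_path):
    """Bundle a batch of small files into a single tar archive,
    so the transfer tool moves one large object instead of many tiny ones."""
    with tarfile.open(archive_path, "w") as tar:
        for f in files:
            tar.add(f, arcname=Path(f).name)


# Example: create a few small files in a temp directory and bundle them.
tmp = tempfile.mkdtemp()
small_files = []
for i in range(3):
    p = os.path.join(tmp, f"file{i}.txt")
    with open(p, "w") as fh:
        fh.write("x" * 10)
    small_files.append(p)

out = os.path.join(tmp, "batch.tar")
package_batch(small_files, out)

# The archive now contains all three members.
with tarfile.open(out) as tar:
    print(len(tar.getnames()))  # 3
```

In a real workflow you would batch by directory or by a size threshold, and record a manifest per archive so individual files remain addressable after the move.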

Re: [gpfsug-discuss] Spectrum Scale Ganesha NFS multi threaded AFM?

2020-02-21 Thread Andrew Beattie
Andi, You may want to reach out to Jake Carrol at the University of Queensland. When UQ first started exploring with AFM and global AFM transfers, they did extensive testing around tuning for the NFS stack. From memory they got to a point where they could pretty much saturate a 10Gbit link,

Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites.

2019-12-18 Thread Andrew Beattie
Jonathan, Not entirely true. Spectrum Scale Standard (Data Access Edition) does include AFM; it does not include AFM-DR. Spectrum Scale Advanced (Data Management Edition) is required to support AFM-DR. Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems

Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites.

2019-12-18 Thread Andrew Beattie
.     Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: "Andi Nør Christiansen" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list

Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites.

2019-12-18 Thread Andrew Beattie
://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Active%20File%20Management%20(AFM)   Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com

Re: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64?

2019-12-13 Thread Andrew Beattie
across all the drives in the RG; this guarantees the best performance for each vdisk. Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: "Olaf Weiser" Sent by: gpfs

Re: [gpfsug-discuss] GPFS and POWER9

2019-09-19 Thread Andrew Beattie
to do almost anything with that Intel adapter. Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: Simon Thompson Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo

Re: [gpfsug-discuss] GPFS and POWER9

2019-09-19 Thread Andrew Beattie
Simon, Are you using Intel 10Gb network adapters with RH 7.6 by any chance? Regards Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: Simon Thompson Sent by: gpfsug

Re: [gpfsug-discuss] Backup question

2019-09-01 Thread Andrew Beattie
r guidance to your question   Regards,   Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: Cc:Subject: [EXTERNAL] [gpfsug-discu

Re: [gpfsug-discuss] Any guidelines for choosing vdisk size?

2019-07-01 Thread Andrew Beattie
stem based on the number of vdisks per building block i've added to that filesystem and provide a Min performance / Max performance number for a filesystem.   Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm

Re: [gpfsug-discuss] Any guidelines for choosing vdisk size?

2019-06-30 Thread Andrew Beattie
into an existing filesystem? Are you creating a new filesystem? Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: Ryan Novosielski Sent by: gpfsug-discuss-boun

Re: [gpfsug-discuss] Gateway role on a NSD server

2019-06-01 Thread Andrew Beattie
be a blanket -- NOT supported) but for other configurations it’s a case of “at your own risk”. Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: "Weil, Matthew&

Re: [gpfsug-discuss] Identifiable groups of disks?

2019-05-14 Thread Andrew Beattie
RAID 4+2P, RAID 4+3P, RAID 8+2P and RAID 8+3P options, as well as 2-copy and 3-copy options -- you would need to reach out to your local IBM team to validate whether Scale ECE would be supported with JBOD shelves presented to multiple servers. Regards Andrew Beattie File and Object Storage Technical
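Purely as an illustration of the relative tradeoffs among the layouts listed above, the usable-capacity fraction of each can be computed as data strips divided by total strips. Real GNR/ECE deployments also reserve spare and metadata capacity, so treat these as idealized upper bounds rather than sizing figures.

```python
# Usable-capacity fraction (data strips / total strips) for each
# erasure-code and replication layout mentioned in the thread.
layouts = {
    "4+2P": 4 / 6,    # 4 data strips + 2 parity
    "4+3P": 4 / 7,    # 4 data strips + 3 parity
    "8+2P": 8 / 10,   # 8 data strips + 2 parity
    "8+3P": 8 / 11,   # 8 data strips + 3 parity
    "2-copy": 1 / 2,  # full replication, 2 copies
    "3-copy": 1 / 3,  # full replication, 3 copies
}

# Print layouts from most to least space-efficient.
for name, eff in sorted(layouts.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {eff:.0%} usable")
```

The wider stripes (8+2P, 8+3P) buy efficiency at the cost of needing more drives and servers to spread the strips across, which is part of why ECE has minimum node-count requirements.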

Re: [gpfsug-discuss] Exporting remote GPFS mounts on a non-ces SMB share

2019-03-07 Thread Andrew Beattie
/ mmNametoUID, to manage the ACL and permissions, they then move the data from the AFM filesystem to the HPC scratch filesystem for processing by the HPC (different filesystems within the same cluster)     Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage

Re: [gpfsug-discuss] Exporting remote GPFS mounts on a non-ces SMB share

2019-03-07 Thread Andrew Beattie
cluster. Why are you trying to publish a non-HA RHEL SMB share instead of using the HA CES protocols? Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: vall

Re: [gpfsug-discuss] Exporting remote GPFS mounts on a non-ces SMB share

2019-03-07 Thread Andrew Beattie
/bl1adv_configprotocolsonremotefs.htm     Hope this helps,   Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: vall...@cbio.mskcc.orgSent by: gpfsug-discuss-boun

Re: [gpfsug-discuss] Migrating billions of files?

2019-03-06 Thread Andrew Beattie
Yes, and it’s licensed based on the size of your network pipe. Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: "Frederick Stock" Sent by: gpfsug-di

Re: [gpfsug-discuss] Unbalanced pdisk free space

2019-01-30 Thread Andrew Beattie
Hi there, Given you have a Lenovo DSS, I suggest you open a support call with Lenovo initially, as I believe all OEMs are tasked with providing their own support -- they have the ability to open a case directly with IBM if they are unable to provide the relevant answers. Regards Andrew

Re: [gpfsug-discuss] Get list of filesets _without_runningmmlsfileset?

2019-01-09 Thread Andrew Beattie
Kevin, That sounds like a useful script; would you care to share? Thanks Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: "Buterbaugh, Kevin L" Sent by: gpfsug-discuss-boun...@spectrums

Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Andrew Beattie
to the base cluster and by group create exports of different protocols, but it’s not available today. Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: Andi Rhod Christiansen Sent by: gpfsug-discuss-boun

Re: [gpfsug-discuss] Anybody running GPFS over iSCSI?

2018-12-17 Thread Andrew Beattie
ood chance of getting a pair of switches to connect 2 NSD servers to a single FC storage array for far less than that.     Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: Jan-Frode Myklebust Sent by: gpfs

Re: [gpfsug-discuss] Test?

2018-12-07 Thread Andrew Beattie
this one made it Aaron Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: "Knister, Aaron S. (GSFC-606.2)[InuTeq, LLC]" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discu

Re: [gpfsug-discuss] gpfs mount point not visible in snmp hrStorageTable

2018-11-07 Thread Andrew Beattie
you’re looking for. Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: "Henrik Cednert (Filmlance)" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list Cc:Subjec

Re: [gpfsug-discuss] NSD network checksums (nsdCksumTraditional)

2018-10-29 Thread Andrew Beattie
depending on the type of NSD Servers doing the work.       Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: Stephen Ulmer Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list

Re: [gpfsug-discuss] [Newsletter] Re: Problem with mmlscluster and callback scripts

2018-09-10 Thread Andrew Beattie
, to it, it also becomes useful as a GUI server / management server? Regards, Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: Matthias Knigge Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main

Re: [gpfsug-discuss] CES file authentication - bind account deleted?

2018-09-04 Thread Andrew Beattie
and their account is deleted, you would almost certainly have issues with the Scale cluster trying to validate users’ permissions, with Scale getting an error from AD when the credentials that it uses are no longer valid. Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-01 Thread Andrew Beattie
Stephen, Sorry, you’re right; I had to go back and look up what we were doing for metadata. We ended up with a 1MB block for metadata and 8MB for data, and a 32k subblock based on the 1MB metadata block size, effectively a 256k subblock for the data. Andrew Beattie Software Defined Storage

Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?

2018-08-01 Thread Andrew Beattie
was the default for so many years Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: "Marc A Kaplan" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list Cc:Subject: R

Re: [gpfsug-discuss] Compression details

2018-07-25 Thread Andrew Beattie
will apply on the groups that have been updated, and the next time the compression policy is run they will be recompressed. In terms of performance overhead, the answer will always be: it depends on your specific data environment. Regards, Andrew Beattie Software Defined Storage  - IT Specialist

Re: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks

2018-05-15 Thread Andrew Beattie
development team involved in your design and see what we can support for your requirements.     Regards, Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: vall...@cbio.mskcc.orgSent by: gpfsug-discuss-boun

Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7

2018-05-15 Thread Andrew Beattie
This thread is mildly amusing, given we regularly get customers asking why we are dropping support for versions of Linux that they "just can't move off". Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Origin

Re: [gpfsug-discuss] Snapshots for backups

2018-05-09 Thread Andrew Beattie
the benefits of object storage erasure coding might be worth looking at, although for a 1- or 2-site scenario the overheads are pretty much the same whether you use some variant of distributed raid or erasure coding. Andrew Beattie Software Defined Storage  - IT Specialist Phone

Re: [gpfsug-discuss] Snapshots for backups

2018-05-08 Thread Andrew Beattie
infrastructure in a DR location, have you considered looking at an object store, and using your existing Protect environment to HSM out to a disk-based object storage pool distributed over disparate geographic locations? (obviously capacity dependent)   Andrew

Re: [gpfsug-discuss] more than one mlx connectx-4 adapter in samehost

2017-12-20 Thread Andrew Beattie
configurations       Regards Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: "J. Eric Wonderley" <eric.wonder...@vt.edu>Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discuss

Re: [gpfsug-discuss] 5.0 features?

2017-11-29 Thread Andrew Beattie
advantage of this feature   Regards, Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: "Marc A Kaplan" <makap...@us.ibm.com>Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main

Re: [gpfsug-discuss] Online data migration tool

2017-11-26 Thread Andrew Beattie
will have just migrated onto new hardware) less than 6 months after doing the migration to new hardware, and that migration method can't be “add more hardware”. Regards Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message

Re: [gpfsug-discuss] GPFS for aarch64?

2017-06-08 Thread Andrew Beattie
Philipp, Not to my knowledge. AIX, Linux on x86 (RHEL / SUSE / Ubuntu), Linux on Power (RHEL / SUSE) and Windows are the current supported platforms. Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From

Re: [gpfsug-discuss] Spectrum Scale - Spectrum Protect - SpaceManagement (GPFS HSM)

2017-06-02 Thread Andrew Beattie
Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: "Jaime Pinto" <pi...@scinet.utoronto.ca>To: "gpfsug main discussion list" <gpfsug-discuss@spectrumscale.org>, "Andrew Beat

[gpfsug-discuss] Spectrum Scale - Spectrum Protect - Space Management (GPFS HSM)

2017-06-01 Thread Andrew Beattie
for the users to schedule their data set recalls?   Regards,   Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com ___ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman

Re: [gpfsug-discuss] AFM

2017-05-02 Thread Andrew Beattie
we are trying to tune the links to improve bandwidth and reduce latency rather than restrict the bandwidth.   Regards, Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: Matt Weil <mw...@wustl.edu&g

Re: [gpfsug-discuss] forcibly panic stripegroup everywhere?

2017-01-22 Thread Andrew Beattie
Out of curiosity -- why would you want to? Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E-mail: abeat...@au1.ibm.com     - Original message -From: Aaron Knister <aaron.s.knis...@nasa.gov>Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsu

Re: [gpfsug-discuss] CES log files

2017-01-11 Thread Andrew Beattie
mmhealth might be a good place to start. CES should probably throw a message along the lines of the following when mmhealth shows something is wrong with the AD server: ...CES DEGRADED ads_down... Andrew Beattie Software Defined Storage  - IT Specialist Phone

Re: [gpfsug-discuss] CES nodes mount nfsv3 not responding

2017-01-03 Thread Andrew Beattie
pport have indicated that they think there is a bug in the SELinux code, which is causing this issue, and have suggested that we disable SELinux and try again. My client’s environment is currently deployed on CentOS 7. Andrew Beattie Software Defined Storage  - IT Specialist Phone: 614-2133-7927 E

Re: [gpfsug-discuss] Strategies - servers with local SAS disks

2016-11-30 Thread Andrew Beattie
of integrated raid controllers then any form of distributed raid is probably the best scenario, RAID 6 obviously. How many nodes are you planning on building? The more nodes, the more value FPO is likely to bring, as you can be more specific in how the data is written to the nodes. Andrew Beattie

[gpfsug-discuss] Is anyone performing any kind of Charge back / Show back on Scale today and how do you collect the data

2016-11-17 Thread Andrew Beattie
), to provide accountability for the number of virtual machines they are providing, and would like to extend this capability to their file storage offering, which today is based on basic virtual Windows file servers. Is anyone doing something similar today? And if so, at what granularity? Andrew Beattie

Re: [gpfsug-discuss] AFM Licensing

2016-11-10 Thread Andrew Beattie
I think you will find that AFM in any flavor is a function of the server license, not a client license. I’ve always found this to be a pretty good guide, although you now need to add Transparent Cloud Tiering into the bottom column. Andrew Beattie Software Defined Storage