Nice!

-- 
Stephen Ulmer

Sent from a mobile device; please excuse auto-correct silliness.

> On Oct 12, 2016, at 1:54 PM, Marc A Kaplan <makap...@us.ibm.com> wrote:
> 
> Yes, you can AFM within a single cluster, in fact with just a single node.  I 
> just set this up on my toy system:
> 
> [root@bog-wifi cmvc]# mmlsfileset yy afmlu --afm
> Filesets in file system 'yy':
> Name                     Status    Path         afmTarget
> afmlu                    Linked    /yy/afmlu    gpfs:///xx
> 
> [root@bog-wifi cmvc]# mount
>   ...
> yy on /yy type gpfs (rw,relatime,seclabel)
> xx on /xx type gpfs (rw,relatime,seclabel)
> 
> [root@bog-wifi cmvc]# mmafmctl yy getstate
> Fileset Name    Fileset Target    Cache State    Gateway Node    Queue Length    Queue numExec
> ------------    --------------    -----------    ------------    ------------    -------------
> afmlu           gpfs:///xx        Active          bog-wifi        0              7
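> 
> (For reference, a sketch of how such a fileset could be created; the 
> fileset name suggests local-updates (lu) mode, and the exact AFM attribute 
> spellings may vary by release:)
> 
> # create an independent fileset in 'yy' that caches the local FS 'xx'
> mmcrfileset yy afmlu --inode-space new -p afmMode=lu,afmTarget=gpfs:///xx
> # link it into the namespace
> mmlinkfileset yy afmlu -J /yy/afmlu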
> 
> So, you may add nodes and disks to an existing cluster, upgrade your 
> software, define a new FS,
> migrate data from the old FS to the new FS,
> and then delete the nodes and disks that are no longer needed...
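> 
> (Very roughly, that command flow might look like this; a sketch with 
> hypothetical node, disk, and FS names, not a complete procedure:)
> 
> mmaddnode -N newnode1,newnode2     # bring the new hardware into the cluster
> mmcrnsd -F newdisks.stanza         # define NSDs on the new disks
> mmcrfs newfs -F newdisks.stanza    # create the new file system
> # ... migrate data, e.g. via an AFM fileset as above ...
> mmdelfs oldfs                      # remove the old file system once empty
> mmdelnsd "olddisk1;olddisk2"       # free the old NSDs
> mmdelnode -N oldnode1,oldnode2     # retire nodes no longer needed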
> 
> 
> 
> From:        Stephen Ulmer <ul...@ulmer.org>
> To:        gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Date:        10/11/2016 09:30 PM
> Subject:        Re: [gpfsug-discuss] Hardware refresh
> Sent by:        gpfsug-discuss-boun...@spectrumscale.org
> 
> 
> 
> I think that the OP was asking why not expand the existing cluster with the 
> new hardware, and just make a new FS?
> 
> I’ve not tried to make a cluster talk AFM to itself yet. If that’s 
> impossible, then there’s one good reason to make a new cluster (to use AFM 
> for migration).
> 
> Liberty,
> 
> -- 
> Stephen
> 
> 
> 
> On Oct 11, 2016, at 8:40 PM, Mark.Bush@siriuscom.com wrote:
> 
> The only compelling reason for a new cluster would be that the old hardware 
> is EOL or you no longer want to pay maintenance on it.
>  
> From: <gpfsug-discuss-boun...@spectrumscale.org> on behalf of Marc A Kaplan 
> <makap...@us.ibm.com>
> Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Date: Tuesday, October 11, 2016 at 2:58 PM
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Subject: Re: [gpfsug-discuss] Hardware refresh
>  
> New FS? Yes, there are some good reasons.
> New cluster? I did not see a compelling argument either way.
> 
> 
> 
> From:        "mark.b...@siriuscom.com" <mark.b...@siriuscom.com>
> To:        gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Date:        10/11/2016 03:34 PM
> Subject:        Re: [gpfsug-discuss] Hardware refresh
> Sent by:        gpfsug-discuss-boun...@spectrumscale.org
> 
> Ok.  I think I am hearing that a new cluster with a new FS, copying data 
> from the old cluster to the new, is the best way forward.  Thanks everyone 
> for your input.
> 
> From: <gpfsug-discuss-boun...@spectrumscale.org> on behalf of Yuri L Volobuev 
> <volob...@us.ibm.com>
> Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Date: Tuesday, October 11, 2016 at 12:22 PM
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Subject: Re: [gpfsug-discuss] Hardware refresh
>  
> This depends on the committed cluster version level (minReleaseLevel) and 
> file system format. Since NSDv2 is an on-disk format change, older code 
> wouldn't be able to understand what it is, and thus if there's a possibility 
> of a down-level node looking at the NSD, the NSDv1 format is going to be used. 
> The code does NSDv1<->NSDv2 conversions under the covers as needed when 
> adding an empty NSD to a file system.
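> 
> (A quick way to check both factors; a sketch, with 'yy' standing in for 
> your file system name:)
> 
> mmlsconfig minReleaseLevel    # committed cluster version level
> mmlsfs yy -V                  # current file system format version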
> 
> I'd strongly recommend getting a fresh start by formatting a new file system. 
> Many things have changed over the course of the last few years. In 
> particular, having a 4K-aligned file system can be a pretty big deal, 
> depending on what hardware one is going to deploy in the future, and this is 
> something that can't be bolted onto an existing file system. Having 4K inodes 
> is very handy for many reasons. New directory format and NSD format changes 
> are attractive, too. And disks generally tend to get larger with time, and at 
> some point you may want to add a disk to an existing storage pool that's 
> larger than the existing allocation map format allows. Obviously, it's more 
> hassle to migrate data to a new file system, as opposed to extending an 
> existing one. In a perfect world, GPFS would offer a conversion tool that 
> seamlessly and robustly converts old file systems, making them as good as 
> new, but in the real world such a tool doesn't exist. Getting a clean slate 
> by formatting a new file system every few years is a good long-term 
> investment of time, although it comes front-loaded with extra work.
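> 
> (For instance, a new 4K-aligned file system might be created along these 
> lines; a sketch only with hypothetical names, since flags and defaults 
> vary by release:)
> 
> mmcrnsd -F nsd.stanza                # NSDv2 format, given a new-enough cluster
> mmcrfs newfs -F nsd.stanza -i 4096   # 4K inodes, settable only at creation time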
> 
> yuri
> 
> 
> From: Aaron Knister <aaron.s.knis...@nasa.gov>
> To: <gpfsug-discuss@spectrumscale.org>, 
> Date: 10/10/2016 04:45 PM
> Subject: Re: [gpfsug-discuss] Hardware refresh
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> 
> Can one format NSDv2 NSDs and put them in a filesystem with NSDv1 NSDs?
> 
> -Aaron
> 
> On 10/10/16 7:40 PM, Luis Bolinches wrote:
> > Hi
> >
> > Creating a new FS sounds like the best way to go, NSDv2 being a very good
> > reason to do so.
> >
> > AFM for migrations is quite good; the latest versions allow using the NSD
> > protocol for mounts as well. Olaf did a great job explaining this
> > scenario in chapter 6 of the Redbook:
> >
> > http://www.redbooks.ibm.com/abstracts/sg248254.html?Open
> >
> > --
> > Cheers
> >
> > On 10 Oct 2016, at 23.05, Buterbaugh, Kevin L
> > <kevin.buterba...@vanderbilt.edu> wrote:
> >
> >> Hi Mark,
> >>
> >> The last time we did something like this was 2010 (we’re doing rolling
> >> refreshes now), so there are probably lots of better ways to do this
> >> than what we did, but we:
> >>
> >> 1) set up the new hardware
> >> 2) created new filesystems (so that we could make adjustments we
> >> wanted to make that can only be made at FS creation time)
> >> 3) used rsync to make a 1st pass copy of everything (see the sketch below)
> >> 4) coordinated a time with users / groups to do a 2nd rsync when they
> >> weren’t active
> >> 5) used symbolic links during the transition (i.e. rm -rvf
> >> /gpfs0/home/joeuser; ln -s /gpfs2/home/joeuser /gpfs0/home/joeuser)
> >> 6) once everybody was migrated, updated the symlinks (i.e. /home
> >> became a symlink to /gpfs2/home)
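> >>
> >> (Steps 3-5 might look roughly like this; a sketch using the example
> >> paths above, rsync flags to taste:)
> >>
> >> rsync -aAHX --numeric-ids /gpfs0/home/ /gpfs2/home/          # 1st pass
> >> # quiesce users, then a 2nd pass to pick up the deltas:
> >> rsync -aAHX --numeric-ids --delete /gpfs0/home/ /gpfs2/home/
> >> rm -rvf /gpfs0/home/joeuser
> >> ln -s /gpfs2/home/joeuser /gpfs0/home/joeuser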
> >>
> >> HTHAL…
> >>
> >> Kevin
> >>
> >>> On Oct 10, 2016, at 2:56 PM, mark.b...@siriuscom.com wrote:
> >>>
> >>> Have a very old cluster built on IBM X3650’s and DS3500.  Need to
> >>> refresh hardware.  Any lessons learned in this process?  Is it
> >>> easiest to just build new cluster and then use AFM?  Add to existing
> >>> cluster then decommission nodes?  What is the recommended process for
> >>> this?
> >>>
> >>>
> >>> Mark
> >>>
> >>
> >> —
> >> Kevin Buterbaugh - Senior System Administrator
> >> Vanderbilt University - Advanced Computing Center for Research and
> >> Education
> >> kevin.buterba...@vanderbilt.edu - (615)875-9633
> >>
> >>
> >>
> >
> 
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
