Oh, yes....

My thoughts assume that this will be done while users are
not in play.

If that's not true, there are a couple of things that will need to be adjusted.

- Still consider doing a first run with the /CREATE switch.
- You can't shut down the Server service, but you probably still
  shouldn't need the /ZB switch. Instead, do a second full run (not the
  initial /CREATE run) after hours to pick up any changed data.
- If a second full run is required, then the /MIR switch is useful.
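
For the live-users case, the passes above might look roughly like this.
This is only a sketch: the UNC paths and log file names are placeholders
I've made up, not anything from your environment.

```bat
REM Pass 1 (any time): /CREATE lays down the directory tree and
REM zero-length placeholder files, so the real copy does less metadata work.
robocopy \\OldFS\F$ \\NewFS\D$ /E /CREATE /COPYALL /R:1 /W:1 /NP /NFL /NDL /LOG+:C:\Logs\pass1.log

REM Pass 2 (can run during the day): copy the actual data. Open files
REM fail fast (/R:1 /W:1) and get picked up in the final pass.
robocopy \\OldFS\F$ \\NewFS\D$ /E /COPYALL /R:1 /W:1 /NP /NFL /NDL /LOG+:C:\Logs\pass2.log

REM Pass 3 (after hours, users logged off): /MIR re-copies anything that
REM changed and purges files on the destination that no longer exist on
REM the source, leaving an exact mirror.
robocopy \\OldFS\F$ \\NewFS\D$ /MIR /COPYALL /R:1 /W:1 /NP /NFL /NDL /LOG+:C:\Logs\pass3.log
```

Note that /MIR implies /E, and it *deletes* extra files on the
destination, which is exactly why it's only wanted on the final,
after-hours pass.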

Kurt

On Mon, Jan 29, 2018 at 11:27 AM, Michael Leone <[email protected]> wrote:
> I'd like to impose once more for some advice and opinions. I have a Win 2008
> R2 file server; I need to migrate everything (shares and user home folders)
> to a Win 2012 R2 Storage Server, and then retire the old server. Everything
> is on 1 drive, with 3 main folders (Shares, Users, Scans), total size in the
> neighborhood of 2TB. Both have 4 teamed 1G NICs, so a total bandwidth of 4G.
>
> I'm thinking of using robocopy. I would make a full copy over the weekend:
>
> Source=OldFS\F$
> Destination=NewFs\d$
>
> RoboCopy <Source> <Destination> /S /E /ZB /COPYALL /R:1 /W:1 /V /NP /NFL
> /NDL /LOG+:<LogFile>
>
> That should get everything, NTFS security and all sub-folders. I thought
> about the /MIR option, but I've never used it, and so am just a touch leery
> (perhaps illogically).
>
> The end goal is to:
> copy all the files and shares to the new FS;
> re-name and re-IP the old FS;
> power off the old FS;
> re-name and re-IP the new FS to the old name.
>
>  (this way I can power up the old FS, just in case I need it for something
> I've missed)
>
> That *should* make things transparent to the end users.
>
> (ordinarily, I would think about doing a restore from my backup program
> Networker. But this is a remote site, and I believe that doing a local
> robocopy will probably be faster than trying to restore 2TB of what is
> probably a lot of small user files and folders across a 1G link)
>
> What have I missed? What would make it better?
>