Good to know you’ve done it before.

It takes a looooong time for the government to buy anything. A month is actually
an awfully optimistic estimate, now that I think about it!

But I’m told this temporary node has 20+ 2TB disks, and yes, I’ll just do one
full backup and then incrementals, probably of the user-data areas only, not the
OSes on all 9 or more clients. I’ll just keep adding more spool areas as the
first one fills up. My current backups of these clients (different software;
now expired!) take 3-4 LTO4 tapes for a level 0 backup.
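
For the record, what I’m picturing is nothing fancier than a stack of
holdingdisk definitions in amanda.conf, roughly like this (directory names and
sizes here are placeholders, not my real layout):

  # one holdingdisk block per spool area; add more as the earlier ones fill up
  holdingdisk hd1 {
      directory "/amholding/hd1"   # placeholder path
      use -100 Mb                  # use everything but the last 100 MB
      chunksize 1 Gb               # split dump files into 1 GB chunks
  }
  holdingdisk hd2 {
      directory "/amholding/hd2"
      use -100 Mb
      chunksize 1 Gb
  }

  # with no tape to write to, let full dumps land on the holding disk too;
  # the default reserve keeps all of it for incrementals in degraded mode
  reserve 0

Then once the new tape host finally shows up, amflush (or autoflush at the next
amdump) should push everything that accumulated out to real tape. The
virtual-tape alternative I asked about below would be a chg-disk changer pointed
at those same disks, which is exactly the extra setup I’m hoping to skip.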

Deb Baddorf


> On Nov 2, 2016, at 4:10 PM, Chris Hoogendyk <[email protected]> wrote:
> 
> I did exactly that, and then flushed it a couple of times when I got the tape
> drive replaced. However, I was only running that way for a week or so, not a
> month.
> 
> For my setup, I have an LTO6 tape library, and a Supermicro server running
> Ubuntu 14.04. I have two 1TB enterprise SSDs for holding disk. During the days
> that my tape drive was out, I set up extra disk space that was available as a
> few more holding disks, totaling something like 6TB more. For a couple of
> days, I was running incrementals only. That is backing up 4 servers with a
> total of something like 20TB of actual data (as opposed to capacity), divided
> up into just over 100 DLEs.
> 
> 
> On 11/2/16 4:31 PM, Debra S Baddorf wrote:
>> For a temporary setup,  until I get a new computer to attach to my tape 
>> drive ….
>> (perhaps a month)
>> is there any reason I can’t just have a LOT of spool space(s),   and let data
>> accumulate there?
>> 
>> The alternative is creating virtual-tapes on those same disks. But that seems
>> like more work than just leaving the data in the spool area. Is this a bad
>> idea?
>> 
>> Deb Baddorf
>> Fermilab
>> 
> 
> -- 
> ---------------
> 
> Chris Hoogendyk
> 
> -
>   O__  ---- Systems Administrator
>  c/ /'_ --- Biology & Geosciences Departments
> (*) \(*) -- 315 Morrill Science Center
> ~~~~~~~~~~ - University of Massachusetts, Amherst
> 
> <[email protected]>
> 
> ---------------
> 
> Erdös 4
> 

