Holger Parplies wrote:

>Hi,
>
>Mark Sopuch wrote on 07.06.2007 at 13:36:55 [Re: [BackupPC-users] Grouping 
>hosts and pool]:
>
>>Jason M. Kusar wrote:
>>
>>>Mark Sopuch wrote:
>>>
>>>>I'd like to group data (let's just say dept data) from certain hosts 
>>>>together (or actually to separate some from others) to a different 
>>>>filesystem and still keep the deduping pool in a common filesystem. 
>>>>[...]
>>>>
>>>Yes, hard links do not work across filesystems.
>>>[...]
>>>
>>[...] my concerns lie mainly with certain types of 
>>hosts (data) encroaching quite wildly into the shared allocated space 
>>under DATA/... thus leaving less room for the incoming data from other 
>>hosts. It's a space budgeting and control thing. [...] I am not sure 
>>how any other quota schemes would provide similar soft- and hard-quota 
>>capability if the hosts are in the same fs and usernames are not 
>>stamped around in DATA/... for those quota systems to differentiate 
>>ownership. Sure I want to back everything up, but I do not want the 
>>bulkiest, least important thing blocking a smaller top-priority backup 
>>from getting space to write to when there's a mad run of new data.
>>
>>Hope I am being clear enough. Thanks again.
>>
>
>I believe you are being clear enough, but I doubt you have a clear enough
>idea of what you actually want :-).
>
That may be true. I obviously treated hard links as quite 'magical', and 
the reality of their implementation and its implications (among other 
things) didn't occur to me.
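
For my own notes, here is a minimal sketch (in Python, with made-up 
paths) of the behaviour I had missed: a hard link is just another 
directory entry for the same inode, so the kernel refuses to create one 
across filesystems and fails with EXDEV:

  import errno
  import os

  src = "/data/pool/somefile"       # on filesystem A (hypothetical path)
  dst = "/mnt/other-fs/samefile"    # on filesystem B (hypothetical path)

  try:
      os.link(src, dst)             # extra directory entry, not a copy
  except OSError as e:
      if e.errno == errno.EXDEV:
          print("hard link refused across filesystems:", e)
      else:
          raise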

>If multiple hosts/users share the same (de-duplicated) data, which one would
>you want it to be accounted to? 
>
For me, it's more about isolation than accounting. I guess I was looking 
for a common filesystem to pool into, plus separate filesystems per 
group of hosts. Each group would have its hosts sandboxed in a 
filesystem with soft and hard quotas, and the file server appliance 
would alert the BackupPC admins when a quota is nearing its limit. If a 
per-group sandbox fills up I can live with that group's backups failing, 
but of course I cannot live with the common pooling filesystem filling 
up, given its shared nature (dependencies). My efforts would always have 
ensured the common pool is large enough to cover some notion of a worst 
case (best dedupe case), leaving sandbox management as my only real 
concern.
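
As a sanity check for that layout, something like the sketch below 
(paths assume a typical TopDir of /var/lib/backuppc, which may well 
differ on other installs) would confirm that the pool and every 
pc/<host> directory sit on the same device, since pooling by hard link 
only works within one filesystem:

  import os

  top_dir = "/var/lib/backuppc"     # hypothetical TopDir; adjust to suit
  pool_dev = os.stat(os.path.join(top_dir, "pool")).st_dev

  pc_root = os.path.join(top_dir, "pc")
  for host in sorted(os.listdir(pc_root)):
      # st_dev is the device ID; a mismatch means a different filesystem
      if os.stat(os.path.join(pc_root, host)).st_dev != pool_dev:
          print(host, "is on a different filesystem; pool links will fail")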

>If you don't expect much duplicate data between hosts (or groups of hosts),
>the best approach is probably really to run independent instances of
>BackupPC.
>
I think I'll take that advice; given that hard links don't span 
filesystems, I suspect I'm railroaded into it anyway. The other thing I 
was planning was a group manager that would edit symlinks to route each 
host to its respective group sandbox; now I don't need to build it, 
which is some consolation.
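
For the archives, the abandoned idea looked roughly like this (the group 
map and sandbox layout are invented for illustration, and the pooling 
problem above is exactly why it cannot work):

  import os

  groups = {"web01": "dept-a", "db01": "dept-b"}  # example host -> group
  pc_root = "/var/lib/backuppc/pc"                # hypothetical TopDir/pc

  for host, group in groups.items():
      target = os.path.join("/sandboxes", group, host)  # quota'd group fs
      link = os.path.join(pc_root, host)
      os.makedirs(target, exist_ok=True)
      if os.path.islink(link):
          os.remove(link)
      os.symlink(target, link)      # route this host into its sandbox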

Thanks for the polished explanation, Holger.

-- Mark
