I never said anything about "philosophy." Those are your words and your
"philosophical" arguments.
Thank you for your input, but I've already considered your other
suggestions.
As a reminder to other gentle readers, and to avoid further "philosophical"
tirades about my "foolish" idea, the question I originally posed was: "Has
anyone gone down this path before me? If so, did you succeed or fail? I'd
like to compare notes either way." If you haven't, then please don't feel
compelled to send an abrasive reply.
On Wed, Mar 13, 2013 at 10:11 AM, <backu...@kosowsky.org> wrote:
> Stephen Joyce wrote at about 07:52:11 -0400 on Wednesday, March 13, 2013:
> > I'm in a situation where I find myself desiring per-pc pools.[1]
> >
> > It's been a while since I've dipped my toes into BackupPC's code, but
> > I've done a bit of preliminary research into this, and think I've
> > identified the places where changes would need to be made to allow the
> > pool or cpool to be overridden for individual PCs -- nominally to be
> > located at $TopDir/pc/$host/cpool, for example.
> >
> > The idea here would be to allow different PCs' pools to reside on
> > different physical filesystems for political reasons. I don't want to
> > disable pooling entirely, since it still has features that would be
> > beneficial even when pooling only one (or a few) PCs rather than an
> > entire organization.
> >
> > I haven't read this list regularly in a few years, so I thought I'd
> > ask: has anyone gone down this path before me? If so, did you succeed
> > or fail? I'd like to compare notes either way.
> >
> > [1] My specific situation is that I have multiple Linux PCs with large
> > volumes of research data. This data is generally unique to each PC,
> > with little duplication between PCs (at least between PCs of different
> > research groups). The storage to back up this data is also usually
> > funded by individual faculty accounts (sometimes grants) and as such
> > should be dedicated to that faculty member's PC(s). Separate BackupPC
> > servers (real or virtual) are another option I have considered, but
> > they seem unnecessarily wasteful.
>
> I read what you write and come to a different conclusion...
> 1. First, I don't see why separate budgets require separate per-pc
>    pools but not separate BackupPC instances. If it's merely an
>    accounting issue, then come up with a fair metric that roughly
>    aligns with usage. For example, charge each user based upon the
>    relative size of their fulls compared to the total storage size.
>    That seems a lot simpler and more robust than rewriting the basic
>    code of BackupPC. If the issue were security and encryption, or
>    statutory requirements for keeping data isolated, then I could
>    understand the need for pool separation. Otherwise, you are
>    pursuing the wrong solution to the wrong problem. If it's not even
>    an accounting issue, but a "philosophical" one, then perhaps you
>    should be pursuing the question on alt.philosophy.backuppc...
>    because that is beyond the scope of this group...
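
The chargeback metric suggested above can be sketched in a few lines. The
host names and sizes here are made-up examples; in a real setup the per-host
full-backup sizes would come from BackupPC's host summary, however you
collect them:

```python
# Split a shared storage bill across hosts in proportion to the size of
# their full backups. Host names and numbers are hypothetical examples.
full_sizes_gb = {"physics-pc": 800, "chem-pc": 150, "bio-pc": 50}

def charge_per_host(full_sizes, total_cost):
    """Allocate total_cost proportionally to each host's full-backup size."""
    total_size = sum(full_sizes.values())
    return {host: total_cost * size / total_size
            for host, size in full_sizes.items()}

# e.g. a $1000/year storage bill: physics-pc pays 800/1000 of it, and so on.
charges = charge_per_host(full_sizes_gb, 1000.0)
```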
>
> 2. Second, why go to the trouble of rewriting deeply embedded code to
>    separate pools within a single BackupPC daemon process rather than
>    just running separate instances of BackupPC? Once the pools are
>    separate, there are truly de minimis savings in running one
>    vs. multiple BackupPC instances. The storage (pool and pc trees) is
>    completely separate, the BackupPC_dump instances run separately
>    anyway, and the BackupPC_nightly processes are separate. The only
>    "savings" is that you have a single backup daemon running, which
>    takes up only trivial processing power, plus maybe a few small
>    shared log and config files. On the other hand, you are relying on
>    untested, hacked code to perform the critical task of backups, with
>    the possibility of introducing subtle errors that may not surface
>    until you need to do a restore and it's too late. Can you be sure
>    you have located all the places in the code where tests are made
>    for pool vs. cpool? Seems a waste of time and foolish to me.
>
> 3. Finally, if you want to use separate filesystems for the pools,
>    then you will also need separate pc trees. And you will need to
>    edit the BackupPC startup code that tests whether hard links can be
>    written, and probably also hack how BackupPC moves and deletes
>    'trash'. So, now you have separate pools, separate pc trees,
>    separate dump (and link) processes (always true), separate
>    (i.e., non-shared) BackupPC_nightly processes, separate (i.e.,
>    non-shared) BackupPC_trashClean, etc... Seems to me like you are
>    doing a lot of coding work and taking a huge risk of errors in
>    order to do something that is conceptually not much different from
>    running totally separate BackupPC instances, and all just for the
>    sake of "philosophy"...
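
The hard-link constraint behind point 3 is a plain filesystem fact: a hard
link cannot span filesystems (link() fails with EXDEV), so a pool and the pc
trees that link into it must live on the same filesystem. A minimal sketch
of the kind of check BackupPC's startup effectively performs (the function
name and paths are illustrative, not BackupPC's actual code):

```python
import os
import tempfile

def hardlinks_work(topdir):
    """Return True if a hard link can be created inside topdir.

    If the link target were on a different filesystem (e.g. a pool moved
    to another volume), os.link would raise OSError with errno EXDEV.
    """
    src = os.path.join(topdir, "linktest.src")
    dst = os.path.join(topdir, "linktest.dst")
    with open(src, "w") as f:
        f.write("test")
    try:
        os.link(src, dst)                    # fails with EXDEV across filesystems
        return os.stat(src).st_nlink == 2    # both names now refer to one inode
    except OSError:
        return False
    finally:
        for p in (src, dst):
            if os.path.exists(p):
                os.remove(p)

with tempfile.TemporaryDirectory() as d:
    print(hardlinks_work(d))   # True when topdir is on one filesystem
```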
>
> BTW, running separate instances of BackupPC need not require separate
> virtual machines. One could probably just make changes to config.pl
> to keep the separate processes from colliding...
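
Along those lines, a second instance would mainly need its own data, log,
and server-port settings so the daemons don't collide. The fragment below
is only a sketch: the $Conf variable names are standard BackupPC 3.x
settings, but the paths and port number are made-up examples, and a real
setup would also need its own ConfDir and init script.

```perl
# Hypothetical config.pl overrides for a second BackupPC instance
# (paths and port are examples only; each instance also needs its own
# ConfDir and init/startup script).
$Conf{TopDir}     = '/var/lib/backuppc2';   # separate pool and pc trees
$Conf{LogDir}     = '/var/log/backuppc2';   # separate log files
$Conf{ServerPort} = 10873;                  # avoid clashing with instance #1
```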
>
------------------------------------------------------------------------------
Everyone hates slow websites. So do we.
Make your web apps faster with AppDynamics
Download AppDynamics Lite for free today:
http://p.sf.net/sfu/appdyn_d2d_mar
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/