Michael Stowe wrote at about 14:41:21 -0500 on Monday, August 31, 2009:

> > This still is not a solution for all of us. First, I store the backups
> > on a consumer-level NAS device that does not easily facilitate adding
> > partitions without additional hacking and risks to data integrity. The
> > device also does not support LVM. I do not want to have to copy a whole
> > 1TB partition just to copy over a few hundred GB of BackupPC
> > data. Second, at some point, I may want to move the pool to another
> > drive or server, and I don't want to have to fiddle with low-level block
> > copying and partition resizing in the hope that I can get it right
> > without making mistakes in either the partition itself, the underlying
> > LVM setup, or the further underlying (software) RAID setup -- I have
> > done this manually before, and it takes real care to get the sequence
> > right and not do something stupid.
>
> While I'm not sure I'd go so far as to call a consumer NAS a fringe case,
> it certainly limits your options. I suspect that the majority of
> BackupPC users put their backups on a file system with some measure of
> redundancy and leave it at that, rather than take the additional step of
> copying the backup elsewhere, and those who do have no doubt architected
> their hardware and software to handle this extra step.
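The size mismatch described above (a 1TB partition holding only a few hundred GB of pooled data) comes from hardlink deduplication: each pooled file may be linked into many backups, so any copy or accounting that doesn't track inodes charges the data once per link. A minimal sketch of the distinction, counting each inode's data only once (the directory layout is generic, not BackupPC-specific):

```python
import os

def pool_size(topdir):
    """Walk a hardlink-pooled tree, counting each inode's data once.

    Returns (apparent, actual): `apparent` is what a naive file-by-file
    copy would transfer (size charged once per link); `actual` is the
    unique data really stored on disk.
    """
    seen = set()            # (device, inode) pairs already counted
    apparent = actual = 0
    for root, _dirs, files in os.walk(topdir):
        for name in files:
            st = os.lstat(os.path.join(root, name))
            apparent += st.st_size              # per-link accounting
            if (st.st_dev, st.st_ino) not in seen:
                seen.add((st.st_dev, st.st_ino))
                actual += st.st_size            # unique data only
    return apparent, actual
```

This is why hardlink-unaware transfer tools inflate the pool on copy, and why block-level copies (which preserve inodes by definition) keep getting suggested despite their own risks.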
Well, there is an active group that uses these devices -- and I'm not sure
what makes one "fringe" vs. not. I could just as easily argue the opposite:
BackupPC was designed for, and is best suited to, those of us not in big
enterprise situations. SOHO users are a well-defined subset of the target
user base, and a Linux-based consumer NAS device with RAID is a very good
way of getting cheap, reliable storage for that user group.

> In other words, I'd suggest that working around the limitations of your
> consumer-grade NAS is probably beyond the scope of any backup system.

How nice of you. And please remind me of all the code you have contributed
to BackupPC and to this user group...

> > Finally, at a minimum, the installation document for BackupPC should
> > clearly warn users to use a dedicated partition with LVM for TopDir if
> > they want to have any hope of practically backing up, transferring, or
> > expanding their backup directory in the future.
>
> I call shenanigans. I'd suggest that it's beyond the scope of the
> documentation of a backup solution to provide a basic education on LVM,
> nor, frankly, is LVM the only solution, nor is it available on every
> platform on which BackupPC runs.

No one said "education". I said warn users of the advisability of using a
dedicated filesystem that can easily be copied/resized/moved, because most
people don't recognize the problem of copying/moving/resizing their
BackupPC database until they have been using it for a while, at which
point it might be too late. And I didn't say LVM is the only solution --
though I am not aware of many other solutions on Linux that allow easy
resizing and spreading of a single filesystem across multiple volumes. If
there are many others that allow that, please enlighten me...
I think you are the one with "shenanigans", based on your unwillingness to
open your mind to other needs and other solutions -- and if you had been a
long-time contributor to this newsgroup, you would realize that this issue
continues to arise without a satisfactory solution, no matter how many
times people parrot the words "LVM", "ZFS", "block copy", etc.

> > I really fail to understand the dogged resistance to finding a viable
> > solution to a well-known and repeated issue with BackupPC that does
> > not rely on filesystem-level kludges. I could see if this were given
> > as a temporary workaround, but why should we continue to see this as
> > the ideal solution rather than trying to work on a more robust and
> > comprehensive solution, even if it falls to a long-term roadmap item?
>
> I'm going to have to object to your use of the term "kludge" here. If
> you're referring to hardlinks, they're a basic feature of file systems,
> and I don't think actually using them for their intended purpose (i.e.,
> pointing alternative directory entries at identical files) can be fairly
> characterized as a kludge.

It is a kludge to use hard links as the way of tracking and expiring
backups while using an attrib file to store the usual attributes
associated with a file. The kludge is not the use per se of hard links to
store the file data, but the resulting collapsing of multiple versions of
the same file onto a single inode, even though those versions correspond
to different inodes and file attributes in the source data. This requires
the creation of a series of attrib files to add back the usual file
attributes and to distinguish which versions of the files are single
copies vs. hard links vs. soft links vs. special files, etc. This
construction is slow, difficult to extend, and divorced from the usual
filesystem construct, where a file's attributes are stored along with its
data in a single filesystem entry/inode.
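The collapsing described above can be sketched in a few lines. This is a simplified illustration, not BackupPC's actual code: the real pool uses an MD5-based content hash and stores metadata in packed per-directory attrib files, whereas here a plain dict stands in for the attrib store. The point it demonstrates is that once two source files with different owners/modes/mtimes share one pooled inode, their original attributes can only live out-of-band:

```python
import hashlib, os

def store_in_pool(src, pool_dir, backup_path, attrib):
    """Pool a file by content hash: identical content from any backup
    collapses onto one pooled inode via a hardlink.  Because that inode
    can carry only ONE set of owner/mode/mtime values, each source
    file's original attributes must be recorded separately -- here in
    the `attrib` dict, standing in for BackupPC's attrib files."""
    with open(src, "rb") as f:
        data = f.read()
    digest = hashlib.md5(data).hexdigest()   # content-addressed pool key
    pooled = os.path.join(pool_dir, digest)
    if not os.path.exists(pooled):           # first copy of this content
        with open(pooled, "wb") as f:
            f.write(data)
    os.link(pooled, backup_path)             # later copies are just links
    st = os.lstat(src)
    attrib[backup_path] = {"mode": st.st_mode, "uid": st.st_uid,
                           "mtime": st.st_mtime}   # metadata kept out-of-band
```

Note also the asymmetry mentioned below: since the hardlink mechanism is already consumed by the pooling itself, hard links that exist *in the source data* cannot be represented as hard links in the backup tree and need yet another special-case encoding.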
In a very real sense, the current implementation already uses an
artificial database structure -- albeit a slow, proprietary,
non-extensible, non-optimizable one: to wit, the attrib files present in
each and every pc directory. The real essence of my suggestion is to
replace the scattered myriad of linear attrib databases with a single
relational database that can benefit from all the features, speed, tools,
and optimizations of modern databases. As has been mentioned many times in
the past, such a move would solve many, many problems, though it would
obviously require some significant development work.

If you haven't studied the structure of the attrib file and seen how in
practice it is created/read/written/modified, then I strongly encourage
you to do so. Also, check how incrementals are reconstructed. All these
operations are difficult to scale and extend in the current
implementation. Also check how hard and soft links are implemented in
backups (here I mean how hard links and soft links in the source are
stored) -- you will see that the method is kludgey and asymmetrical (for
hard links), in particular because the hard-link notion is already used
for the pooling.

I have written (and contributed) several add-on tools to manipulate the
pool, the pc hierarchy, and the attrib files, and while the standard
BackupPC implementation is clever and understandable, I can assure you
that it is still very much a kludge and quite difficult to extend. Again,
I think BackupPC is a great program, and I fully understand how and why it
developed as it did -- in particular, I have nothing but the highest
respect for Craig Barratt and the other active contributors. I am just
saying that the current implementation has limitations that hinder
further development and extension.
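To make the suggestion concrete, here is one purely illustrative shape such a catalog could take -- a hypothetical SQLite schema, not a proposed or actual BackupPC design; all table and column names are invented for this sketch:

```python
import sqlite3

# Illustrative sketch of the idea above: one relational database in
# place of the per-directory attrib files.  Names are hypothetical.
SCHEMA = """
CREATE TABLE pool_file (
    digest  TEXT PRIMARY KEY,     -- content hash identifying pooled data
    size    INTEGER NOT NULL
);
CREATE TABLE backup_entry (
    backup_num INTEGER NOT NULL,  -- which backup this entry belongs to
    path       TEXT    NOT NULL,  -- path within the backed-up share
    type       TEXT    NOT NULL,  -- 'file', 'dir', 'symlink', 'hardlink', ...
    mode       INTEGER,
    uid        INTEGER,
    gid        INTEGER,
    mtime      INTEGER,
    digest     TEXT REFERENCES pool_file(digest),  -- NULL for dirs/symlinks
    PRIMARY KEY (backup_num, path)
);
"""

def open_catalog(db_path=":memory:"):
    """Open (and initialize) the backup catalog database."""
    con = sqlite3.connect(db_path)
    con.executescript(SCHEMA)
    return con
```

With something like this, questions such as "which backups reference this pooled file?" or "reconstruct the file list of incremental N" become single indexed queries instead of walks over every attrib file in the pc hierarchy, and expiry becomes reference counting rather than link counting.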
You are welcome to disagree, but calling other people's inputs
"shenanigans" or implying that they are "fringe" is just not a polite way
of discussion -- especially when you seem to be new to this group and, to
my knowledge, have contributed nothing other than to parrot back the same
tired partial solution that others more knowledgeable than you have
suggested, and to criticize without understanding. The bottom line is that
there are legitimate, serious, and passionate issues being debated here,
with merits to both sides -- so my suggestion would be for you to listen
and learn first before jumping in mouth first.

_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/