Re: Old releases support
On 03/02/13 19:32, Remko Lodder wrote: I just removed 7.4/7-STABLE from the list and moved it to the 'unsupported' section. Thanks for mentioning this! Thanks to you! Although I'm fine (I know what I asked for :-), I'll point out another little thing: 7.4 is still listed as legacy on the home page and here: http://www.freebsd.org/releases/. I guess it should go away there too. bye av. ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org
Re: Grepping through a disk
On 4 Mar 2013, at 01:36, Polytropon free...@edvax.de wrote: Due to an fsck file system repair I lost the content of a file I consider important, but it hasn't been backed up yet. The file name is still present, but no blocks are associated (file size is zero). I hope the data blocks (which are now probably marked unused) are still intact, so I thought I'd search for them, because I can remember specific text that should have been in that file. As I don't need any fancy stuff like a progress bar, I decided to write a simple command, and I quickly got something up and running which I _assume_ will do what I need. This is the command I've been running interactively in bash:

$ N=0; while true; do echo ${N}; dd if=/dev/ad6 of=/dev/stdout bs=10240 count=1 skip=${N} 2>/dev/null | grep PATTERN; if [ $? -eq 0 ]; then break; fi; N=`expr ${N} + 1`; done

To make it look a bit better and illustrate the simple logic behind my idea:

N=0
while true; do
	echo ${N}
	dd if=/dev/ad6 of=/dev/stdout bs=10240 count=1 skip=${N} \
		2>/dev/null | grep PATTERN
	if [ $? -eq 0 ]; then
		break
	fi
	N=`expr ${N} + 1`
done

Here PATTERN refers to the text. It's only a small, but very distinctive portion. I'm searching in blocks of 10 kB so it's easier to continue in case something has been found. I plan to output the resulting block (it's not a real disk block, I know, it's simply a unit of 10 kB disk space) and maybe the previous and next one (in case the _real_ block containing the data has been split across more than one of those units). I will then clean out the garbage (maybe from other files) because I can easily determine the beginning and the end of the file. Needless to say, it's a _text_ file. I understand that grep operates on text files, but it will also happily return 0 if the text to search for appears in a binary file, and possibly return the whole file as a search result (in case there are no newlines in it). My questions: 1. Is this the proper way of stupidly searching a disk? 2.
Is the block size (bs= parameter to dd) good, or should I use a different value for better performance? 3. Is there a program known that already implements the functionality I need in terms of data recovery? Results so far: The disk in question is a 1 TB SATA disk. The command has been running for more than 12 hours now and has returned one false-positive result, so basically it seems to work, but maybe I can do better? I can always continue the search by adding 1 to ${N}, setting it as the start value, and re-running the command. Any suggestion is welcome!

Hey, that's actually a pretty creative way of doing things ;) Just to make sure, you've stopped the daemons and all the stuff that could potentially write to the drive and nuke your blocks, right?
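For reference, the loop above can be wrapped into a small /bin/sh function. This is only a sketch of the approach described in the post; the function name, the parameterized device/pattern, and the end-of-device check are my additions, not part of the original command.

```shell
# find_unit DEVICE PATTERN UNITSIZE
# Linear scan as in the post: read one unit at a time, grep it, and
# print the first matching unit number. Returns status 1 at end of
# device/file. Each unit is read twice (once to detect EOF, once to
# grep), which is wasteful but keeps the sketch simple.
find_unit() {
    dev=$1; pat=$2; bs=$3
    N=0
    while :; do
        bytes=$(dd if="$dev" bs="$bs" count=1 skip="$N" 2>/dev/null | wc -c)
        [ "$bytes" -eq 0 ] && return 1      # nothing left to read
        if dd if="$dev" bs="$bs" count=1 skip="$N" 2>/dev/null \
                | grep -q "$pat"; then
            echo "$N"
            return 0
        fi
        N=$((N + 1))
    done
}

# Hypothetical invocation matching the post's parameters:
# find_unit /dev/ad6 PATTERN 10240
```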
Confused by restore(8) man page example
In the man page for restore(8) I see the following: The -r flag ... can be detrimental to one's health if not used carefully (not to mention the disk). An example:

newfs /dev/da0s1a
mount /dev/da0s1a /mnt
cd /mnt
restore rf /dev/sa0

Personally, I utterly fail to see what point the author is attempting to illustrate with the above example. I mean, what part of this, exactly, may be detrimental to one's health? It's an enigma to me. All I see is a pre-existing BSD partition being explicitly newfs'ed and then mounted, followed by some stuff being restored to that (clean) BSD partition from whatever is currently sitting on the tape drive called /dev/sa0. So? What possible problem could derive from merely that? I don't see any. What's the problem? I'm confused.
Re: Confused by restore(8) man page example
On Mar 4, 2013, at 01:47, Ronald F. Guilmette r...@tristatelogic.com wrote: All I see is a pre-existing BSD partition being explicitly newfs'ed and then mounted, followed by some stuff being restored to that (clean) BSD partition from whatever is currently sitting on the tape drive called /dev/sa0. So? What possible problem could derive from merely that? I don't see any. I guess the same text in the man page could be read several different ways! The way I read it (which may or may not be correct) is that the example given is an example of how to use it *correctly*. It sounds to me like it's warning against deviating too far from the steps given in the example. I can see as how the text might allow other interpretations, though! ~Ben (who is always careful to avoid using out-of-range values with mktime() when setting up lunch with promptness sticklers in Riyadh...)
Re: Grepping through a disk
On Mon, 4 Mar 2013 10:09:50 +0100, Damien Fleuriot wrote: Hey that's actually a pretty creative way of doing things ;) It could be more optimum. :-) My thought is that I could maybe use a better bs= to make the whole thing run faster. I understand that for every unit, a subprocess dd | grep is started and an if [ ] test is run. Maybe doing this with 1 MB per unit would be better? Note that I need to grep through 1 TB in 10 kB steps... Just to make sure, you've stopped daemons and all the stuff that could potentially write to the drive and nuke your blocks right ? Of course. The /dev/ad6 disk is a separate data disk which is not in use at the moment (unmounted). Still it is possible that the block has already been overwritten, but when the search has finished, I will know almost certainly what state the data is in. I would rewrite the file, but my eidetic memory is not working well anymore, so I can only remember parts of it... :-( -- Polytropon Magdeburg, Germany Happy FreeBSD user since 4.0 Andra moi ennepe, Mousa, ...
p5-Bit-Vector SHA256 Checksum mismatch (was PERL problem installing SQLgrey)
Hi, I have the same problem. I use poudriere to package all needed software for my servers. SQLgrey is the only one failing because of the dependency for p5-Bit-Vector. I tried it several times. Even downloading manually to /usr/ports/distfiles doesn't work. I tested the SHA256 checksum manually, and it is correct. Here is the log from poudriere: # cat p5-Bit-Vector-7.2_2.log build started at Fri Mar 1 12:38:24 CET 2013 port directory: /usr/ports/math/p5-Bit-Vector building for: 9.1-RELEASE amd64 maintained by: to...@freebsd.org Makefile ident: $FreeBSD: ports/math/p5-Bit-Vector/Makefile,v 1.22 2013/01/22 09:50:13 svnexp Exp $ ---Begin Environment--- OSVERSION=901000 UNAME_v=FreeBSD 9.1-RELEASE UNAME_r=9.1-RELEASE BLOCKSIZE=K MAIL=/var/mail/root PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/root/bin STATUS=1 PKG_EXT=txz WITH_CCACHE_BUILD=yes FORCE_PACKAGE=yes tpid=15131 POUDRIERE_BUILD_TYPE=bulk PKGNG=1 PKG_DELETE=/usr/local/sbin/pkg delete -y -f PKG_ADD=/usr/local/sbin/pkg add CCACHE_DIR=/var/cache/ccache PWD=/mnt/system/DATEN/poudriere/basefs/data/logs/bulk/91amd64/hostportstree LOGS=/mnt/system/DATEN/poudriere/basefs/data/logs HOME=/root USER=root SKIPSANITY=0 LOCALBASE=/usr/local PACKAGE_BUILDING=yes ---End Environment--- ---Begin OPTIONS List--- ---End OPTIONS List--- ===phase: depends == === p5-Bit-Vector-7.2_2 depends on file: /usr/local/sbin/pkg - not found ===Verifying install for /usr/local/sbin/pkg in /usr/ports/ports-mgmt/pkg === Installing existing package /usr/ports/packages/All/pkg-1.0.8.txz Installing pkg-1.0.8...
done If you are upgrading from the old package format, first run: # pkg2ng === Returning to build of p5-Bit-Vector-7.2_2 === p5-Bit-Vector-7.2_2 depends on file: /usr/local/bin/perl5.14.2 - not found ===Verifying install for /usr/local/bin/perl5.14.2 in /usr/ports/lang/perl5.14 === Installing existing package /usr/ports/packages/All/perl-5.14.2_2.txz Installing perl-5.14.2_2...Removing stale symlinks from /usr/bin... Skipping /usr/bin/perl Skipping /usr/bin/perl5 Done. Creating various symlinks in /usr/bin... Symlinking /usr/local/bin/perl5.14.2 to /usr/bin/perl Symlinking /usr/local/bin/perl5.14.2 to /usr/bin/perl5 Done. Cleaning up /etc/make.conf... Done. Spamming /etc/make.conf... Done. done === Returning to build of p5-Bit-Vector-7.2_2 === p5-Bit-Vector-7.2_2 depends on file: /usr/local/bin/perl5.14.2 - found === p5-Bit-Vector-7.2_2 depends on package: p5-Carp-Clan=0 - not found ===Verifying install for p5-Carp-Clan=0 in /usr/ports/devel/p5-Carp-Clan === Installing existing package /usr/ports/packages/All/p5-Carp-Clan-6.04.txz Installing p5-Carp-Clan-6.04... done === Returning to build of p5-Bit-Vector-7.2_2 === p5-Bit-Vector-7.2_2 depends on file: /usr/local/bin/perl5.14.2 - found === p5-Bit-Vector-7.2_2 depends on file: /usr/local/bin/ccache - not found ===Verifying install for /usr/local/bin/ccache in /usr/ports/devel/ccache === Installing existing package /usr/ports/packages/All/ccache-3.1.9.txz Installing ccache-3.1.9...Create compiler links... create symlink for cc create symlink for cc (world) create symlink for c++ create symlink for c++ (world) create symlink for gcc create symlink for gcc (world) create symlink for g++ create symlink for g++ (world) create symlink for clang create symlink for clang (world) create symlink for clang++ create symlink for clang++ (world) done NOTE: Please read /usr/local/share/doc/ccache/ccache-howto-freebsd.txt for information on using ccache with FreeBSD ports and src. 
=== Returning to build of p5-Bit-Vector-7.2_2 == === Cleaning for p5-Bit-Vector-7.2_2 ===phase: check-config== == ===phase: fetch == === p5-Bit-Vector-7.2_2 depends on file: /usr/local/sbin/pkg - found == ===phase: checksum== === p5-Bit-Vector-7.2_2 depends on file: /usr/local/sbin/pkg - found = SHA256 Checksum mismatch for Bit-Vector-7.2.tar.gz. === Refetch for 1 more times files: Bit-Vector-7.2.tar.gz === p5-Bit-Vector-7.2_2 depends on file: /usr/local/sbin/pkg - found = Bit-Vector-7.2.tar.gz doesn't seem to exist in /usr/ports/distfiles/. = Attempting to fetch ftp://ftp.cpan.org/pub/CPAN/modules/by-module/Bit/Bit-Vector-7.2.tar.gz fetch: ftp://ftp.cpan.org/pub/CPAN/modules/by-module/Bit/Bit-Vector-7.2.tar.gz: Unknown FTP error = Attempting to fetch http://www.cpan.dk/modules/by-module/Bit/Bit-Vector-7.2.tar.gz fetch: http://www.cpan.dk/modules/by-module/Bit/Bit-Vector-7.2.tar.gz: Requested Range Not Satisfiable = Attempting to fetch
Re: Grepping through a disk
On 3/3/2013 6:36 PM, Polytropon wrote: [...] This is the command I've been running interactively in bash: $ N=0; while true; do echo ${N}; dd if=/dev/ad6 of=/dev/stdout bs=10240 count=1 skip=${N} 2>/dev/null | grep PATTERN; if [ $? -eq 0 ]; then break; fi; N=`expr ${N} + 1`; done [...] My questions: 1. Is this the proper way of stupidly searching a disk? 2. Is the block size (bs= parameter to dd) good, or should I use a different value for better performance? 3. Is there a program known that already implements the functionality I need in terms of data recovery? [...] Any suggestion is welcome!

I'd call bs= essential for speed. Any copying will be faster with something higher. Also, there's the possibility, very annoying, that your search string overlaps a boundary between reads. I'd probably check 1 MB blocks, but advance maybe 950 kB each time. Make sure you're reading from block-aligned offsets for maximum speed. I know disk editors exist; I remember using one on Mac OS 8.6 to find a lost file. That was back on a 6 GB hard drive. Depending on the file size, you could open the disk in vi and just search from there, or just run strings on the disk and pipe it to vi.
Re: Confused by restore(8) man page example
On Mon, 04 Mar 2013 01:47:24 -0800 Ronald F. Guilmette r...@tristatelogic.com wrote: In the man page for restore(8) I see the following: The -r flag ... can be detrimental to one's health if not used carefully (not to mention the disk). An example:

newfs /dev/da0s1a
mount /dev/da0s1a /mnt
cd /mnt
restore rf /dev/sa0

Personally, I utterly fail to see what point the author is attempting to illustrate with the above example. I mean what part of this, exactly, may be detrimental to one's health? It's an enigma to me. There's nothing wrong with the example. I think "An example:" should start a new paragraph, to make it clear that it is not related to the warning. The detrimental effects cut in when you use -r on a filesystem that is not pristine, or at least not in the expected state for restoring an incremental dump. -- Steve O'Hara-Smith st...@sohara.org
Re: Grepping through a disk
On Mon, 04 Mar 2013 04:15:48 -0600, Joshua Isom wrote: I'd call bs= essential for speed. Any copying will be faster with something higher. I thought about that. Narrowing down _if_ something has been found is easy, e. g. when the positive 1 MB unit is dd'ed to a file, further work can easily be applied. Also, there's the possibility, very annoying, that your search string overlaps a place where you read. I'd probably check 1M blocks, but advance maybe 950k each time. I also thought about that; that's why the distinctive phrase I'm searching for is less than 10 characters long. Still it's possible that it appears across a boundary of units, no matter how big or small I select bs=. But I don't know how to do this. From reading man dd my impression (consistent with my experience) is that the option skip= operates in units of bs= size, so I'm not sure how to compose a command that reads units of 1 MB, but skips in units of 950 kB. Maybe some parts of my memory have also been marked unused by fsck. :-) Make sure you're reading from block offsets for maximum speed. How do I do that? The disk is a normal HDD which has been initialized with newfs -U and no further options. ad6: 953869MB Hitachi HDS721010DLE630 MS2OA5R0 at ata3-master UDMA100 SATA 1.5Gb/s The file system spans the whole disk. I know disk editors exist, I remember using one on Mac OS 8.6 to find a lost file. That was back on a 6 gig hard drive. Ha, I've done stuff like that on DOS with important business data many years ago, using the Norton Disk Doctor (NDD.EXE) when Norton (today: Symantec) wasn't yet synonymous with The Yellow Plague. This program actually was quite cool: you could search for things and manipulate disks on several levels (files, file system and below). I had even rewritten an entire partition table from scratch, from memory and a handheld calculator, after an OS/2 installation went crazy.
:-) Depending on the file size, you could open the disk in vi and just search from there, or just run strings on the disk and pipe it to vi. You mean like strings /dev/ad6 | something, without dd? That would give me _no_ progress indicator (with my initial approach I have increasing numbers at least), but I doubt I can load a 1 TB partition into a vi session with only 2 GB RAM in the machine. If I try strings /dev/ad6 I get a warning: strings: Warning: '/dev/ad6' is not an ordinary file. True. But this opens up a useful use of cat: cat /dev/ad6 | strings. (Interesting idea, I will investigate this further.) The file size of the file I'm searching for is less than 10 kB. It's a relatively small text file which got some subsequent additions in the last days, but hasn't been part of the backup job yet. I can only remember parts of those additions, because as I said my brain is not good with computers. :-) Or do you think of something different? If yes, please explain. The urge to learn is strong when something went wrong. :-) -- Polytropon Magdeburg, Germany Happy FreeBSD user since 4.0 Andra moi ennepe, Mousa, ...
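As a side note on the strings route: if the installed grep supports -a (treat binary data as text) and -b (print byte offsets), the whole scan can be pushed into grep itself, with no dd loop and a byte-exact offset in the output. A hedged sketch; the wrapper function is my own addition, and /dev/ad6 and PATTERN are the placeholders from the thread:

```shell
# offsets_of PATTERN FILE
# Print OFFSET:MATCH for every occurrence, treating FILE as binary.
# -a  do not give up when the input looks binary
# -b  prefix each match with its byte offset
# -o  print only the matched text, not the surrounding "line"
offsets_of() {
    grep -abo "$1" "$2"
}

# On the disk from the thread this would be, e.g.:
# offsets_of 'PATTERN' /dev/ad6
```

The byte offset printed can then be fed straight back into dd's skip= to extract the surrounding region.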
Re: Grepping through a disk
On Mon, 4 Mar 2013 12:15:24 +0100 Polytropon free...@edvax.de wrote: But I don't know how to do this. From reading man dd my impression (consistent with my experience) is that the option skip= operates in units of bs= size, so I'm not sure how to compose a command that reads units of 1 MB, but skips in units of 950 kB. Maybe some parts of my memory have also been marked unused by fsck. :-) Not too hard (you'll kick yourself when you read down) - translation to valid shell script is left as an exercise for the reader :) bs=50k count=(n*20) skip=(n*20 - 1) Probably nicer to use powers of 2 bs=64k count=(n*16) skip=(n*16 - 1) -- Steve O'Hara-Smith st...@sohara.org
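The overlapping-window idea from this exchange can be sketched in shell. This is a hedged illustration rather than the exact arithmetic quoted above: it reads 16 blocks of 64 kB (1 MB) per step but advances only 15 blocks, so consecutive windows overlap by one block and a pattern straddling a window boundary is still seen whole; the function name and the end-of-device check are my own additions.

```shell
# overlap_search DEVICE PATTERN
# Read 16 x 64 kB = 1 MB per step, advance 15 blocks per step, so
# each window overlaps the previous one by 64 kB. Prints the byte
# offset of the first matching window; returns 1 if nothing matched.
overlap_search() {
    dev=$1; pat=$2
    n=0
    while :; do
        skip=$((n * 15))
        bytes=$(dd if="$dev" bs=64k count=16 skip="$skip" 2>/dev/null | wc -c)
        [ "$bytes" -eq 0 ] && return 1          # end of device
        if dd if="$dev" bs=64k count=16 skip="$skip" 2>/dev/null \
                | grep -q "$pat"; then
            echo $((skip * 65536))              # window start in bytes
            return 0
        fi
        n=$((n + 1))
    done
}
```

As long as the pattern is shorter than one 64 kB block, no placement can escape both of the windows that cover it.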
backups using rsync
As a result of this past Black Friday weekend, I now enjoy a true abundance of disk space, for the first time in my life. I wanna make a full backup, on a weekly basis, of my main system's shiny new 1TB drive onto another 1TB drive that I also picked up cheap back on Black Friday. I've been planning to set this up for some long time now, but I've only gotten 'round to working on it now. Now, unfortunately, I have just been bitten by the evil... and apparently widely known (except to me)... ``You can't use dump(8) to dump a journaled filesystem with soft updates'' bug-a-boo. Sigh. The best laid plans of mice and men... I _had_ planned on using dump/restore and making backups from live mounted filesystems while the system was running. But I really don't want to have to take the system down to single-user mode every week for a few hours while I'm making my disk-to-disk backup. So now I'm looking at doing the backups using rsync. I see that rsync can nowadays properly cope with all sorts of oddities, like fer instance device files, hard-linked files, ACLs, file attributes, and all sorts of other unusual but important filesystem thingies. That's good news, but I still have to ask the obvious question: If I use all of the following rsync options... -a, -H, -A, -X, and -S... when trying to make my backups, and if I do whatever additional fiddling is necessary to insure that I separately copy over the MBR and boot loader also to my backup drive, then is there any reason that, in the event of a sudden meteor shower that takes out my primary disk drive while leaving my backup drive intact, I can't just unplug my old primary drive, plug in my (rsync-created) backup drive, reboot, and be back in the saddle again, almost immediately, and with -zero- problems? Regards, rfg P.S. My apologies if I've already asked this exact same question here before. I'm getting a sense of deja vu... or else a feeling that I am often running around in circles, chasing my own tail. P.P.S.
Before anyone asks, no I really _do not_ want to just use RAID as my one and only backup strategy. RAID is swell if your only problem is hardware failures. As far as I know however it will not save your bacon in the event of a fumble fingers rm -rf * moment. Only frequent and routine actual backups can do that.
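For what it's worth, the kind of invocation being asked about might look like the sketch below. The wrapper function, the /backup mount point, and the exclusion list are my own placeholders, not from the post; -A (ACLs) and -X (extended attributes) only help where the rsync build and both filesystems support them, and the MBR/boot loader still have to be copied separately, as noted.

```shell
# mirror SRC DST
# One-way mirror: archive mode (-a), preserve hard links (-H), handle
# sparse files (-S), and delete files on DST that no longer exist on
# SRC. Trailing slashes make rsync copy the *contents* of SRC into DST.
# Add -A and -X (i.e. rsync -aHAXS) where ACL/xattr support is present.
mirror() {
    rsync -aHS --delete "$1"/ "$2"/
}

# Hypothetical weekly full-disk run, excluding pseudo-filesystems and
# the backup mount point itself:
# rsync -aHAXS --delete --exclude /backup --exclude /dev --exclude /proc \
#     / /backup/
```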
Re: Grepping through a disk
On Mon, 4 Mar 2013 11:29:00 +, Steve O'Hara-Smith wrote: On Mon, 4 Mar 2013 12:15:24 +0100 Polytropon free...@edvax.de wrote: But I don't know how to do this. From reading man dd my impression (consistent with my experience) is that the option skip= operates in units of bs= size, so I'm not sure how to compose a command that reads units of 1 MB, but skips in units of 950 kB. Maybe some parts of my memory have also been marked unused by fsck. :-) Not too hard (you'll kick yourself when you read down) - translation to valid shell script is left as an exercise for the reader :) bs=50k count=(n*20) skip=(n*20 - 1) Probably nicer to use powers of 2 bs=64k count=(n*16) skip=(n*16 - 1) Thanks for the pointer. I was so focused on finding the answer within dd that I hadn't thought about that. It's easy to write this in shell code. As a conclusion, I will apply for further IQ reduction; it seems that I have enough spare brain power I don't use anyway. :-) -- Polytropon Magdeburg, Germany Happy FreeBSD user since 4.0 Andra moi ennepe, Mousa, ...
chmod... what am I missing?
I must not be attending the Right conferences, or else the Right parties, because I don't get the joke. Could somebody please explain to me the meaning of the BUGS section of the chmod(1) man page, as distributed with 9.1-RELEASE?
Re: backups using rsync
On Mon, 04 Mar 2013 03:35:30 -0800, Ronald F. Guilmette wrote: Now, unfortunately, I have just been bitten by the evil... and apparently widely known (except to me)... ``You can't use dump(8) to dump a journaled filesystem with soft updates'' bug-a-boo. There are other tools you can use, for example tar or cpdup or rsync, as you've mentioned in the subject. I _had_ planned on using dump/restore and making backups from live mounted filesystems while the system was running. But I really don't want to have to take the system down to single-user mode every week for a few hours while I'm making my disk-to-disk backup. So now I'm looking at doing the backups using rsync. The same problems that apply when dumping live systems can bite you using rsync, but support for this on file system level seems to be better in rsync than what dump does on block level. If I use all of the following rsync options... -a, -H, -A, -X, and -S when trying to make my backups, and if I do whatever additional fiddling is necessary to insure that I separately copy over the MBR and boot loader also to my backup drive, then is there any reason that, in the event of a sudden meteor shower that takes out my primary disk drive while leaving my backup drive intact, I can't just unplug my old primary drive, plug in my (rsync-created) backup drive, reboot and be back in the saddle again, almost immediately, and with -zero- problems? You would have to make sure _many_ things are consistent on the backup disk. Regarding terminology, that would make the disk a failover disk, even if the act of making it the actual work disk is something you do manually. The disk would need to have an initialized file system and a working boot mechanism, both things rsync does not deal with, if I remember correctly. But as soon as you have initialized the disk for the first time and verified the result of your first rsync run, it should work with any subsequent change of data you transfer to that disk. P.P.S.
Before anyone asks, no I really _do not_ want to just use RAID as my one and only backup strategy. RAID _is_ **NO** backup. It's for redundancy and performance. If something is erased or corrupted, it's on all disks. And all the disks run permanently. A backup disk only runs twice: when backing something up, or when restoring. In your case, restoring means that the disk is put into operation in its role as a failover disk. RAID is swell if your only problem is hardware failures. Still, hardware failures can corrupt data on all participating disks. As far as I know however it will not save your bacon in the event of a fumble fingers rm -rf * moment. Only frequent and routine actual backups can do that. Correct. It's important to learn that lesson _before_ it is actually needed. :-) -- Polytropon Magdeburg, Germany Happy FreeBSD user since 4.0 Andra moi ennepe, Mousa, ...
Re: p5-Bit-Vector SHA256 Checksum mismatch (was PERL problem installing SQLgrey)
On Fri, 1 Mar 2013 14:06+0100, Wolfgang Riegler wrote: SQLgrey is the only one failing because of the dependency for p5-Bit-Vector. I tried it several times. Even downloading manually to /usr/ports/distfiles doesn't work. I tested the SHA256 checksum manually, and it is correct. I came across the same problem using portupgrade, actually a few weeks ago. My solution was to delete Bit-Vector-* from /usr/ports/distfiles and retry the portupgrade operation. The Bit-Vector-7.2.tar.gz file on my system is dated 2012-05-17T13:32:09+, has a size of 135586 bytes, and produces this SHA256 hash: SHA256 (Bit-Vector-7.2.tar.gz) = d60630f0eb033edabfe904416c6e7324fefff13699e97302361d1bad80f62e0f This seems consistent with the contents of the distinfo file. Make sure Bit-Vector-7.2.tar.gz is the only file in the distfiles directory with a filename beginning with Bit-Vector-7.2. Here the log from poudriere: [poudriere log snipped; it is the same log quoted in full in the original message above]
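To rule out a corrupted download by hand, the distfile's hash can be compared against the value in the port's distinfo. A small portable sketch; the helper function is my own, the fallback to sha256sum is for systems without FreeBSD's sha256(1), and the hash in the usage comment is the one quoted in the reply above:

```shell
# check_sha256 FILE EXPECTED_HASH
# Returns 0 if FILE's SHA256 digest matches EXPECTED_HASH.
check_sha256() {
    if command -v sha256 >/dev/null 2>&1; then      # FreeBSD base system
        actual=$(sha256 -q "$1")
    else                                            # e.g. GNU coreutils
        actual=$(sha256sum "$1" | cut -d' ' -f1)
    fi
    [ "$actual" = "$2" ]
}

# Example against the distinfo value quoted above:
# check_sha256 /usr/ports/distfiles/Bit-Vector-7.2.tar.gz \
#     d60630f0eb033edabfe904416c6e7324fefff13699e97302361d1bad80f62e0f
```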
Re: chmod... what am I missing?
On 03/04/2013 12:40 PM, Ronald F. Guilmette wrote: I must not be attending the Right conferences, or else the Right parties, because I don't get the joke. Could somebody please explain to me the meaning of the BUGS section of the chmod(1) man page, as distributed with 9.1-RELEASE? http://www.mail-archive.com/svn-src-all@freebsd.org/msg04124.html
Re: Grepping through a disk
Hi Polytropon (cc questions@), "Any suggestion is welcome!" Ideas: A themed list: freebsd...@freebsd.org. There's a bunch of fs tools in /usr/ports/sysutils/. My http://www.berklix.com/~jhs/src/bsd/jhs/bin/public/slice/ slices large images such as tapes and disks (also, the slice names give numbers convertible to offsets, probably useful to eg ..a). man fsdb. A bit of custom C should run a lot faster than shells and greps; eg when I was looking for nasty files from a bad SCSI controller, I wrote http://www.berklix.com/~jhs/src/bsd/jhs/bin/public/8f/ One could run eg slice asynchronously, suspend it with ^Z when you run out of space, periodically run some custom C (like 8f.c) or some find/grep -v/rm loop to discard most slices as of no interest, then resume slicing. OK, that's doing writes too, so it's slower than just reading; a later dd with seek=whatever depends on how conservative one's feeling about doing reruns with other search criteria. You mentioned the risk of a text string chopped across a slice/block boundary. Certainly a risk. Presumably the solution is to search twice, the second time after a dd with a half block/slice size offset, then slice/search again. If you run out of space to do that, you might write a temporary disklabel/bsdlabel with an extra partition at a half-block offset .. dodgy stuff that, do it while you're wide awake :-) Always a pain, these scenarios, losing hours of human CPU time; I hope the data's worth it. Good luck. Cheers, Julian -- Julian Stacey, BSD Unix Linux C Sys Eng Consultant, Munich http://berklix.com Reply below not above, like a play script. Indent old text with "> ". Send plain text. No quoted-printable, HTML, base64, multipart/alternative.
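Julian's half-offset double search can also be folded into a single pass: read two blocks per step but advance only one, so every block boundary lies inside some window. A minimal sh sketch of the idea; the demo image, pattern, and block size below are illustrative stand-ins (on the real system DEV would be /dev/ad6 and PATTERN the distinctive text):

```shell
DEV=demo.img
PATTERN=VERY-DISTINCTIVE-TEXT
BS=10240

# Build a demo image whose pattern straddles the first BS boundary --
# exactly the case a non-overlapping scan can miss.
{ dd if=/dev/zero bs=1 count=10236 2>/dev/null | tr '\0' 'A'
  printf '%s' "$PATTERN"
  dd if=/dev/zero bs=512 count=4 2>/dev/null
} > "$DEV"

TMP=$(mktemp)
MATCHES=""
N=0
# Each window is 2*BS bytes; the step is BS, so windows overlap by BS
# and a string split across one boundary is still seen whole.
while dd if="$DEV" of="$TMP" bs="$BS" count=2 skip="$N" 2>/dev/null &&
      [ -s "$TMP" ]
do
    grep -aq "$PATTERN" "$TMP" && MATCHES="$MATCHES $((N * BS))"
    N=$((N + 1))
done
rm -f "$TMP" "$DEV"
echo "match window offsets:$MATCHES"
```

Here the pattern crosses the 10240-byte boundary but is wholly contained in the first 20480-byte window, so only offset 0 is reported.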
Re: Soekris or .. ?
On 03/01/2013 14:24, C. P. Ghost wrote: On Fri, Mar 1, 2013 at 11:49 AM, Julien Cigar jci...@ulb.ac.be wrote: Hello, I'm looking for a small Soekris-like (http://soekris.com/) box which supports FreeBSD; any experience or brand to advise? I've been using Soekris net4801 boxes with FreeBSD for many years without problems, as small routers with pf, dhcp, bind, lighttpd etc. The last version I've tested is 8.3. I haven't updated to 9.X yet for no other reason than lack of time to try it, and I don't know whether clang supports Geode well enough, so I can't say anything about -CURRENT. But save for this, Soekris boxes and FreeBSD are a great match. Thank you, Julien -cpghost. Thanks for all your answers! It's to replace our old Linux router, so I think I'll go with a Soekris box! -- No trees were killed in the creation of this message. However, many electrons were terribly inconvenienced.
Re: Grepping through a disk
On Mon, Mar 4, 2013 at 1:36 AM, Polytropon free...@edvax.de wrote: Any suggestion is welcome! How about crawling the metadata, locating each block that is already allocated, and skipping those blocks when you scan the disk? That could reduce the search space significantly. blkls(1) et al. from The Sleuth Kit are your friends. Good luck, -cpghost. -- Cordula's Web. http://www.cordula.ws/
Zlib version in FreeBSD - 3 releases behind?
The Zlib baked into FreeBSD is Zlib 1.2.4, even on 9.1R. However, Zlib moved on to 1.2.7 some time ago, stepping through 1.2.5 and 1.2.6 with bug fixes along the way. Is there any reason for not using Zlib 1.2.7? Thanks. Kris
Re: Grepping through a disk
On Mon, 4 Mar 2013, Polytropon wrote: The file size of the file I'm searching for is less than 10 kB. It's a relatively small text file which got some subsequent additions in the last few days, but hasn't been part of the backup job yet. There have been some good suggestions. I would use a large buffer with dd, say 1M or more, both for speed and to reduce the chance of hitting only part of the search string. For the future, look at sysutils/rsnapshot. Easy to set up, space-efficient, and it provides an easily-accessed file history.
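With a sufficiently distinctive pattern, the per-block shell loop can be skipped entirely by letting grep report byte offsets itself (-b prints the byte offset, -o limits output to the match, -a treats binary input as text; all three are supported by both BSD and GNU grep). A sketch against a scratch file standing in for the raw device; the file name and contents are made up for illustration:

```shell
# disk.img stands in for the raw device (e.g. /dev/ad6).
printf 'junk-junk-junk SOME DISTINCTIVE TEXT more-junk' > disk.img

# -abo: binary-as-text, print byte offset, print only the match.
OFFSET=$(grep -abo 'SOME DISTINCTIVE TEXT' disk.img | cut -d: -f1)
echo "pattern found at byte $OFFSET"

# Convert the byte offset into a dd skip count for a chosen block size,
# to pull out the surrounding region for inspection:
BS=10240
echo "dd if=disk.img bs=$BS skip=$((OFFSET / BS)) count=2"
```

This prints "pattern found at byte 15"; dividing by the block size gives the skip value for carving out the neighborhood with dd.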
Re: backups using rsync
On Mon, 4 Mar 2013, Ronald F. Guilmette wrote: Now, unfortunately, I have just been bitten by the evil... and apparently widely known (except to me)... ``You can't use dump(8) to dump a journaled filesystem with soft updates'' bug-a-boo. Until SUJ has been deemed 100% reliable, I avoid it and suggest others do also. It can be disabled on an existing filesystem from single-user mode. If I use all of the following rsync options... -a, -H, -A, -X, and -S when trying to make my backups, and if I do whatever additional fiddling is necessary to ensure that I separately copy over the MBR and boot loader also to my backup drive, then is there any reason that, in the event of a sudden meteor shower that takes out my primary disk drive while leaving my backup drive intact, I can't just unplug my old primary drive, plug in my (rsync-created) backup drive, reboot and be back in the saddle again, almost immediately, and with -zero- problems? It works. I use this to slow-mirror SSDs to a hard disk, avoiding the speed penalty of combining an SSD with a hard disk in RAID1. Use the latest net/rsync port, and enable the FLAGS option. I use these options, copying each filesystem individually: -axHAXS --delete --fileflags --force-change --delete removes files present on the copy that are not on the original. Some people may want to leave those. --exclude= is used on certain filesystems to skip directories that are full of easily recreated data that changes often, like /usr/obj. Yes, the partitions and bootcode must be set up beforehand. After that, it works. Like any disk redundancy scheme, test it before an emergency. P.P.S. Before anyone asks, no I really _do not_ want to just use RAID as my one and only backup strategy. RAID is swell if your only problem is hardware failures. As far as I know however it will not save your bacon in the event of a fumble-fingers rm -rf * moment. Only frequent and routine actual backups can do that. Yes, RAID is not a backup.
Another suggestion I've been making often: use sysutils/rsnapshot to make an accessible history of files. The archives go on another partition on the mirror drive, which likely has more space than the original. rsnapshot uses rsync with hard links to make an archive that lets you easily get to old versions of files that have changed in the last few hours/days/weeks/months.
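To give an idea of the setup, a minimal rsnapshot.conf might look like the following. The paths and retention counts are illustrative, not from the mail, and rsnapshot requires tabs (not spaces) between fields:

```
# /usr/local/etc/rsnapshot.conf -- illustrative fragment
snapshot_root	/backup/snapshots/

retain	hourly	6
retain	daily	7
retain	weekly	4

# one backup line per tree to archive
backup	/home/	localhost/
backup	/etc/	localhost/
```

Cron then runs e.g. `rsnapshot hourly` and `rsnapshot daily`, and old versions of files appear under /backup/snapshots/daily.0/localhost/... as ordinary directories, hard-linked against unchanged copies in the other snapshots.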
Re: Confused by restore(8) man page example
On Mon, Mar 04, 2013 at 10:08:37AM +, Steve O'Hara-Smith wrote: On Mon, 04 Mar 2013 01:47:24 -0800 Ronald F. Guilmette r...@tristatelogic.com wrote: In the man page for restore(8) I see the following: The -r flag ... can be detrimental to one's health if not used carefully (not to mention the disk). An example: newfs /dev/da0s1a mount /dev/da0s1a /mnt cd /mnt restore rf /dev/sa0 Personally, I utterly fail to see what point the author is attempting to illustrate with the above example. I mean, what part of this, exactly, may be detrimental to one's health? It's an enigma to me. There's nothing wrong with the example. I think "An example:" should start a new paragraph to make it clear that it is not related to the warning. The detrimental effects cut in when you use -r on a filesystem that is not pristine, or at least not in the expected state for restoring an incremental dump. This and the previous reply are correct. This example shows a correct way to use 'restore -r'. The '-r' flag causes it to write wherever you are cd'ed to, without any warning about what you are doing or overwriting. If the directory that is to receive the files from a 'restore -r' has other files in it, you may unexpectedly overwrite some of them. Also, if you are not cd'ed into the correct place (the mount point, for example), using '-r' will quickly write all over whatever directory you are cd'ed to, without warning. In other words, '-r' causes it to splat out everything right where you are, without warning and too fast to interrupt before too much damage is done. I often do a 'restore -r' into an existing (i.e. not newly newfs-ed) directory, but I have to make sure I am clear about what I am doing. For example, I usually keep a large (large for my little stuff) drive mounted as '/work'. Within that filesystem I may create a directory such as './unroll', e.g. '/work/unroll' or some other similar name, and mass-restore a dump into it using 'restore -r' so I can easily shuffle files around from the backup into several new directories. If there are a bunch of destination directories, it is easier this way than doing a 'restore -i'. But, as said, I have to be careful just how I am using it. It works well. Have fun, jerry
Re: Confused by restore(8) man page example
In message 63618304-837e-4b76-8157-d99c744ac...@wolfhut.org, Ben Cottrell tam...@wolfhut.org wrote: I guess the same text in the man page could be read several different ways! The way I read it (which may or may not be correct) is that the example given is an example of how to use it *correctly*. It sounds to me like it's warning against deviating too far from the steps given in the example. I can see as how the text might allow other interpretations, though! Thanks for the response Ben. As others have pointed out, it would probably be less confusing if the material starting with An example: were in a different paragraph. As the text stands now, first we have a sentence that gives a frightening warning about possible mangling of a disk/partition if restore -r is not used correctly, and then immediately following that is An example: with an example of _correct_ usage. I hope and trust that folks can understand my earlier befuddlement. Anyway, I have just now filed a PR suggesting a new paragraph at the appropriate point in the man page. Thanks to all who responded. Regards, rfg
Re: backups using rsync
In message 20130304125634.8450cfaf.free...@edvax.de, Polytropon free...@edvax.de wrote: On Mon, 04 Mar 2013 03:35:30 -0800, Ronald F. Guilmette wrote: Now, unfortunately, I have just been bitten by the evil... and apparently widely known (except to me)... ``You can't use dump(8) to dump a journaled filesystem with soft updates'' bug-a-boo. There are other tools you can use, for example tar or cpdup or rsync, as you've mentioned in the subject. tar I already knew about, but I think you will agree that it has lots of limitations that make it entirely inappropriate for mirroring an entire system. This cpdup thing is entirely new to me. Thanks for mentioning it! I really never heard of it before, but I just now installed it from ports, and I'm perusing the man page. It looks very promising. Too bad it doesn't properly handle sparse files, but oh well. That's just a very minor nit. (Does it properly handle everything else that rsync claims to be able to properly handle, e.g. ACLs, file attributes, etc., etc.?) The same problems that apply when dumping live systems can bite you using rsync, What problems are we talking about, in particular? I am guessing that if I use rsync, then I *won't* encounter this rather annoying issue/problem relating to UFS filesystems that have both soft updates and journaling enabled, correct? but support for this on file system level seems to be better in rsync than what dump does on block level. What exactly did you mean by this? If I use all of the following rsync options...
-a, -H, -A, -X, and -S when trying to make my backups, and if I do whatever additional fiddling is necessary to ensure that I separately copy over the MBR and boot loader also to my backup drive, then is there any reason that, in the event of a sudden meteor shower that takes out my primary disk drive while leaving my backup drive intact, I can't just unplug my old primary drive, plug in my (rsync-created) backup drive, reboot and be back in the saddle again, almost immediately, and with -zero- problems? You would have to make sure _many_ things are consistent on the backup disk. Well, this is what I am getting at. This is/was the whole point of my post and my question. I want to know: What is that set of things, exactly? Regarding terminology, that would make the disk a failover disk OK. Thank you. I will henceforth use that terminology. The disk would need to have an initialized file system and a working boot mechanism, both things rsync does not deal with Check and check. I implicitly understood the former, and I explicitly mentioned the latter in my original post in this thread. But is there anything else, other than those two things (which, just as you say, are both clearly outside of the scope of what rsync does)? Anything else I need to do or worry about in order to be able to use rsync to create and maintain a full-blown fully-working system failover drive? If so, I'd much rather learn about it now... you know... as opposed to learning about it if and when I actually have to _use_ my failover drive. Regards, rfg
Re: backups using rsync
In message alpine.bsf.2.00.1303040645420.66...@wonkity.com, Warren Block wbl...@wonkity.com wrote: Until SUJ has been deemed 100% reliable, I avoid it and suggest others do also. It can be disabled on an existing filesystem from single-user mode. hehe Silly me! What do *I* know? I just go about my business and try not to create too much trouble for myself. To be honest and truthful, I have to say that this journaling stuff entirely snuck up on me. I confess... I wasn't paying attention (to the world of FreeBSD innovations) and thus, when I moved myself recently to 9.x (from 8.3), I did so without even having been aware that the new filesystems I was creating during my clean/fresh install of 9.1 had journaling turned on by default. (As the saying goes, I didn't get the memo.) Not that I mind, really. It sounds like a great concept and a great feature, and I was happy to have it right up until the moment that dump -L told me to go pound sand. :-( So, um, I was reading about this last night, but I was sleepy and my eyes glazed over... Please remind me, what is the exact procedure for turning off the journaling? I boot to single user mode (from a live CD?) and then what? Is it tunefs with some special option? If I use all of the following rsync options... -a, -H, -A, -X, and -S when trying to make my backups, and if I do whatever additional fiddling is necessary to ensure that I separately copy over the MBR and boot loader also to my backup drive, then is there any reason that, in the event of a sudden meteor shower that takes out my primary disk drive while leaving my backup drive intact, I can't just unplug my old primary drive, plug in my (rsync-created) backup drive, reboot and be back in the saddle again, almost immediately, and with -zero- problems? It works. I use this to slow-mirror SSDs to a hard disk, avoiding the speed penalty of combining an SSD with a hard disk in RAID1. Great! Thanks Warren. Use the latest net/rsync port, and enable the FLAGS option.
I use these options, copying each filesystem individually: -axHAXS --delete --fileflags --force-change Hummm... I guess that I have some non-current rsync installed. In the man page I have, there is no mention of any --force-change option. What does it do? Yes, the partitions and bootcode must be set up beforehand. After that, it works. Good to know. Thanks again Warren. Like any disk redundancy scheme, test it before an emergency. Naw. I like to live dangerously. :-) Regards, rfg
Re: Confused by restore(8) man page example
In message 20130304151707.gc76...@jerrymc.net, Jerry McAllister jerr...@msu.edu wrote: This and the previous reply are correct. This example shows a correct way to use 'restore -r' The '-r' flag causes it to write where you are cd-ed to without any warning what you are doing or overwriting. If there are other files in the directory that is to receive the files from a 'restore -r', you may unexpectedly overwrite some of them. I'm thinking: If it is worth putting a warning into the man page, perhaps it is worth putting a warning into the code itself, to protect the unwary. Anybody here ever used Clonezilla? A nice useful tool. When Clonezilla runs, and when it is just about to overwrite a target drive, it first asks you explicitly "Do you really want to proceed (Y/n)?" After you respond Y it asks you again, one more time, the same question. I for one have never felt put upon by these safety catches. I know they are there for my own protection. Maybe restore should have something similar, along with some special option to disable the extra security check, you know, for use in non-interactive batch scripts. Also, if you are not cd-ed in to the correct place (the mount point, for example) using the '-r' will quickly write all over whatever directory you are cd-ed to without warning. In other words '-r' causes it to splat out everything right where you are without warning and too fast to interrupt it before too much damage is done. I understand. This is quite obviously different from rm -fr *, but I can see how it could be equally disastrous. Regards, rfg
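The Clonezilla-style double confirmation described above is easy to sketch in sh. This is an illustration of the idea only, not a patch to restore(8); the function name and demo wiring are made up:

```shell
# confirm_twice: ask the same yes/no question twice, as Clonezilla does.
# Succeeds only if both answers are "y" or "Y".
confirm_twice() {
    for pass in 1 2; do
        printf 'Do you really want to proceed (y/N)? ' >&2
        read ans || return 1
        case "$ans" in
            y|Y) ;;
            *)   return 1 ;;
        esac
    done
    return 0
}

# Demo: feed the answers on stdin instead of typing them.
RESULT=$(printf 'y\ny\n'  | confirm_twice && echo proceeding || echo aborted)
RESULT2=$(printf 'y\nn\n' | confirm_twice && echo proceeding || echo aborted)
echo "$RESULT / $RESULT2"
```

A hypothetical -y (or similar) flag could skip the prompts for the non-interactive batch case, matching the "special option to disable the extra security check" suggested in the mail.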
Re: backups using rsync
On 03/04/2013 05:35 AM, Ronald F. Guilmette wrote: As a result of this past Black Friday weekend, I now enjoy a true abundance of disk space, for the first time in my life. I wanna make a full backup, on a weekly basis, of my main system's shiny new 1TB drive onto another 1TB drive that I also picked up cheap back on Black Friday. I've been planning to set this up for some long time now, but I've only gotten 'round to working on it now. Now, unfortunately, I have just been bitten by the evil... and apparently widely known (except to me)... ``You can't use dump(8) to dump a journaled filesystem with soft updates'' bug-a-boo. You can use dump(8) to dump a SU-journaled filesystem; you just cannot create a snapshot. This implies that dump(8) will be run against the live and possibly changing filesystem, which can lead to issues with the consistency of the contents of files thus dumped; but not necessarily with the consistency of the dump itself. Any tool that backs up a live filesystem, such as rsync or tar, will have these issues. Sigh. The best laid plans of mice and men... I _had_ planned on using dump/restore and making backups from live mounted filesystems while the system was running. But I really don't want to have to take the system down to single-user mode every week for a few hours while I'm making my disk-to-disk backup. So now I'm looking at doing the backups using rsync. I've used rsync to back up Linux and FreeBSD machines daily for years, and I've never had a problem with the backups nor subsequent restorations. Especially for restorations of the laptop that ate SSDs. Having a decent snapshot capability on the backup target filesystem can help a lot if you want to maintain multiple sparse backup revisions; otherwise, you're stuck using creative scripting around rsync's --link-dest option. 
I see that rsync can nowadays properly cope with all sorts of oddities, like fer instance device files, hard-linked files, ACLs, file attributes, and all sorts of other unusual but important filesystem thingies. That's good news, but I still have to ask the obvious question: If I use all of the following rsync options... -a, -H, -A, -X, and -S when trying to make my backups, and if I do whatever additional fiddling is necessary to ensure that I separately copy over the MBR and boot loader also to my backup drive, then is there any reason that, in the event of a sudden meteor shower that takes out my primary disk drive while leaving my backup drive intact, I can't just unplug my old primary drive, plug in my (rsync-created) backup drive, reboot and be back in the saddle again, almost immediately, and with -zero- problems? There will /always/ be problems. The best you can do is become familiar with the tools and procedures so you can tackle them when they happen. My suggestion for something that you can use as a warm standby is to create it as a warm standby: go through the entire installation procedure again for the backup drive, and then use rsync or suchlike to periodically synchronize the second filesystem with the first. When you update the boot code on one, do so on the other. Be extremely careful if you decide to do this with both disks attached to the same machine: if you use geom labels (gpt, ufs, glabel, et alia) or dynamically numbered storage devices, you can easily run into a situation where a reboot with both devices attached suddenly starts using your backup instead without you realizing it, or flips back and forth. P.S. My apologies if I've already asked this exact same question here before. I'm getting a sense of deja vu... or else a feeling that I am often running around in circles, chasing my own tail. P.P.S. Before anyone asks, no I really _do not_ want to just use RAID as my one and only backup strategy.
RAID is swell if your only problem is hardware failures. As far as I know however it will not save your bacon in the event of a fumble fingers rm -rf * moment. Only frequent and routine actual backups can do that. -- Fuzzy love, -CyberLeo Furry Peace! - http://www.fur.com/peace/
Re: backups using rsync
- Original Message - On Mon, 04 Mar 2013 03:35:30 -0800, Ronald F. Guilmette wrote: Now, unfortunately, I have just been bitten by the evil... and apparently widely known (except to me)... ``You can't use dump(8) to dump a journaled filesystem with soft updates'' bug-a-boo. There are other tools you can use, for example tar or cpdup or rsync, as you've mentioned in the subject. Or, if you want to be ambitious, you could install something like sysutils/backuppc (one of its methods is rsync; it's what I use for all the systems I back up with it: Windows, Linux, Mac OS X). You would then get more than just the weekly rsync, though it could probably be made to do only fulls every week. But you could potentially then restore from an older full. I do system fulls of my other systems to it. I can't do a bare-metal restore, but it can get me back up and running faster. E.g.: I recently had hard drive failures in a couple of FreeBSD systems. I did a fresh install, and at first I restored /home and /usr/local (and some other dirs, like /var/db/pkg and /var/db/ports), and then other dirs and files as I found things missing. I had to rebuild a handful of ports after that, and then things were good. The second system didn't go as well, because it had been silently corrupting things for a long time before, but I still did the same kind of restore at first, and ended up rebuilding all the ports to get things good again. I'm not sure, if I lost the system disk, whether I could recover from a local backuppc... but I have my old backuppc system backing up most of my current system (mainly omitting the backuppc pool; I think my backup storage requirements would grow exponentially if I didn't; my main backuppc pool is currently 6300G out of a 7300G zpool). But I've suffered bit rot on the old backuppc pool in the past, when it was a RAID 1+0 array; it's probably worse now that it's a 2.7TB volume without RAID (the only volume on that system that isn't mirrored).
Though I wonder whether I want to try ZFS on Linux again, or replace it with FreeBSD. I was faced with something like this on my Windows box, where eventually I ended up writing off restoring from the local backup (a commercial Time Machine-like product). The mistake was using a Windows fake-RAID5 external array as my backup drive, and losing the system due to problems in the fake RAID. I did briefly put together a CentOS live CD that could access the array, but the drives I copied the data to promptly failed on me shortly after I had broken the array and turned them into a raidz pool. Someday I need to get back to going through the disk image of the failed system drive and recovering as much as possible from that. The box that was my Windows desktop is now my FreeBSD desktop. Lawrence
Re: backups using rsync
On Mon, 4 Mar 2013, Ronald F. Guilmette wrote: So, um, I was reading about this last night, but I was sleepy and my eyes glazed over... Please remind me, what is the exact procedure for turning off the journaling? I boot to single user mode (from a live CD?) and then what? Is it tunefs with some special option? Just boot in single-user mode so all the filesystems are unmounted or mounted read-only. Then use 'tunefs -j disable /dev/...'. It will also mention the name of the journal file, which can be deleted. Use the latest net/rsync port, and enable the FLAGS option. I use these options, copying each filesystem individually: -axHAXS --delete --fileflags --force-change Hummm... I guess that I have some non-current rsync installed. In the man page I have, there is no mention of any --force-change option. What does it do? It makes rsync affect user/system immutable files/dirs. It is probably only included in the man page when the port is built with the FLAGS option set. An additional note: the script that runs my rsync backup also modifies the mirrored /etc/fstab to use the appropriate labels for the backup filesystems.
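That last fstab tweak can be sketched as follows. The label names and the "mnt" mount point are invented for illustration; Warren's actual script is not shown in the mail:

```shell
# Pretend "mnt" is where the backup filesystem is mounted after rsync,
# with a copied fstab that still names the original disk's labels.
mkdir -p mnt/etc
cat > mnt/etc/fstab <<'EOF'
/dev/ufs/root   /       ufs     rw      1       1
/dev/ufs/var    /var    ufs     rw      2       2
EOF

# Point the copy's fstab at the backup disk's own labels, so the
# standby mounts itself rather than the original drive.
# (-i.bak works with both BSD and GNU sed.)
sed -i.bak 's|/dev/ufs/|/dev/ufs/backup|' mnt/etc/fstab
cat mnt/etc/fstab
```

This rewrites the entries to /dev/ufs/backuproot and /dev/ufs/backupvar, keeping the original in fstab.bak; the hypothetical "backup" prefix would match whatever labels were actually written to the standby disk's partitions.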
Re: Confused by restore(8) man page example
On Monday, March 04, 2013 at 01:12:41PM -0800, Ronald F. Guilmette wrote: I'm thinking: If it is worth putting a warning into the man page, perhaps it is worth putting a warning into the code itself, to protect the unwary. Anybody here ever used Clonezilla? A nice useful tool. When Clonezilla runs, and when it is just about to overwrite a target drive, it first asks you explicitly "Do you really want to proceed (Y/n)?" After you respond Y it asks you again, one more time, the same question. I for one have never felt put upon by these safety catches. I know they are there for my own protection. ... In the old days of UNIX V7, when newfs(8) was still mkfs(8), there was also a last and final question, "Last chance before scribbling on disk.", to answer. And even after you hit ENTER to confirm, there was an internal wait of some 5 secs to let you interrupt with Ctrl-C in case of error. Just remembering those days :-) matthias -- Matthias Apitz | /\ ASCII Ribbon Campaign: www.asciiribbon.org E-mail: g...@unixarea.de | \ / - No HTML/RTF in E-mail WWW: http://www.unixarea.de/ | X - No proprietary attachments phone: +49-170-4527211 | / \ - Respect for open standards