Thanks for your feedback!
On 03.01.21 16:19, backu...@kosowsky.org wrote:
Dan Johansson wrote at about 14:24:25 +0100 on Sunday, January 3, 2021:
> On 30.06.20 06:51, backu...@kosowsky.org wrote:
> > Over the years, many have asked and struggled with backing up remote
> >
I cannot get it to work. :-(
So now I have two questions.
a) What is the syntax of the "Conf{ClientShareName2Path}" hash? Examples?
b) How shall the rsyncd be configured (rsyncd.conf, rsyncd.secrets)?
Regards,
--
Dan Johansson,
***
This message is printed on 100% recycled electrons!
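For anyone finding this thread in the archive, a minimal sketch of both pieces (BackupPC 4.x; the share name, path, and password are made up — double-check the comments in the stock config.pl):

# per-host config on the BackupPC server:
$Conf{XferMethod}     = 'rsyncd';
$Conf{RsyncdUserName} = 'backuppc';
$Conf{RsyncdPasswd}   = 'secret';
$Conf{RsyncShareName} = ['data'];
# hash mapping each share name to the path it serves on the client
# (used when the share name is not itself the path):
$Conf{ClientShareName2Path} = { 'data' => '/srv/data' };

# /etc/rsyncd.conf on the client:
uid = root
gid = root
read only = yes
secrets file = /etc/rsyncd.secrets
[data]
    path = /srv/data
    auth users = backuppc

# /etc/rsyncd.secrets on the client (must be mode 600):
backuppc:secret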
/backuppc.pl?host=192.168.8.100> (rsync
error: error in socket IO (code 10) at clientserver.c(125)
[Receiver=3.1.2.0]
Thanks in advance for any help ;)
Dan
nes (my machine specifically being one of them).
On Tue, Jul 11, 2017 at 1:40 PM, Tim Evans wrote:
> On 07/11/2017 01:53 PM, Dan LeVasseur wrote:
>
I too am having this issue again with version 4.1.3 fresh from GIT. I did
check the version of the file in /bin and it does show as being from 4.1.3.
Anything I can run to help you diagnose the problem?
I actually have this problem using SMB against Windows clients. 2-3 of
them just randomly run forever until I stop them. The next time,
it'll run fine.
On Wed, Nov 2, 2016 at 10:23 AM, Les Mikesell wrote:
> On Wed, Nov 2, 2016 at 7:29 AM, Christian Völker
> wrote:
> > Hi all,
> >
> > B
t-configuration-backing-up.html
>>
>>
>>
>> From: Kent Tenney [mailto:kten...@gmail.com]
>> Sent: Monday, September 26, 2016 10:44 PM
>> To: General list for user discussion, questions and support <backuppc-users@lists.sourceforge.net>
>> Sub
It does appear the fix was committed; not sure what that means for the
version in 16.10
https://github.com/backuppc/backuppc/commit/d7a8403b537ed0068e862abc20065e98209527b7
On Mon, Sep 26, 2016 at 2:40 PM, Kent Tenney wrote:
> Howdy,
>
> I'm having problems using BackupPC on Ubuntu 16.04
>
> $ aptitude
You probably received an updated smbclient at the same time. From what
I've read (definitely not a pro at this) smbclient changed the output text
that BackupPC uses to determine if a backup is completed. Unfortunately
this change hasn't made it into BackupPC.
I found this via searching around an
same thing, but via a different method.
I have ssh running on an alternate port, running out of xinetd. xinetd
restricts what IPs can connect. The normal sshd doesn't allow root
login; this one does.
You could also accomplish more or less the same thing via, e.g., iptables
rules, or
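For the archive, a sketch of that xinetd setup (the port and allowed IP are made up, not from the original post):

# /etc/xinetd.d/ssh-backup -- second sshd on an alternate port,
# reachable only from the backup server
service ssh-backup
{
    type        = UNLISTED
    port        = 2222
    socket_type = stream
    wait        = no
    user        = root
    only_from   = 192.0.2.10
    server      = /usr/sbin/sshd
    server_args = -i -o PermitRootLogin=yes
}

The iptables variant would be along the lines of:

iptables -A INPUT -p tcp --dport 2222 -s 192.0.2.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 2222 -j DROP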
a. This is a
positive change, but a bummer that it's bitten you.
danno
--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan
Can BackupPC 4 use either this, or the native rsync?
thanks
danno
--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan
--
Dan Pritts
ICPSR Computing & Network Services
s are limited by the speed of the tree walk
on the client's disk,
looking for files to back up. Unless there are a lot of changes, this
won't stress the server.
danno
--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan
ea
what can I tweak..
After all tweaks the stats are: 16GB in 512min! It says 1.8Mb/s.
**
--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan
--
Dan Pritts
ICPSR Computing & Network Services
Mauro Condarelli <mc5...@mclink.it>
March 17, 2016 at 10:07 PM
Thanks Dan,
after fiddlin
--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan
+1 (734) 615-9529
Not sure about the suexec & what program needs what permissions.
Hmm, if your http basic auth user name doesn't match a user name that
is listed in the hosts file, you will not have access to the given host.
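for the archive, a made-up hosts file line showing where that user name has to appear (second column is the dhcp flag):

# host      dhcp    user    moreUsers
syno0       0       mauro   extrauser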
Mauro Condarelli <mc5...@mclink.it>
March 18, 2016 at 11:30 AM
selinux failures.
both strace & selinux presume your synology is running linux - if not,
well, those won't work.
Mauro Condarelli <mc5...@mclink.it>
March 17, 2016 at 12:44 PM
Thanks Dan,
I have a config.pl:
backuppc@syno0:~/BackupPC/conf$ ls -la
total 96
drwxr-
--
Dan Pritts
ICPSR Computing & Network Services
Can backuppc run rsync "natively"
somehow, that is, without File::RsyncP but rather by calling rsync on
the server?
thanks
danno
--
Dan Pritts
ICPSR Computing & Network Services
University of Michigan
+1 (734) 615-9529
Thank you,
Dan McGurk
Ransom Memorial Hospital
IT Department
785.229.8444
[ skipped 15 lines ]
tarExtract: Done: 0 errors, 269815 filesExist, 267869130169 sizeExist,
91347626064 sizeExistComp, 270525 filesTotal, 269039345975 sizeTotal
My thinking is if I002.FCS is locked, it should just skip and move
to the next file...not sure what I am doing wrong.
Thank you,
Dan
contains a LOT of files with names starting with J-Z, but none of
those are getting backed up.
Thank you,
Dan McGurk
Ransom Memorial Hospital
IT Department
785.229.8444
al
Backup aborted ()
I am not sure what all to post for you to help, as I am new to this.
Thanks,
Dan
Alexander,
Thanks, that was exactly what I needed. In my particular case I had to set
just the username per host, since I had the email domain set in
EMailUserDestDomain.
Dan
-Original Message-
From: Alexander Moisseev [mailto:mois...@mezonplus.ru]
Sent: Wednesday, January 08
t pingable, it’s considered okay that it hasn’t been backed up?
Like I said, that’s not the behavior I want but I’m not sure how the program is
actually supposed to work.
Dan
From: Peter Major [mailto:pe...@major.me.uk]
Sent: Saturday, January 04, 2014 3:57 AM
To: General list for user
for a week whether the
target/client was offline or not. But just because I expect it doesn't mean
that is what the authors designed.
Any ideas?
Dan
On 29.10.2013 14:50, Timothy J Massey wrote:
> Dan Johansson wrote on 10/27/2013 02:26:06 PM:
>
>>> Have you checked the Event Viewer? It usually shows you what's going
> on
>>> with rsync...
>
> This seemed to have gotten missed. Is there anything
On 26.10.2013 23:32, Timothy J Massey wrote:
> Dan Johansson wrote on 10/26/2013 08:37:48 AM:
>
>> Any suggestions on
>> a) how to find out why rsyncd dies in the first place
>
> Not really: I have never run rsync on Windows 8. I *have* done it on
> Windows Serve
e rsyncd restart even if there
are a .pid and .lock file around
Regards,
--
Dan Johansson, <http://www.dmj.nu>
***
This message is printed on 100% recycled electrons!
***
>, but they all fail.
Any suggestion on what I am doing wrong and how to solve it?
Regards,
--
Dan Johansson, <http://www.dmj.nu>
*
?
How have you done it (samba, rsync, rsyncd)?
Regards,
--
Dan Johansson, <http://www.dmj.nu>
***
This message is printed on 100% recycled electrons!
SI_KERNEL, si_addr=0} (Segmentation fault) ---
+++ killed by SIGSEGV +++
Segmentation fault
The mountpoint is there (owned by backuppc and mode 777):
$ ls -pal /mnt/backuppc/
total 0
drwxrwxrwx 2 backuppc backuppc 48 Apr 6 14:22 ./
drwxr-xr-x 9 root root 240 Apr 6 14:22 ../
Any suggestions o
On Saturday 31 March 2012 10.45:25 Les Mikesell wrote:
> On Sat, Mar 31, 2012 at 3:01 AM, Dan Johansson wrote:
> >>
> > The talk about backuppc-fuse sounds VERY interesting (instead of running
> > "BackupPC_tarCreate -h host1 -n 887 -s /usr . | tar xf -" to a
e different
versions around (Pieter Wuille's from Nov 2009 and unixtastic.com from Nov
2008).
Are there other, newer ones around? Does the one from Pieter work with BPC 3.2.1?
Regards,
--
Dan Johansson, <http://www.dmj.nu>
hive" I get a tar archive (restore.tar) containing
the ru.mo file which then contains 4154 bytes of 0's:
$ ll restore.tar
-rw-r--r-- 1 dan users 10240 Feb 19 15:33 restore.tar
$ tar xvf restore.tar
./ru.mo
$ ll ru.mo
-rw-r--r-- 1
On Saturday 18 February 2012 11.08:23 Les Mikesell wrote:
> On Sat, Feb 18, 2012 at 10:53 AM, Dan Johansson wrote:
> > > Use the BackupPC_tarCreate command line program and pipe it directly
> > > to a 'tar -xf -' to extract instead of saving the output in
On Saturday 18 February 2012 10.18:31 Les Mikesell wrote:
> On Sat, Feb 18, 2012 at 9:17 AM, Dan Johansson wrote:
> > I am looking for a way to do an automatic redirected restore.
> > At the moment I am playing with an Archive Host and are doing the
> > following:
s finished and I can delete the tar
file). Is there/do you know of any way to "skip" the use of a tar file and
directly "export/restore" the backup to a "redirected" location.
With redirected I mean it does not get restored to the original host and or
loca
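For the archive, the pipe-over-ssh variant of the command quoted earlier in the thread (target host and path are made up; an untested sketch):

# run as the backuppc user on the BackupPC server; extracts share /usr
# of backup #887 of host1 into /restore on otherhost, no tar file on disk
BackupPC_tarCreate -h host1 -n 887 -s /usr . \
    | ssh root@otherhost 'tar -xpf - -C /restore'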
On Monday 30 January 2012 11.13:04 Flako wrote:
> 2012/1/29 Dan Johansson :
> > Hi,
> >
> > Is it possible to get the Backup# in the DumpPostUserCmd?
> > I have tried with $backup and $backupnumber but none work.
> > I also have tried to get the last Backup# fr
Hi,
Is it possible to get the Backup# in the DumpPostUserCmd?
I have tried with $backup and $backupnumber but neither works.
I also have tried to get the last Backup# from the backups file, but this file
is written _after_ DumpPostUserCmd is finished.
Any suggestions?
--
Dan Johansson, <http://www.dmj.nu>
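For the archive, one possible workaround (an untested sketch, assuming default install paths): if the backups file really is rewritten only after DumpPostUserCmd runs, as described above, the number of the dump that just finished should be the file's last entry plus one:

#!/bin/sh
# hypothetical helper, wired up as e.g.:
#   $Conf{DumpPostUserCmd} = '/usr/local/bin/post_dump.sh $host';
host=$1
# first tab-separated field of the last line = most recent recorded backup#
# (for the very first backup there is no previous entry; this prints 1, not 0)
last=$(awk -F'\t' 'END { print $1 }' "/var/lib/backuppc/pc/$host/backups")
echo "backup #$((last + 1)) of $host finished" >> /tmp/post_dump.log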
his mailing-list).
2) Mount my S3 bucket on /mnt/s3 using the s3ql filesystem
(http://code.google.com/p/s3ql/) which provides among other things encryption.
3) rsync the "last-full-copy" to /mnt/s3
4) umount /mnt/s3
Any thoughts on this procedure?
Pros/Cons/No-no's?
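For the archive, steps 2-4 above as a script (bucket name and source path are made up; mount.s3ql/umount.s3ql are s3ql's standard helpers):

#!/bin/sh
# 2) mount the S3 bucket via s3ql (encryption is handled by s3ql)
mount.s3ql s3://my-backuppc-bucket /mnt/s3
# 3) push the last full copy across
rsync -aH --delete /path/to/last-full-copy/ /mnt/s3/
# 4) unmount, flushing s3ql's caches
umount.s3ql /mnt/s3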
help.
Any suggestions what could be wrong?
Regards,
--
Dan Johansson, <http://www.dmj.nu>
***
This message is printed on 100% recycled electrons!
forms better than RAID6. Or vice versa.
depends on your use case.
broadly speaking, if you are doing large reads and writes, or doing
mostly reads, raid5/6 will be faster. for random i/o raid10 will be
faster.
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1
On Wednesday 28 September 2011 20.26:25 Arnold Krille wrote:
> On Wednesday 28 September 2011 18:59:38 Tim Fletcher wrote:
> > On Wed, 2011-09-28 at 17:30 +0200, Dan Johansson wrote:
> > > I have a laptop that is dual-boot (Linux and WinXP) and gets the same
> > > IP fr
with one small
exception - I always get a "Backup-Failed" message for one of them each
night.
Does someone have a suggestion on how to solve this in a more "beautiful" way?
Regards,
--
Dan Johansson, <http://www.dmj.nu>
***
the default 100. RHEL & derivatives support a kernel boot option
"divider=10" (I think) to do this.
Bottom line, there is measurable I/O overhead with ESX(i) but it's
generally very low.
vmxnet3 & pvscsi are definitely a win. as is running under vmware in
On Sunday 04 September 2011 19.17:17 Timothy J Massey wrote:
> Dan Johansson wrote on 09/04/2011 10:04:16 AM:
> > As you can see it says that the Pool is 0.00GB. This can not be correct
>
> as
>
> > there are data in the pool and I can do a restore. Even after a
>
pool and I can do a restore. Even after a backup it still
says 0.00GB.
Any suggestions on what could be wrong?
Regards,
--
Dan Johansson, <http://www.dmj.nu>
***
This message is printed on 100% recycled electrons!
with 3.2.1)?
Are there some "gotchas" with this upgrade?
Regards,
--
Dan Johansson, <http://www.dmj.nu>
***
This message is printed on 100% recycled electrons!
It's unlikely, because dump/restore do not store things on a file-by-file
basis. To do so you'd have to unwrap the dumpfile on the backup server and
break it up into individual files to store.
On Jul 29, 2011, at 5:16 PM, Rory Toma wrote:
> Are there any plans to add dump/restore as a supported
ntroller AND the OS, don't assume that all SATA does it.
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
efault.
Make sure that your system is using the current version - see the mkfs.xfs man
page for more.
If you have an existing XFS filesystem it probably isn't on, but it appears you
may be able to
change that with xfs_admin.
I would make sure I had a backup of the filesystem b
grep rsync (and the following to confirm)
I hope this helps anybody else who might have this issue.
Cheers,
_______
Dan Lavu
System Administrator - Emptoris, Inc.
www.emptoris.com Office: 703.995.6052 - Cell: 703.296.0645
-Original Message-
,
___
Dan Lavu
System Administrator - Emptoris, Inc.
www.emptoris.com Office: 703.995.6052 - Cell: 703.296.0645
-Original Message-
From: Gerald Brandt [mailto:g...@majentis.com]
Sent: Thursday, April 28, 2011 1:51 PM
To: General list for user
ds" if it calculated size/time which
is inclusive of the RMAN (Database Export) backup. This host (above) boggles me
though, these are static files that are being transferred and all other 28
hosts transfer just fine.
Thanks in advance for any input or troubleshooting steps.
______
On Monday 15 November 2010 20.05:31 Les Mikesell wrote:
> On 11/15/2010 12:06 PM, Dan Johansson wrote:
> > On Monday 15 November 2010 18.49:47 Dan Johansson wrote:
> >> Hi, I am new to this list so please be kind if this has already been
> >> answered...
> >>
>
On Dec 7, 2010, at 2:44 PM, Robin Lee Powell wrote:
> On Tue, Dec 07, 2010 at 02:18:51PM -0500, Dan Pritts wrote:
>> umount /var/lib/backuppc
>> dd if=/dev/onedisk of=/dev/someotherdisk bs=1M
>
> Only works if you have identical disks, which is hard when you've
>
that there are just too many hard links on a backuppc data store for this to
work.
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
On Monday 15 November 2010 18.49:47 Dan Johansson wrote:
> Hi, I am new to this list so please be kind if this has already been
> answered...
>
> After updating perl from 5.8.8 to 5.12.2 my BackupPC installation stopped
> working as suidperl is no longer provided.
> I have n
, OR USE -u AND UNDUMP!
**
Any suggestions?
Regards,
--
Dan Johansson, <http://www.dmj.nu>
***
This message is printed on 100% recycled electrons!
On Oct 25, 2010, at 3:39 PM, Rob Poe wrote:
> I'm having an irritating problem with BackupPC for a client of mine.
>
> They're still running some Netware (yes, I know ...), and the
> File::RsyncP perl module is barfing on the Netware rsync.
>
> https://rt.cpan.org/Public/Bug/Display.html?id=61
> When tar makes full backups, does-it transfer everything even afer
> the first full backup ? Internet connection being really slow, this is
> not really an option...
I believe it will transfer everything for each full backup.
You can specify lists of files to exclude; I would suggest that you o
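The post is cut off above; for the archive, a minimal example of such an exclude list in the per-host config (paths are illustrative):

# '*' applies the list to every share backed up for this host
$Conf{BackupFilesExclude} = {
    '*' => ['/proc', '/sys', '/tmp', '/var/cache'],
};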
I agree with your general praise, BackupPC works very well for us in our
environment, which is maybe half your size. Due to your large size, I'll leave
you with one thought:
One concern I've always had with backuppc is what would happen if i had a
disaster and had to restore everything from ba
On Oct 1, 2010, at 10:54 AM, Wayne Walker wrote:
> BackupPC uses rsync as a transport. Does it use any of rsync's smarts
> to prevent downloading unchanged files? If I run 2 full backups back
> to back, does it pull the entire 90 GB both times?
Make sure to turn on rsync checksum-caching. It d
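This post is cut off too; for the archive, with the File::RsyncP-based transport checksum caching is enabled by adding the fixed checksum seed to both rsync argument lists in config.pl, roughly:

$Conf{RsyncArgs} = [
    # ... the stock arguments, plus:
    '--checksum-seed=32761',
];
$Conf{RsyncRestoreArgs} = [
    # ... likewise:
    '--checksum-seed=32761',
];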
Oh, for anybody who might have an issue with Compress::Zlib, here is a
solution that worked for me.
http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/backuppc-cant-find-compress-zlib-after-recent-update-on-cen-106280/
Dan Lavu
System Administrator
74 subtests failed, 98.58%
okay.
make: *** [test_dynamic] Error 255
/usr/bin/make test -- NOT OK
Running make install
make test had returned bad status, won't install without force
###
r
> the internet initially)
>
> Any ideas on the best practice to go about this?
One way would be to attach the USB drive to a host in the data center,
then configure backuppc to think that the USB drive is the remote host
(fiddle with host name aliases, set filesystems to backup app
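For the archive, roughly what that looks like (mount point and pseudo-host name are invented; a sketch, not tested):

# hosts file gets a pseudo-host "usbdrive"; then in usbdrive.pl:
$Conf{ClientNameAlias} = 'localhost';       # the "remote host" is really local
$Conf{XferMethod}      = 'tar';
$Conf{TarShareName}    = ['/mnt/usbdrive']; # filesystem to back up
# $Conf{TarClientCmd} also needs its ssh wrapper removed for a local
# filesystem -- see the localhost examples in the stock config.pl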
with you it's OK with me. :)
If you are already committed to ZFS dedup for other reasons,
more power to you and I'm sure this makes good sense, but
it doesn't seem to be worth it just for backuppc.
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 |
saved. However saving them may not save that much time
> since the full runs will verify the contents against the source anyway.
it probably won't save time but it might save bandwidth, which might help
in his situation.
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-7
http://www.globalscaletechnologies.com/p-31-guruplug-server-standard.aspx
With a guruplug you can do SMB, NFS, or iSCSI with debian on the device and
connect your external USB drives via USB2. Device has 1 Gigabit ethernet
and 2 USB2 connections for $99. $129 gets you another Gigabit ethernet a
According to the author's site, it is still much slower than a native
filesystem.
As with any FUSE-based filesystem, the copying of data from userspace to
kernel and back adds enough latency to severely affect performance.
BackupPC is extremely IO-dependent, which is where FUSE is crippled.
Oh, just as a reminder, you can do an external journal on ext3 and ext4 as
well as xfs.
On Sat, Mar 6, 2010 at 9:37 AM, dan wrote:
>
>
> On Sat, Mar 6, 2010 at 1:07 AM, Eric Persson wrote:
>
>> dan wrote:
>> > If you are using EXT3 or XFS then I suggest you use an
On Sat, Mar 6, 2010 at 1:07 AM, Eric Persson wrote:
> dan wrote:
> > If you are using EXT3 or XFS then I suggest you use an external journal.
> > get yourself a small SSD or a small 15K RPM disk. You could use a regular
> > disk if you like but the faster the better.
>
If you are using EXT3 or XFS then I suggest you use an external journal.
get yourself a small SSD or a small 15K RPM disk. You could use a regular
disk if you like but the faster the better.
(EXT3)Destroy the journal and re-create it on the extra disk.
#unmount the backuppc disk
#on the journal dev
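The post is cut off above; the usual command sequence (from the tune2fs/mke2fs man pages -- device names here are examples, double-check yours) is:

umount /var/lib/backuppc
# drop the internal journal
tune2fs -O ^has_journal /dev/sdb1
# make the fast disk a dedicated journal device (block size must match)
mke2fs -O journal_dev /dev/sdc1
# attach it as the external journal, then remount
tune2fs -j -J device=/dev/sdc1 /dev/sdb1
mount /var/lib/backuppc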
On Sun, Feb 28, 2010 at 3:06 PM, Johannes H. Jensen
wrote:
> Actually, one of the files in question is the format file of a .svn/
> directory which appears many times in one of the backed up
> filesystems. Since it will have the same md5 sum, BackupPC links it to
> the same file in cpool...
>
I h
your input,
>
> On Sat, Feb 27, 2010 at 3:38 AM, dan wrote:
> > if [ -e /var/lib/backuppc/testfile ];
> >then rsync ;
> >else echo "uh oh!";
> > fi
> >
> > should make sure that the filesystem is mounted.
>
> Yes, that's
if [ -e /var/lib/backuppc/testfile ];
then rsync ;
else echo "uh oh!";
fi
should make sure that the filesystem is mounted.
you could also first do a try run
rsync -avnH --delete /source /destination > /tmp/list
then identify what will be deleted:
cat /tmp/list | grep deleting | sed 's/deleting //'
Will it overwrite the configuration in /etc/backuppc/config.pl?
Thanks very much for any help you can provide.
Dan Smisko
Balanced Audio Technology
1300 First State Blvd.
Suite A
Wilmington, DE 19804
(302)999-8855
(302)999-8818 fax
d...@balanced.com
Les Mikesell wrote:
> On 1/22/2010 4:
you would need to move up to 15K rpm drives to have a very large array and
the cost will grow exponentially trying to get such a large array.
as Les said, look at a zfs array with block level dedup. I have a 3TB setup
right now and I have some been running a backup against a unix server and 2
lin
>
> Any thoughts on why it is taking so long?
>
what are the specs on the backup server and the client? CPU & RAM
specifically.
what is their connectivity?
is the 5GB small files, large files, or a mix?
what is the system load on the backup server and the client?
would not be much fun either. My preference would be
software RAID
for portability, but I don't know about performance. Does software RAID
work on a Linux
root drive?
Thanks again for your help.
Dan Smisko
Les Mikesell wrote:
The answer to this is going to depend on how you inst
quickest procedure to get the
current backup
system back up and running. I apologize if there's already a document
for this; please
point me to it.
Thanks very much.
--
Dan Smisko
Balanced Audio Technology
1300 First State Blvd.
Suite A
Wilmington, DE 19804
(302)999-8855
(302)999-8818 fax
d...@ba
On Thu, Dec 31, 2009 at 8:36 AM, Peter Vratny wrote:
> mark k wrote:
> > Agreed sas drives are the way to go, just built a backup server with
> > 10 300gb sas running in a raid 50, going to hopefully replace 2 backup
> > servers that were using SATA storage.
>
> This is just a question of price.
an update to my zfs dedupe test.
opensolaris-dev build 129 (first build to enable dedupe)
I created a zfs volume that I export over iSCSI, then ran a gigabit crossover
cable to my test backuppc server running ubuntu 9.04. I mount the iscsi
volume at /var/lib/backuppc.
Let's just say that this has
Don't know how many of you follow solaris/nexenta/ZFS but here is a fresh
tidbit:
The beta of nexenta core 3 can install backuppc out of box and the release
will have zfs v21 which includes online dedupe as well as 'send' dedupe so
you can mirror systems at the block level while only sending the ch
FYI, try Microsoft's ImageX. It's free and does shadow-copy based disk
images. It can also be used like Ghost used to be, in that you can put the
image on bootable media with FreeDOS and run the command-line version to
re-image a PC from cd/dvd, which is nice for remote bare-metal restores.
On Wed, No
with 370,000 files rsync should use 370,000*100B = 35MB +/- 10% on each side.
How fast is your CPU? Are you sure that you can process the checksums fast
enough?
Are you compressing the rsync process, and if so, what compression level?
rsync compression at level 3 is only slightly worse than level 9 bu
I have been successfully deleting files from backups for some time. I use
a basic `find -iname -exec rm {} \;` to hunt down files
and delete them. I have never had any problems recovering the data, as the
restore process doesn't have a file list it goes off of but just processes
each file
Any thoughts on ZFS and deduplication? It's coming in build 128 of
opensolaris and also in nexenta core 3.
It does block level, online deduplication but apparently eats up tons of RAM
and needs some CPU power.
It's a pretty interesting thought to rsync over data to a pc/hostname/date/
folder and h
nt in backuppc
even with that caveat.
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
> anyone done this? How easy is it to roll back from USB?
All you need to do is unmount your backuppc filesystem, and dd from the
raw device containing the filesystem to your new device.
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
SC
On Wed, Oct 28, 2009 at 09:25:31PM -0600, dan wrote:
> You can tar up the whole pool directory and put it on an external drive
> pretty easily.
Serious question: of what value is a backup of (only) the pool directory?
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-35
read the data on the rest of the tapes.
OTOH, with a tape backup of a filesystem, god knows what you have.
certainly you'll have some data, but hope your filesystem code is good
at ignoring errors without panicking.
danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 |
You can tar up the whole pool directory and put it on an external drive
pretty easily. Just make sure that backuppc is not running when you do this
-OR- do an LVM snapshot and then back up the snapshot.
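For the archive, the snapshot variant in outline (volume and mount names are invented):

# snapshot the volume holding the pool while backuppc keeps running
lvcreate -s -n bpc-snap -L 10G /dev/vg0/backuppc
mount -o ro /dev/vg0/bpc-snap /mnt/snap
tar -cf /mnt/external/pool-backup.tar -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/bpc-snap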
I have been using rsync to sync two servers for a long time but have
recently started experimen
results of a twisted and desperate mind :)
On Wed, Oct 28, 2009 at 2:45 AM, Tyler J. Wagner wrote:
> On Wednesday 28 October 2009 02:03:52 dan wrote:
> > > The only issue is that it cannot remove existing
> > > files in the restore target directory (think "rs
I have changed a lot of my setup in the past months and am using DPM from
microsoft on our server 2003 and 2008 infrastructure. I used to use
backuppc but it is not the best tool for Windows servers. DPM can do
rolling backups in 15 minute intervals and has some nice client side agents
to do suc
>
> The only issue is that it cannot remove existing
> files in the restore target directory (think "rsync -a --delete"), so be
> sure
> to restore to a basic OS install with nothing else on it.
>
> I have gotten around this by touching each file in the target, doing the
restore (which restores ti