try
su backuppc -c id
what is your output?
did you create the user backuppc? if so, does backuppc have a home
directory and does it have a default shell of /bin/bash?
what is the output of
su backuppc -c "/usr/local/backuppc/bin/BackupPC -d"
just for fun, try running the init script as the backuppc user.
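Those first checks can be run together as a quick sketch — the user name backuppc is the Debian/Ubuntu default and is an assumption here; adjust it if your install uses something else:

```shell
#!/bin/sh
# Sanity-check the backuppc account: does it exist, and what are its
# home directory and login shell?
if id backuppc >/dev/null 2>&1; then
    getent passwd backuppc | awk -F: '{ print "home:", $6; print "shell:", $7 }'
else
    echo "no backuppc user found"
fi
```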
yes, corruption can happen silently in the background! regular fsck can
catch these errors while they are correctable or before they cause any major
loss. ext3 and xfs have no online fault-correction mechanism to detect silent
corruption. ZFS has this mechanism but I don't believe many people are
using it.
I am using a product called hamachi that builds a sort of VPN. There is a
version for windows and one for linux. the free version allows something like
10 users per private network. they all get an IP address on this network
and you can run backuppc over this link. the IP address functions as a static
address.
WHS is 'OK' as a backup server in a very small, windows-only environment.
It can only back up 10 machines, and only machines that are running
windows XP or later. Plus it has a major file corruption bug under load that
microsoft has yet to patch since its discovery in Oct '07.
the Centos5 RPM works perfectly on RHEL5. I prefer a debian based system as
it is quicker and easier to build a dedicated BPC server with no extra
stuff, Centos5/RHEL5 is a lot heavier for the default install.
On Mon, Apr 28, 2008 at 9:33 PM, Kenneth Porter <[EMAIL PROTECTED]>
wrote:
sudo settings to allow ONLY these programs to
be run with sudo without a password.
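Something like the following sudoers fragment is what I mean — the exact BackupPC_* paths are assumptions, so check where your install put them. It is written to a temp file here; validate it with `visudo -c -f` before dropping it into /etc/sudoers.d/:

```shell
#!/bin/sh
# write a candidate sudoers fragment limiting passwordless sudo to two
# specific BackupPC commands (paths are illustrative)
cat > /tmp/backuppc-sudoers <<'EOF'
backuppc ALL=(root) NOPASSWD: /usr/share/backuppc/bin/BackupPC_dump, /usr/share/backuppc/bin/BackupPC_tarCreate
EOF
grep NOPASSWD /tmp/backuppc-sudoers
```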
On Tue, Apr 29, 2008 at 2:40 AM, Tino Schwarze <[EMAIL PROTECTED]>
wrote:
> On Mon, Apr 28, 2008 at 09:01:05PM -0600, dan wrote:
>
> > I am using a product called hamachi that builds a sort of VPN. T
add backuppc to a more privileged group and set up sudo so that you can run
some very specific BackupPC_* commands without a password. Also, remove the
password for the user backuppc in the CGI interface and configure some other
user to have access to everything so that the CGI can't be exploited to run
commands.
are as easy as an ubuntu server
> setup.
>
>
> On Wed, 30 Apr 2008 07:11:58 -0700,
> [EMAIL PROTECTED] wrote:
> >
> > On Apr 29, 2008, at 11:26 PM, dan wrote:
> >
> > > the Centos5 RPM works perfectly on RHEL5. I prefer a debian based
> > > system
This is true, samba does in fact support hard links but it is really just
like another NFS! the filesystem on the other end of the cifs share must
support hardlinks, as samba is just the layer translating over the network.
Samba can also scale better than NFS.
I don't know if samba support hardlin
aster than samba, but others say samba is faster.
It seems like the OS involved on each side is the determining factor,
and there is no chart that I could find to determine which was faster.
On Sat, May 3, 2008 at 11:34 PM, dan <[EMAIL PROTECTED]> wrote:
> This is true, samba
You should use either raid1 or raid10; avoid raid5. the reason software
raid is likely to be faster than hardware raid is that the software raid
gets the benefit of your CPU, which is quite fast, while many of the more
affordable hardware raid cards have a weaker CPU. Also, and maybe more
importa
<[EMAIL PROTECTED]>
wrote:
> Hi dan,
>
> On Sun, May 04, 2008 at 12:14:41AM -0600, dan wrote:
>
> Thanks for the work. This clears things up.
>
> > I did a test on ubuntu 8.04
> >
> > created a samba share at /root/share
> >
> > i installed smbfs
just an FYI, if you are trying to use the backuppc server to send email, even
if only for use with backuppc, you need to make sure email traffic is not
blocked by your isp. if it is, see if you can connect to a
smart host on an open port to send email. otherwise, backuppc (sendmail)
will sen
does the ReadyNAS have rsync or are you just mounting up the NFS share
locally? If you are trying to use rsync then I suspect that the ReadyNAS
doesn't have enough RAM. Also, how much RAM does your system have? and are
you using rsync? rsync eats up RAM like candy when you have a high file
count.
It would be nice to have a logout button. :)
On Mon, May 5, 2008 at 7:12 AM, Rob Owens <[EMAIL PROTECTED]>
wrote:
> This is a function of your browser, I believe. Firefox remembers your
> authorization credentials until you close the browser. You must close
> the browser, and not just the parti
I could add that any system crash from a (linux software raid) member-disk
failure is likely a problem with the disk controller driver! All
hotswappable SATA devices should work with no adverse effects on the
system and allow hot swapping and rebuilds, assuming the SATA controller
driver is
I can verify this on ubuntu. ubuntu defaults to exim; just install
postfix and exim will be removed and the appropriate links will be put in
place. This should also be true of any debian system.
I use postfix on all of my ubuntu servers as I have never bothered to mess
with exim and I don't li
definitely stay away from rsync on the readynas because it just doesn't have
the ram or cpu to handle a massive rsync. as stated above, your timeout
looks to be too low. raise that number and good luck.
On Tue, May 6, 2008 at 7:37 AM, Les Mikesell <[EMAIL PROTECTED]> wrote:
> Leandro Tracchia wr
you must change the host in the main config file, rename the individual host
config file, and rename the directory for the host in the 'pc' directory.
that should handle it.
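Those three rename steps can be sketched as a script — run here against a throwaway mock layout so it is safe to try; point CONFDIR and TOPDIR at your real /etc/backuppc and pool (e.g. /var/lib/backuppc) instead. The paths assume a Debian-style install:

```shell
#!/bin/sh
# the three rename steps: hosts file entry, per-host config, pc/ directory
set -e
OLD=oldhost NEW=newhost
CONFDIR=$(mktemp -d)            # stand-in for /etc/backuppc
TOPDIR=$(mktemp -d)             # stand-in for the pool top directory
mkdir -p "$TOPDIR/pc/$OLD"
echo "$OLD 0 backuppc" > "$CONFDIR/hosts"
touch "$CONFDIR/$OLD.pl"

sed -i "s/^$OLD /$NEW /" "$CONFDIR/hosts"   # 1) main hosts file
mv "$CONFDIR/$OLD.pl" "$CONFDIR/$NEW.pl"    # 2) per-host config file
mv "$TOPDIR/pc/$OLD" "$TOPDIR/pc/$NEW"      # 3) host dir under pc/
cat "$CONFDIR/hosts"
```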
On Tue, May 6, 2008 at 2:23 PM, Tino Schwarze <[EMAIL PROTECTED]>
wrote:
> On Fri, May 02, 2008 at 05:08:42PM +1000, Nick T
I have problems with a lot of drivers on 64bit. I have had to be a little
bit picky on hardware selection, usually building my own servers with Tyan
hardware as they seem to do a good job of choosing 64bit-linux-friendly
hardware (HP and Dell are ok too).
I have to say that 64bit linux is much m
I think that you could do this with a bash script in cron. do a `df
/var/lib/backuppc` and determine if the drive is over the limit (you can
read the percentage setting from the backuppc config file and use that
variable there), then if you are over that percent do an `ls -l $TopDIR/pc/`
and choose th
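A minimal sketch of that cron check — POOL and LIMIT are placeholders (it checks / here so it runs anywhere; on a real server use your pool path and mirror the percentage from BackupPC's $Conf{DfMaxUsagePct}):

```shell
#!/bin/sh
POOL=/          # e.g. /var/lib/backuppc on a real backuppc server
LIMIT=95
# df -P gives one portable line per filesystem; field 5 is "NN%"
USED=$(df -P "$POOL" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
if [ "$USED" -gt "$LIMIT" ]; then
    echo "pool at ${USED}% - over limit"
else
    echo "pool at ${USED}% - ok"
fi
```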
?? How is the interface complicated? I have a number of VERY low tech users
that are capable of using the standard interface to back up their computers.
I made a fairly simple PDF walkthrough for them as a reference and I have
virtually zero issues.
Are you trying to avoid a potential issue or do y
instead of using the entry in /etc/hosts, just put the IP in the client alias
in the host's config file. backuppc uses nmblookup to resolve the IP from the
HOSTNAME, and nmblookup does not check /etc/hosts.
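For example, in the host's per-host config file — written to /tmp here just to show the line; on a Debian-style install it would live at /etc/backuppc/&lt;hostname&gt;.pl, and the host name and IP below are made up:

```shell
#!/bin/sh
# pin the host to a fixed IP so backuppc never needs nmblookup for it
cat > /tmp/winbox.pl <<'EOF'
$Conf{ClientNameAlias} = '192.168.1.50';
EOF
cat /tmp/winbox.pl
```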
On Mon, May 12, 2008 at 5:10 PM, Les Mikesell <[EMAIL PROTECTED]> wrote:
> Kurt Jasper wrote:
> >
> >
Jasper <[EMAIL PROTECTED]>
wrote:
> Dan,
>
> thanks for the hint, I haven't use $Conf{ClientNameAlias} before.
> But the 'problem' with this setup will be that I need to edit more than
> one file when the IP-address of the backup client will be changed
>
overwhelmed and feel like they can do a restore without contacting
> me first. I might even take out the option to do a direct restore, just
> so they don't overwrite anything important. We'll see.
>
> So how about it? Can anybody help me get started on figuring out how
>
It actually does support "resuming" of backups because the failed backup is
a partial and the next one will take advantage of the files that have already
been transferred. unfortunately, rsync will still need to scan all the files
and do checksums and all that good stuff, so it still takes time.
On
definitely use rsync/rsyncd if possible. I do find that smb can be much
faster on a local lan but you have to deal with file locks on the client
side as well as permission issues, as samba does not give all the same
information as the real filesystem the files sit on.
On Sat, May 17, 2008 at 10:1
rsync backups are quite slow. over slow links this is more than made up for
by the decrease in network bandwidth used but on a high speed lan it is very
slow. I am on gigabit from my desktop to my backuppc server but rarely
exceed 6-8MB/s during a backup.
That makes sense some of the time as rsy
postfix perfectly emulates the command line functionality for sending
emails. it may not handle all the same status syntax but that is not
important for just sending mail. if you are on ubuntu or debian, just
installing postfix with apt will automatically make the appropriate links.
On Fri, Ma
Samba will copy every file in its entirety, eating up bandwidth! and it
doesn't work remotely very well. It is also delivered via an abstraction
layer to the filesystem, so you can't pull over permissions; every file ends
up having the permissions assigned to the samba share, and ownership
chan
archive bit = evil. the archive bit is actually satan's uncle on his
mother's side.
anyway,
I don't know if there is a solution to the combination of I/O bottlenecks
and file checksumming with rsync taking a long time.
Rsync v3 does greatly (samba's words) improve memory usage and file list
trans
ng on cgi?
>
> -Rob
>
> dan wrote:
> > I think that you can do this without too much difficulty. just read up
> > on cgi a little bit. as long as you dont alter the scripts and just
> > mess with the layout and add or remove things you should be alright.
> >
>
If you run a recent version of samba, it supports hardlinks. Windows XP and
higher support hardlinks with their version of NTFS and this does work quite
well over samba.
On Thu, May 29, 2008 at 6:59 AM, Tino Schwarze <[EMAIL PROTECTED]>
wrote:
> On Thu, May 29, 2008 at 02:58:14PM +0200, Daniel D
Why do you want to avoid LVM so much? You are stuck with having just 1
filesystem because hardlinks don't traverse filesystems, so you are stuck
with 1 large drive, a raid device, or LVM. I have been experimenting with
redhat's GFS via centos5 and it looks promising but I have not tried it with
backuppc
thing. It says it is very good at it but has no numbers or
anything other than a sales pitch.
On Sun, Jun 1, 2008 at 2:25 AM, Tino Schwarze <[EMAIL PROTECTED]>
wrote:
> On Sat, May 31, 2008 at 07:11:21PM -0600, dan wrote:
> > Why do you want to avoid LVM so much? You are stuc
Samba?
>
> Greetings,
> Hendrik
>
> --
> *From:* [EMAIL PROTECTED] [mailto:
> [EMAIL PROTECTED] *On behalf of *dan
> *Sent:* Saturday, May 31, 2008 17:54
> *To:* backuppc-users@lists.sourceforge.net
> *Subject:* Re: [BackupPC-users]
did you recently update the windows servers? any chance you have installed
a new antivirus or enabled windows defender? are you running active
directory? windows servers do so much self-modification with windows
updates or installed software or even changes to group policy and possible
exclusion
ok, so you know the username has read access everywhere, and you haven't
changed any antivirus or antispyware.
have you updated the backuppc OS?
did you check drive space on the backuppc server? and check the max disk
usage setting in the config?
can you mount one of the remote drives on the
>> Is there a way to create a hard-link on a (Unix or Win (NTFS)) Server from
>> Windows over Samba?
>>
>> Greetings,
>> Hendrik
>>
>> --
>> *From:* [EMAIL PROTECTED] [mailto:
>> [EMAIL PROTECTED] *On behalf of *dan
>
if you are concerned you can tune your virtual memory subsystem. google for
your distro and "tune vm subsystem" and you will have great luck.
linux tends to cache commonly used files and libraries for faster access but
can de-commit them as needed with very little performance hit.
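For example, two commonly tuned vm knobs can be inspected (not changed) like this on Linux; which values suit a backup server depends on your workload, so treat any suggested numbers as starting points rather than recommendations:

```shell
#!/bin/sh
# current swap eagerness and dirty-page writeback ratio
cat /proc/sys/vm/swappiness
cat /proc/sys/vm/dirty_ratio
# to change one temporarily (as root): sysctl vm.swappiness=10
```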
On Mon, Jun 2, 20
the dumps can take a lot of RAM if you are using rsync, as it needs about
100 bytes per file, which adds up quickly. I do not believe that the nightly
jobs take nearly as much as rsync. rsync is 1 large job that depends on all
files being known for hardlinks while the nightly jobs are more sequential
what distro are you using? you should be able to install samba with your
package manager and be done. you could change your workgroup to match your
domain if you like but shouldn't even have to do that. backuppc uses
nmblookup, which is part of the samba package, to determine the ip address
from the hostname.
be sure to enable "backports" so you can get v3.0. it is quite easy to
upgrade 3.0 to 3.1 with the installer from the backuppc site as well
On Tue, Jun 3, 2008 at 12:05 PM, Nils Breunese (Lemonbit) <[EMAIL PROTECTED]>
wrote:
> fatima ech-charif wrote:
>
> > I would like to install backuppc in ubu
I actually do this on my tru64 unix and my sco unix machines. This works
quite well. It also uses up way less RAM with rsync, which is good because
some of the tru64 machines don't have a lot of extra RAM.
I have one specific tru64 alpha 4cpu machine that I have 4 hosts setup for
and I trigger a
WLAN is going to slow you down because of latency. rsync does quite a bit
of 2-way communication. in fact it does it for every single file it
inspects, as it sends checksums back and forth. wireless is a killer
here; it often has latencies that are longer than a remote DSL site!
The other thi
Install a new backuppc server ( or one in a virtual machine ) and then mount
your NFS network drive to the $TopDIR which is likely to be
/var/lib/backuppc if you are using debian or ubuntu. You can go to vmware
and download an ubuntu 8.04 server image that is ready to run and do a quick
'apt-get i
a) you can only use 1 filesystem for backuppc, so you either need just 1
filesystem or need LVM or something to merge multiple disks into 1
filesystem.
b) rsync is perfect for many non-changing files as it will make no attempt
to back them up; it will just check to make sure they have not changed
it has been asked before, and i believe the answer was that backuppc is
coded to use rsync 2.x and will only speak that, without using any of the
new features. hopefully 3.2 will include rsync 3 support.
On Sun, Jun 8, 2008 at 11:56 PM, Hendrik Friedel <[EMAIL PROTECTED]> wrote:
> Hello,
>
> Has anyo
what filesystem are the files sitting on?
On Tue, Jun 10, 2008 at 12:38 PM, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> Tino Schwarze wrote:
> > On Sun, Jun 08, 2008 at 10:11:04PM +0300, Evren Yurtesen wrote:
> >
> >> I have created an bzip2 archive with split and it is in 2 files. However
> I
>
install via "apt-get install backuppc",
then upgrade with the installer script. this handles all deps and also gets
you the most up-to-date version.
On Mon, Jun 16, 2008 at 11:41 AM, Nils Breunese (Lemonbit) <[EMAIL PROTECTED]>
wrote:
> fatima ech-charif wrote:
>
> > http://backuppc.sourceforge.n
SIGSTOP on the client side will cause the rsync job to fail, but SIGSTOP on
the server side should not cause a failure.
On Thu, Jun 19, 2008 at 4:38 AM, mark k <[EMAIL PROTECTED]> wrote:
> Unfortunately a SIGSTOP and SIGCONT will not work, after a SIGCONT the
> rsync job just dies.
>
> I did howe
do you have messages queued up? run 'mailq' and see if there are messages.
Does this server have a mail server installed? is the smtp traffic being
filtered by a firewall or your ISP?
what is your OS? and distribution, if linux?
what is your MTA? sendmail? postfix? exim?
You may need to set up the
su backuppc -c "/usr/share/backuppc/bin/BackupPC_tarCreate -h host -n
backupnumber -s sharename sharepath" > restore.tar
example:
su backuppc -c "/usr/share/backuppc/bin/BackupPC_tarCreate -h centos5 -n 37
-s /root /" > restore.tar
On Wed, Jul 2, 2008 at 5:38 PM, Ryan Manikowski <[EMAIL PROTECTED]>
just for reference:
time for i in `seq 1 1`; do mkdir $i; done
ext3
real    0m15.536s
user    0m7.764s
sys     0m6.108s
bonnie++
xfs
real    0m14.365s
user    0m7.856s
sys     0m6.148s
reiserfs
real    0m13.679s
user    0m8.053s
sys     0m6.024s
On Wed, Jul 2, 2008 at 8:19 AM, Christoph Litau
more situations for the -W flag:
1) if either your client or server has a slow CPU. You should time a full
backup with and without -W. Many times I find that -W improves
backup times even over remote connections if you have a slow CPU in the
mix. I back up an older Alphaserver 1Ghz box and
do a pre-backup script to mount the ftp site using your favorite method, and
a post-backup script to dismount it.
doing it by cron is a hack and will likely cause you to have failed backups
if a minor change is made. you should use the pre- and post-backup commands
to handle this. you could mirror
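In BackupPC terms those hooks are $Conf{DumpPreUserCmd} and $Conf{DumpPostUserCmd}; a sketch of a per-host config follows, where the mount point, the mount method, and the file name are assumptions:

```shell
#!/bin/sh
# pre/post hooks that mount the ftp site before the dump and unmount after
cat > /tmp/ftphost.pl <<'EOF'
$Conf{DumpPreUserCmd}  = '/bin/mount /mnt/ftpsite';
$Conf{DumpPostUserCmd} = '/bin/umount /mnt/ftpsite';
EOF
grep -c UserCmd /tmp/ftphost.pl
```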
?? this is the backuppc list my friend. i suggest you find the cygwin list
for specific help. may i suggest you also uninstall cygwin and then
reinstall it as a more privileged user.
On Mon, Jul 7, 2008 at 9:35 AM, fatima ech-charif <[EMAIL PROTECTED]>
wrote:
> i would like install cygwin in wi
?? why would you do that?? you will certainly have many failed backups if
you do this. Also, the quota would have no useful function except to break
backups.
You might be ok if you just set up SOFT quotas and put the hard limits
extremely high. Then the backuppc server would send an email each ti
nadia kheffache <[EMAIL PROTECTED]>
wrote:
>
> ok dan, thank you for your reply.
> I asked this question, because I just finished my stage on installing
> backuppc, and I am preparing to questions of jury, like quotas, security,..
>
> Regards
>
> Nadia
>
> --- E
1) Encrypted disk. The only reason to encrypt the backup volume is so that
the files cannot be accessed if a drive is pulled or if someone tries to
backdoor linux security by booting a livecd or something. Once the system
is running, the encryption is already bypassed.
2) rsync has no ability to encr
add a third member drive to your software raid so you have a 3-device
raid1. set up a script for when you hotplug the drive to automatically add
it to the raid1, and another script to disconnect it from the raid (by
serial number or UUID or something unique to each disk).
you can access the disk on
for remote windows hosts, you are probably best advised to either use just
rsync via cygwin or deltacopy, or tunnel that over cygwin's ssh or a separate
VPN connection.
for a remote backuppc server, what would you mirror back to? it wouldn't
make much sense to mirror back to the 'primary' backuppc
WindowsXP can do shadow copies but it does not have the powershell or
whatever the heck it is called and so the tools are not there by default.
On Wed, Jul 9, 2008 at 11:08 AM, Les Mikesell <[EMAIL PROTECTED]> wrote:
> Paul Mantz wrote:
> >
> > The BackupPCd agent that's mentioned in the limitati
On Thu, Jul 10, 2008 at 6:45 AM, Carl Wilhelm Soderstrom <
[EMAIL PROTECTED]> wrote:
> On 07/09 06:23 , dan wrote:
> > for remote windows hosts, you are probably best advised to either use
> just
> > rsync via cygwin or deltacopy, or tunnel that over cygwins ssh or a
> seper
1) Is is also possible to add a 3rd harddrive via USB or Firewire to the
RAID1 and then using your idea to detach and reattach the harddrive?
yes, that is a very good method. I suggest you try to use esata, but usb or
firewire will work; esata is just much faster.
2) When I attach the harddrive t
that
(Unable to read 4 bytes)
that usually means a problem with authentication.
also, rsync over ssh is problematic on windows clients. better to either
use just rsyncd or run rsyncd over a vpn tunnel.
On Thu, Jul 10, 2008 at 9:34 AM, fatima ech-charif <
[EMAIL PROTECTED]> wrote:
>
> hi
>
> i
there is no official message that says it's dead, but it's dead. no activity
for 2 full years, and keene's last post to this list is like 2 full years
ago.
i'd guess it's completely dead!
On Thu, Jul 10, 2008 at 3:19 PM, dnk <[EMAIL PROTECTED]> wrote:
> In my search to become more familiar with bac
Has anyone ever tried using backuppc with gluster?
I have setup a quick glusterfs on some test rigs and find it to be a very
very interesting filesystem.
gluster does run under fuse but performs like a native filesystem. I get
11.7MB/s on a 100Mb line, which is just as fast as iscsi. I can eve
if it's the server that the write was going to, it's lost. if it is the
remote server, then it will resync when it is back online.
On Fri, Jul 11, 2008 at 1:07 AM, Les Mikesell <[EMAIL PROTECTED]> wrote:
> dan wrote:
>
> you can also
>> write directly to the exported direct
I have the same issue when restoring large amounts of data, so I just use
the command line interface. I typically backup to DVD so I pull a tar.gz
off and split it out into 100MB chunks, and then par2 it at 25% and drop
those on DVD.
On Fri, Jul 11, 2008 at 3:22 AM, Ishan Patel <[EMAIL PROTECTED]
d/e0.0 && mkdir /mnt/e0.0 && mount /dev/etherd/e0.0
/mnt/e0.0"
or better yet, add it to your raid array with mdadm.
On Sun, Jul 13, 2008 at 6:44 PM, Kurt Tunkko <[EMAIL PROTECTED]> wrote:
> Hello Dan & ...,
>
> back from a 2 weeks vacation time to implemend y
29 AM, Les Mikesell <[EMAIL PROTECTED]> wrote:
> dan wrote:
>
>> if its the server that the write was going to, its lost. if it is the
>> remote server, then it will resync when it is back online.
>>
>
> Being able to recover from a complete loss of the loca
Definitely do a 3-device raid with 2 always-on devices and a 3rd swappable.
As far as an automated script to sync the drives, I say DON'T DO IT! your
best bet is to make a script that adds the drive to the mirror set, waits
for it to sync completely, and then removes it from the set. then you can
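The add/wait/remove cycle might look like the sketch below — written to a file and only syntax-checked here, since it needs real md hardware and root; the array and device names are placeholders:

```shell
#!/bin/sh
# write the cycle out, then verify the shell syntax without executing it
cat > /tmp/mirror-cycle.sh <<'EOF'
#!/bin/sh
MD=/dev/md0
DISK=/dev/sdc1
mdadm "$MD" --add "$DISK"
# block until the mirror has fully resynced
while grep -q resync /proc/mdstat; do sleep 60; done
mdadm "$MD" --fail "$DISK"
mdadm "$MD" --remove "$DISK"
EOF
sh -n /tmp/mirror-cycle.sh && echo "syntax ok"
```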
1:09 PM, Kurt Tunkko <[EMAIL PROTECTED]> wrote:
> Hey Dan,
>
> dan wrote:
> > AOE is quite simple.
>
> :-o ... wait, not THAT simple ... simple is something like: throw in
> coin here -> get a beer :-)
>
> I just think that I don't fully understand the
i have tested the send and receive functionality of zfs on openbsd. the
problem is that it sends the entire fileset, block by block. this is not
going to work for remote replication within a reasonable timeframe.
rsync is about the only option. i know it does not scale well with file
count but
, Jul 25, 2008 at 12:03 PM, Les Mikesell <[EMAIL PROTECTED]>
wrote:
> dan wrote:
>
>> i have tested the send and receive functionality of zfs on openbsd. the
>> problem is that it sends the entire fileset, block by block. this is not
>> going to work for remote r
I have investigated cluster filesystems for backuppc. You will have very
poor I/O on all cluster filesystems. I/O is the most important factor for
backuppc and the performance difference between a local filesystem(or even
iscsi or aoe) and a cluster filesystem can be an order of magnitude slower
rsync 3 will work but will function just like rsync 2 did without the new
features.
On Tue, Aug 5, 2008 at 8:00 PM, Bernhard Egger <[EMAIL PROTECTED]>wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> I had similar issues (backups wouldn't complete), and after upgrading
> both servers a
quite simply, raid5 synchronizes platters and has a wait for parity writes,
which leads to a 1:(x-1) performance hit because of the parity compute,
wait, and write for each block. on smaller arrays such as a 3-device raid5,
that is 1:(3-1) or 1:2, which is 1 parity write to 2 data writes, or 1/3 of
all wr
you can get a 3.1 package from
http://us.archive.ubuntu.com/ubuntu/pool/main/b/backuppc/backuppc_3.1.0-3ubuntu2_all.deb
you can install this quite easily via
apt-get install sendmail
dpkg -i backuppc_3.1.0-3ubuntu2_all.deb
apt-get -f install
sudo htpasswd /etc/backuppc/htpasswd backuppc
done
I second the comment on the reformat.
reasons:
1) even though ntfs-fuse is probably the most optimized and fastest fuse
plugin, it is still in fuse and still has everything processed on the CPU
with no DMA at all because of that. For transferring files ntfs-fuse can be
pretty fast but I/O basically
I'm not entirely sure of the issue here, but you could do something to stop
the big file from filling up the whole drive and possibly catch some of the
contents.
make a new file that you will mount on loopback and direct the error log
there.
dd if=/dev/null of=/logtmp.diskimage bs=1M count=1 seek=
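Filling in the idea above as a sketch — sizes and paths are illustrative; dd from /dev/null with a seek creates a sparse fixed-size image, so a log placed inside it can never outgrow it:

```shell
#!/bin/sh
# create a sparse 16 MiB image; dd hits EOF on /dev/null immediately and
# seek extends the file to the requested size
dd if=/dev/null of=/tmp/logtmp.img bs=1M seek=16 2>/dev/null
ls -l /tmp/logtmp.img
# then, as root:
#   mkfs.ext3 -q -F /tmp/logtmp.img
#   mount -o loop /tmp/logtmp.img /mnt/logtmp
#   and point the error log at /mnt/logtmp
```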
you would get faster performance if you formatted the drive ext3 and then
installed the ext3 filesystem driver for windows. fuse is pretty slow
and is terrible with I/O speed while ext3 is decent. the ext3 driver for
windows is a native filesystem plugin so it runs just like ntfs (though a
bi
fix the servername issue by either fixing your hostname in /etc/hosts and
/etc/hostname, or adding 'ServerName hostnamewhatever' to your
/etc/apache2/apache2.conf file.
now try; it looks like apache is just not liking the setup. This usually
happens because your hostname in /etc/hostname is not in /etc/
I also have found that antivirus software running on the client can kill
performance as the AV will try to scan each file on access, which will of
course kill performance. just something else to consider.
good luck
On Sun, Aug 10, 2008 at 10:54 PM, naigy <[EMAIL PROTECTED]>wrote:
>
> Thanks aga
gnu tar can be compiled for arm cpus. do you have a space issue such that
you can't use full gnu tar? I suggest you ditch busybox tar and move to gnu.
this would be your simplest solution, unless you lack a machine with gcc
and make to build it on, but any linux machine with those tools should be
able
a simpler solution is to use the free program Deltacopy, which is a nice
packaging of rsync with a GUI. You install it and then set it up to run as
a service as the admin user. You may select the directories to backup via a
GUI and start and stop the service via the GUI. This option works perfec
which version of windows on the client? do you have exclusions setup in the
master config file to skip those filetypes?
On Mon, Sep 1, 2008 at 8:48 AM, Alex Dehaini <[EMAIL PROTECTED]> wrote:
> Hi Guys,
>
> I am backing up a windows machine using smb. The backup works but I can't
> backup certai
sing windows XP professional. I haven't setup any inclusion in
> > the master config file; what settings should I edit to allow these
> > file types or to allow all file types?
> >
> > Lex
> >
> > On Mon, Sep 1, 2008 at 3:26 PM, dan <[EMAIL PROTECTED]
> > <ma
I have run some thorough tests with various NAS/SAN and network filesystems
like NFS/SMB/CIFS, so here is a bit of what I know.
local filesystem > iscsi > aoe > NFS > SMB/CIFS
a local filesystem will basically always be faster until you move up to a
many-drive SAN with bonded Gigabit NICs. 1 Gig
wow. i'd have to say that you can't trust that backup! partial backups
every night? no consistent time at which there would actually be a full
backup? *maybe* every now and then? This is as good as not having a
backup.
consider:
1) do you need to back up all of the files on that system? are there
full or incremental really describes how they are stored on backuppc. a
full will hang around longer than an incremental. As far as the actual
transfer, the only real difference is that a 'full' doesn't skip files that
have the same mtime whereas an incremental skips those files. This causes a
ful
I have a thought here; thought i'd run it through the users list before
dropping it on the devs.
My idea was to add a small step to the backuppc process to validate that all
files that should be transferred were transferred, and automatically flag
the backup as a partial if there is a discrepancy.
Th
. Then the next backup would have very little to
transfer and may pick up the missing file.
On Wed, Sep 3, 2008 at 3:16 PM, Les Mikesell <[EMAIL PROTECTED]> wrote:
> dan wrote:
>
>> I have a thought here, thought id run it through the users list before
>> dropping it on the
You are giving the wrong info here, so you might be looking at it wrong.
that 3002ms is the total time the ping operation took. This isn't really
comparable to the actual ping times.
here:
PING www.yahoo-ht3.akadns.net (209.131.36.158) 56(84) bytes of data.
64 bytes from f1.www.vip.sp1.yahoo.com (
the real differences between an rsync full and incremental in backuppc are
the 3 following:
1) a full has rsync ignore matching mtimes and checksum all files, while an
incremental trusts matching mtimes
2) backuppc has a different keep/delete schedule for fulls vs incrementals.
3) an incremental backu
not to hijack the thread here, but this is not just a valid point, but a
very intelligent point. Picking a specific stable distro and package list
allows a person to maintain all identical setups, document the system and
the configuration, and redeploy identical systems. I run ubuntu 7.04 with
back
I guess I never thought to say this before but I run 2 local DNS servers. I
have DHCP update DNS so that I can have an accurate local name service and
don't have to rely on nmb or wins.
I find that many network services run significantly faster with this setup.
I also have my remote dhcp servers up
backuppc will even send warning emails for disabled hosts! you must remove
the email notify address!
On Thu, Sep 4, 2008 at 8:11 PM, Craig Barratt <
[EMAIL PROTECTED]> wrote:
> John writes:
>
> > We were having a weird problem with nmblookup firing for a host that
> > was disabled with:
> >
> >
ll <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
>
>> On Wed, Sep 03, 2008 at 06:57:39PM -0600, dan wrote:
>>
>>> this was on an rsync incremental. There was no error in the Xfers log,
>>> it
>>> is like the file was not there. The atime