On 3/26/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
Let's hope this doesn't wrap around... as you can see, the load is in the 0.01-0.1 range.
1 users    Load 0.12 0.05 0.01    Mar 27 07:30
Mem: KB   REAL   VIRTUAL    VN PAGER   SWAP PAGER
Tot
David Rees wrote:
On 3/26/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
Let's hope this doesn't wrap around... as you can see, the load is in the 0.01-0.1 range.
1 users    Load 0.12 0.05 0.01    Mar 27 07:30
Mem: KB   REAL   VIRTUAL    VN PAGER   SWAP
brien dieterle wrote:
Jason Hughes wrote:
Evren Yurtesen wrote:
Jason Hughes wrote:
That drive should be more than adequate. Mine is a 5400 RPM, 2 MB buffer
clunker. Works fine.
Are you running anything else on the backup server, besides
BackupPC? What OS? What filesystem? How
Craig Barratt wrote:
Evren writes:
Host   User   #Full   Full Age (days)   Full Size (GB)   Speed (MB/s)   #Incr   Incr Age (days)   Last Backup (days)   State   Last attempt
host1          4       5.4               3.88
Evren Yurtesen wrote:
Server info:
HP DL380 G4
debian sarge
dual processor 3.2ghz xeon
2GB Ram
5x10k rpm scsi disks, raid5
128MB battery backed cache (50/50 r/w)
ext3 filesystems
brien
You are distributing the reads and writes on 5 disks here. Don't you
think that 2.40 MB/s is a
Les Mikesell wrote:
That doesn't sound difficult at all. I suspect your real problem is
that you are running a *BSD UFS filesystem with its default sync
metadata handling which is going to wait for the physical disk action to
complete on every directory operation. I think there are
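For readers who want to act on this, a rough sketch of how one might relax the default synchronous metadata behaviour on FreeBSD by enabling soft updates (the device name below is an example, not taken from this thread; the filesystem must be unmounted or mounted read-only when changing it):

```shell
# Show the current tuneable parameters for the filesystem
# (device name is an example; substitute your own).
tunefs -p /dev/ad0s1f

# Enable soft updates so metadata writes no longer block on the
# physical disk for every directory operation.
tunefs -n enable /dev/ad0s1f
```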
David Rees wrote:
On 3/26/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
Let's hope this doesn't wrap around... as you can see, the load is in the 0.01-0.1 range.
1 users    Load 0.12 0.05 0.01    Mar 27 07:30
Mem: KB   REAL   VIRTUAL    VN PAGER   SWAP
Les Mikesell wrote:
Evren Yurtesen wrote:
Server info:
HP DL380 G4
debian sarge
dual processor 3.2ghz xeon
2GB Ram
5x10k rpm scsi disks, raid5
128MB battery backed cache (50/50 r/w)
ext3 filesystems
brien
You are distributing the reads and writes on 5 disks here. Don't you
think
Evren Yurtesen wrote:
The problem is the reads, not the writes. It takes a long time for BackupPC to
figure out whether a file should be backed up or not. Backups take a very
long time even when only a few files are actually backed up.
That's only true with rsync and especially rsync fulls where the
On Mon, 26 Mar 2007, David Rees wrote:
No kidding! My backuppc stats are like this:
For my biggest server, my stats are:
There are 36 hosts that have been backed up, for a total of:
503 full backups of total size 4200.30GB (prior to pooling and compression),
247 incr backups of total size
Les Mikesell wrote:
Evren Yurtesen wrote:
The problem is the reads, not the writes. It takes a long time for BackupPC
to figure out whether a file should be backed up or not. Backups take
a very long time even when only a few files are actually backed up.
That's only true with rsync and especially
That spare hard drive wouldn't happen to be an external USB or Firewire,
would it?
Aaron Ciarlotta
Linux/UNIX Systems Guru
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Winston Chan
Sent: Monday, March 26, 2007 8:14 PM
To:
Evren Yurtesen wrote:
What is the wall-clock time for a run, and is it
reasonable given that it has to read through both the client and server copies?
I am using rsync but the problem is that it still has to go through a
lot of hard links to figure out if files should be backed up or not.
From the
On 03/27 11:48 , Les Mikesell wrote:
Worry about your air conditioner instead - it has more effect on the
drive life. I've had as much trouble with drives that have been powered
off for much of their lives as ones that stay busy.
as an interesting data point, I have a 512MB drive that ran my
On 3/27/07, Les Mikesell [EMAIL PROTECTED] wrote:
Evren Yurtesen wrote:
What is the wall-clock time for a run, and is it
reasonable given that it has to read through both the client and server copies?
I am using rsync but the problem is that it still has to go through a
lot of hard links to figure
Evren writes:
I use rsync with 3.0.0 but it was the same speed with 2.x.x versions.
$Conf{IncrLevels} = [1];
I wonder, since I have:
$Conf{IncrKeepCnt} = 6;
Wouldn't it make more sense to use this:
$Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];
or does this make
On 3/27/07, David Rees [EMAIL PROTECTED] wrote:
Can you try mounting the backup partition async so we can see if it
really is read performance or write performance that is killing backup
performance?
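A quick way to run that experiment is to remount the pool filesystem async and repeat a backup (mount points below are examples, not from the thread):

```shell
# Linux: remount the BackupPC pool async (async is already the ext3
# default; noatime also cuts inode writes during pool traversal).
mount -o remount,async,noatime /var/lib/backuppc

# FreeBSD: update the mounted filesystem in place.
mount -u -o async /backup
```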
I have to wonder whether UFS2 is really bad at storing inodes on disk...
I went and did some
Carl Wilhelm Soderstrom wrote:
I would really like to see hard drives made to be more reliable, rather than
just bigger.
I'm not sure that can be improved enough to matter. A failure of an
inexpensive part once in five years isn't a big deal other than the side
effects it might cause, and
On 03/27 01:40 , Les Mikesell wrote:
Carl Wilhelm Soderstrom wrote:
I would really like to see hard drives made to be more reliable, rather than
just bigger.
I'm not sure that can be improved enough to matter. A failure of an
inexpensive part once in five years isn't a big deal other than
Hi Les,
Apologies for the late reply.
Michael Mansour wrote:
Hi,
Hi,
I have the following host summary:
server1  username1  2  0.6  3.36  0.772  10.4
10.4  idle  done
server2  username1  2  0.6  3.23
0.743
Michael Mansour wrote:
I have servers which have multiple full backups and incrementals, but none
of them are removed.
I don't think they are removed until the replacements are completed.
Is there a way I can just manually delete some of the backups (incrementals
and fulls) to start this process
Hi Les,
Michael Mansour wrote:
I have servers which have multiple full backups and incrementals, but none
of them
are removed.
I don't think they are removed until the replacements are completed.
Is there a way I can just manually delete some of the backups (incrementals
and fulls) to
Michael writes:
Michael Mansour wrote:
I have servers which have multiple full backups and incrementals, but none
of them
are removed.
I don't think they are removed until the replacements are completed.
Is there a way I can just manually delete some of the backups
Carl Wilhelm Soderstrom wrote:
I would really like to see hard drives made to be more reliable, rather than
just bigger.
I'm not sure that can be improved enough to matter. A failure of an
inexpensive part once in five years isn't a big deal other than the side
effects it might cause, and
On 03/27 02:49 , Les Mikesell wrote:
I find complete-system redundancy with commodity boxes to be cheaper and
more efficient than using military strength parts that cost 10x as much
and fail half as often.
That's Google's solution, and it works for them. Nothing fancy... rebooting
servers
David Rees wrote:
On 3/27/07, Les Mikesell [EMAIL PROTECTED] wrote:
Evren Yurtesen wrote:
What is the wall-clock time for a run, and is it
reasonable given that it has to read through both the client and server copies?
I am using rsync but the problem is that it still has to go through a
lot of hard
Evren Yurtesen wrote:
                        Totals                      Existing Files      New Files
Backup#  Type   #Files   Size/MB   MB/sec   #Files   Size/MB   #Files   Size/MB
245      full   152228   2095.2    0.06     152177   2076.9    108      18.3
246      incr   118      17.3      0.00     76
Evren Yurtesen wrote:
David Rees wrote:
On 3/27/07, Les Mikesell [EMAIL PROTECTED] wrote:
Evren Yurtesen wrote:
What is the wall-clock time for a run, and is it
reasonable given that it has to read through both the client and server copies?
Evren Yurtesen wrote:
#    Type   #Files   Size/MB   MB/sec   #Files   Size/MB   #Files   Size/MB
245  full   152228   2095.2    0.06     152177   2076.9    108      18.3
On Linux, with a RAID setup, async I/O, etc., people are getting slightly
better results. I think UFS2 is just fine. I wonder if there is
Evren Yurtesen wrote:
RAID5 doesn't distribute disk activity - it puts the drives in
lockstep and is slower than a single drive, especially on small writes
where it has to do extra reads to re-compute parity on the existing data.
I am confused: when a write is done, the data is distributed
On 3/27/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
David Rees wrote:
Evren, I didn't see that you mentioned a wall clock time for your
backups? I want to know how many files are in a single backup, how
much data is in that backup and how long it takes to perform that
backup.
I sent the
Hi,
Les Mikesell wrote on 27.03.2007 at 01:03:32 [Re: [BackupPC-users] RSync v.
Tar]:
Jesse Proudman wrote:
I've got one customer whose server has taken 3,600 minutes to
back up: 77 GB of data, 1,972,859 small files. Would tar be
better or make this faster? It's directly
Les Mikesell wrote:
Evren Yurtesen wrote:
RAID5 doesn't distribute disk activity - it puts the drives in
lockstep and is slower than a single drive, especially on small writes
where it has to do extra reads to re-compute parity on the existing data.
I am confused: when a write
On Mon, 2007-03-26 at 22:50 -0700, Craig Barratt wrote:
Winston writes:
I had been running BackupPC on an Ubuntu computer for several months to
back up the computer to a spare hard drive without problems. About the time
I added a new host (Windows XP computer using Samba), I started getting
It is a SATA drive as is the main drive.
Winston
On Tue, 2007-03-27 at 09:54 -0600, Ciarlotta, Aaron wrote:
That spare hard drive wouldn't happen to be an external USB or Firewire,
would it?
Aaron Ciarlotta
Linux/UNIX Systems Guru
-Original Message-
From: [EMAIL
Winston Chan wrote:
I had been running BackupPC on an Ubuntu computer for several months to
back up the computer to a spare hard drive without problems. About the time
I added a new host (a Windows XP computer using Samba), I started getting
the following behavior:
BackupPC backs both hosts
I have a bit of hard data to offer on this subject, as I recently switched a
backup from tar+ssh (over Cygwin) to rsyncd.
The BackupPC server and the client are on the same physical LAN and connect
to each other via a 192.168 address. All the cabling and switches support
100 Mbps full-duplex communication, and
I found a strange characteristic of IDE drives.
When I connected a CD-ROM and an IDE hard drive on the same channel, the data
transfer rate reported by hdparm -t was about 37 MB/s.
When I removed the CD-ROM from that channel, the transfer rate rose to about 60 MB/s.
I have seen this on 4 or 5 Linux PCs.
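A rough way to reproduce the measurement (device name is an example; needs root, and the drive layout is whatever your machine has):

```shell
# With the CD-ROM sharing the IDE channel with the hard drive:
hdparm -t /dev/hda    # buffered read timing

# Power down, move the CD-ROM to the other channel, then repeat
# the same timing run with the hard drive alone on its channel:
hdparm -t /dev/hda
```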
--
Nilesh Vaghela
Winston writes:
I hadn't thought about the file system being full. After checking just
now, this is not the answer. /var/lib has 48G available on my main hard
drive. /var/lib/backuppc, on which the spare hard drive is mounted, has
59G available.
The directory /var/lib/backuppc/log is
Les Mikesell wrote:
Evren Yurtesen wrote:
#    Type   #Files   Size/MB   MB/sec   #Files   Size/MB   #Files   Size/MB
245  full   152228   2095.2    0.06     152177   2076.9    108      18.3
On Linux, with a RAID setup, async I/O, etc., people are getting
slightly better results. I think UFS2 is just fine. I
David Rees wrote:
On 3/27/07, Evren Yurtesen [EMAIL PROTECTED] wrote:
David Rees wrote:
Evren, I didn't see that you mentioned a wall clock time for your
backups? I want to know how many files are in a single backup, how
much data is in that backup and how long it takes to perform that
nilesh vaghela wrote:
I found a strange characteristic of IDE drives.
When I connected a CD-ROM and an IDE hard drive on the same channel, the data
transfer rate reported by hdparm -t was about 37 MB/s.
When I removed the CD-ROM from that channel, the transfer rate rose to about 60 MB/s.
I have seen this on 4 or 5 Linux
I have given you all the information you asked for (didn't I?), and even tried
the async option. An incremental backup of one machine still took about 300 minutes.
The machine is working fine. I was previously using another backup program which
backed up the same machines much faster. So I don't think