...any log that will tell me what it's transferring at this
moment (or if anything)?
Any suggestions?
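(A hedged pointer rather than a definitive answer: the per-host transfer
logs live under the pool's pc/<host>/ directory, and BackupPC_zcat can
decompress them; watching the newest XferLOG file while a backup runs
gives a rough idea of what is moving. The path below assumes the Debian
default TopDir, and "myhost" is a placeholder:

/usr/share/BackupPC/bin/BackupPC_zcat /var/lib/backuppc/pc/myhost/XferLOG.z | tail
)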
--
Jeremy Mann
jer...@biochem.uthscsa.edu
University of Texas Health Science Center
Bioinformatics Core Facility
http://www.bioinformatics.uthscsa.edu
Phone: (210) 567-2672
> ...server to create the tarball,
> remember that netcat is your friend.)
Right now I've stopped the backup and am doing each directory manually.
I'm just curious if the sheer amount of data (~400 GB) I selected to
restore confused the program.
--
Jeremy Mann
jer...@biochem.uthscsa.edu
...To create the tarball, I type:
/usr/share/BackupPC/bin/BackupPC_tarCreate -h -n -1 -l -s usr /
and I get:
No language setting
BackupPC::Lib->new failed
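(A guess at the failure mode, not a confirmed diagnosis: -h, -n, and -s
each expect an argument, and "No language setting" from
BackupPC::Lib->new usually means config.pl could not be read at all,
e.g. when the command runs as root instead of the backuppc user. A
sketch, with "myhost" and the share name as placeholders:

su -s /bin/bash -c \
  '/usr/share/BackupPC/bin/BackupPC_tarCreate -h myhost -n -1 -s /usr / > /tmp/usr.tar' \
  backuppc
)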
--
Jeremy Mann
jer...@biochem.uthscsa.edu
University of Texas Health Science Center
Bioinformatics Core Facility
http://www.bioinformatics.uthscsa.edu
Just curious whether there is a bandwidth speed setting for BackupPC,
because I'm not seeing a lot of bandwidth when the backups occur. All my
servers are on gigE, and I'm not even seeing 50 Mbit speeds between the
servers and the BackupPC server.
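(As far as I know there is no global bandwidth knob in BackupPC itself;
for the rsync method the closest thing is passing rsync's own --bwlimit
through the transfer arguments, and that only caps throughput, so its
absence means the slowness lies elsewhere. A sketch of a config.pl
entry, with the limit value as an example only:

$Conf{RsyncArgs} = [
    # ...keep the existing default arguments here, then append:
    '--bwlimit=10240',  # rsync's transfer cap in KB/s; example value only
];
)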
--
Jeremy Mann
jer...@biochem.uthscsa.edu
University of Texas Health Science Center
...when I'm used to our old rsync scripts that utilized our full gigE
network.
--
Jeremy Mann
jer...@biochem.uthscsa.edu
University of Texas Health Science Center
Bioinformatics Core Facility
http://www.bioinformatics.uthscsa.edu
Phone: (210) 567-2672
...With our old rsync scripts I've seen sustained speeds of up to 95 MB/s
from all over campus. Now that I've switched to BackupPC for
deduplication, I'm seeing 1-5 MB/s on the same servers.
--
Jeremy Mann
jer...@biochem.uthscsa.edu
University of Texas Health Science Center
Bioinformatics Core Facility
...acceptable for a full backup.
Now, if you're going to tell me my hardware isn't fast enough: the
BackupPC server is a dual Opteron 2.2 GHz with 8 GB RAM and 24 300 GB
drives in a 3ware RAID5 array, so it isn't the hardware.
--
Jeremy Mann
jer...@biochem.uthscsa.edu
University of Texas Health Science Center
Jon Craig wrote:
> On Mon, Aug 17, 2009 at 5:38 PM, Les Mikesell wrote:
>> Jeremy Mann wrote:
>
>>> I'm watching a live output of Ganglia showing network usage while the
>>> backups are going. Also simple math... I just finished one full backup,
>>> ...
>> ...RAM and 24 300 GB drives in a 3ware RAID5
>> array, it isn't.
>
> Either end can limit the speed. How many concurrent runs do you do?
> Also, you should expect much faster rates if you have a few large files
> than if you have millions of tiny ones.
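The concurrency Jon mentions is controlled in the main config by
$Conf{MaxBackups} (and $Conf{MaxUserBackups} for user-requested runs). A
minimal sketch; 4 is the shipped default, and the right number depends
on how much disk and network headroom the server has:

$Conf{MaxBackups} = 4;       # how many backups may run at once
$Conf{MaxUserBackups} = 4;   # extra slots for user-requested backups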
I am retrying the same backup... the BackupPC daemons aren't even at the
top of the list.
I thought this would solve our storage problem, but I'm afraid I can't
spend more time on this. I'm going to revert to our old rsync scripts.
--
Jeremy Mann
jer...@biochem.uthscsa.edu
University of Texas Health Science Center
...add a line for each subdirectory. Would this speed it up?
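(If "add a line" means listing each subdirectory as its own share, for
the rsync method that would look like the sketch below, with hypothetical
paths; it mostly changes how the work is split up, not the per-file
overhead:

$Conf{RsyncShareName} = [
    '/export/projects',   # hypothetical subdirectories of /export
    '/export/home',
];
)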
--
Jeremy Mann
jer...@biochem.uthscsa.edu
University of Texas Health Science Center
Bioinformatics Core Facility
http://www.bioinformatics.uthscsa.edu
Phone: (210) 567-2672
...morning and saw all 28 servers were done.
--
Jeremy Mann
jer...@biochem.uthscsa.edu
University of Texas Health Science Center
Bioinformatics Core Facility
http://www.bioinformatics.uthscsa.edu
Phone: (210) 567-2672
...for approximately 20 GB of files. I
reenabled the server and added each directory in /export to the config
file. I will see what happens tomorrow when it runs tonight.
--
Jeremy Mann
jer...@biochem.uthscsa.edu
University of Texas Health Science Center
Bioinformatics Core Facility
http://www.bioinformatics.uthscsa.edu
Jeffrey J. Kosowsky wrote:
> What version of rsync are you running on the client and server?
All servers (and the BackupPC client) are running version 3.0.6.
--
Jeremy Mann
jer...@biochem.uthscsa.edu
University of Texas Health Science Center
Bioinformatics Core Facility
http://www.bioinformatics.uthscsa.edu
> ...is a no-no, as is ext3?
>
> What is the best way to set up the RAID array for BackupPC?
In my personal experience, we use RAID5 with the XFS filesystem on our
large arrays.
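(For concreteness, a minimal sketch of that layout; the device name,
mount point, and mount option are placeholders and common choices, not
necessarily exactly what we run:

mkfs.xfs /dev/sdb1
mount -o noatime /dev/sdb1 /var/lib/backuppc
)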
--
Jeremy Mann
jer...@biochem.uthscsa.edu
University of Texas Health Science Center
Bioinformatics Core Facility
http://ww