John,
The dir and database are on the same machine and memory is not a problem. I
tried a partial restore - it restores files but not recursively, meaning no
subdirectories. Then I tried restoring the subdirectory; it got that too, but
no sub-sub-directories.
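For what it's worth, in the bconsole restore tree the mark command is
documented to recurse when given a directory, so marking the directory itself
(rather than its individual files) should select every subdirectory beneath
it. A minimal session sketch - the client name and paths here are
illustrative, not from the original post:

    * restore client=client1-fd
    (pick a backup from the menu; the file-tree prompt then appears)
    cwd is: /
    $ cd /export/home
    $ mark *          (marks files AND subdirectories recursively)
    $ done            (builds and runs the restore job)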
Yudhvir
Although the CPU is pegged at 100%.
Yudhvir
Bacula-users mailing list
Did you wait until the CPU went back to low usage?
No, it stays high overnight and my patience runs out before the CPU pegging
does.
Depending on your configuration and the optimization of your database, this
could take anywhere from a few minutes to a few hours to finish.
I assume the disk /
HELP
How do I actually restore 11.6 million files from a backup job?
SETUP
Bacula 2.4.4 DIR and SD on FreeBSD 7.1; backed up 11.6 million files,
compressed into 372 GB. I am trying to restore them onto a different
system. I use bconsole to say:
BCONSOLE COMMAND
* restore client=client1-fd
Does the compression happen at the fd side or at the sd side?
* My fd side is Solaris and gzip version is: gzip 1.3.5 (2002-09-30)
* Another fd is also Solaris with the same version 1.3.5 - and it does
compression fine.
* My sd side is FreeBSD and gzip version is: FreeBSD gzip 20070711
PROBLEM
The dir, sd, and catalog are on a FreeBSD/ZFS machine and the clients are on
Sun Solaris with ZFS. One client compresses fine and the other does no
compression. Both clients' FileSets have the same compression
lines.
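One thing worth noting: Bacula's software compression (the GZIP option in the
FileSet Options block) is performed by the file daemon itself using the zlib
library, not by the system gzip binary, so the differing gzip versions on the
two clients should be irrelevant to this. For comparison, a FileSet fragment
of the sort both clients would use - the names and paths here are
illustrative:

    FileSet {
      Name = "SolarisHome"
      Include {
        Options {
          signature = MD5
          compression = GZIP6   # zlib level 6, done client-side by the fd
        }
        File = /export/home
      }
    }

Check the "Software Compression:" line in each client's job report to confirm
whether compression actually ran on each side.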
BACKING UP
a zfs partition:
05-Jun 18:08 titanic-dir JobId
bconsole 'status jobs' shows:
Terminated Jobs:
 JobId  Level  Files  Bytes  Status  Finished         Name
======================================================================
     2  Full       0      0  OK      12-Apr-09 01:41  BackupCatalog
    30
PROBLEM - I have run multiple fulls and have NEVER been able to complete a
PostgreSQL insert operation. Granted, my fileset is large - about 9.8 million
files, ~700 GB. What am I doing wrong?
SYSTEM
FreeBSD 7.1-RELEASE system, 8 GB RAM, 4 core AMD 64 processor, SUN Ultra 40
DATABASE
Postgresql version
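A starting point that has helped others with multi-million-file catalogs is
to raise PostgreSQL's very conservative defaults before the insert phase.
These values are assumptions for an 8 GB box, not a tested recipe - adjust
them to your workload and PostgreSQL version:

    # postgresql.conf fragment (illustrative values for an 8 GB host)
    shared_buffers = 1GB            # stock default is far too small
    work_mem = 64MB                 # big sorts while building restore trees
    maintenance_work_mem = 256MB    # faster index builds on the File table
    checkpoint_segments = 16        # fewer checkpoints during bulk inserts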
Michael,
This is a work in progress and I'll keep everyone posted on what my configs
are once I know something works. I have recompiled Bacula with batch insert
mode turned on.
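For anyone following along, batch insert is a compile-time option; the
relevant part of the build looks something like the following (the
--enable-batch-insert switch is the actual configure flag; the other flags
and paths you need will depend on your install):

    ./configure --with-postgresql --enable-batch-insert
    make && make install

Then restart the director so the rebuilt binary is the one actually running.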
Yudhvir
===
On Sun, Nov 30, 2008 at 9:18 AM, Michael Galloway m...@ornl.gov wrote:
On Thu, Nov 27, 2008 at 04:03:50PM
SWAGGER
Yes, I mean to swagger here. I am migrating to Bacula, and just the mail and
user directories are about 690 GB. I expect the full dataset we back up to
end up in the 30 TB range.
REASON TO SWAGGER
I am looking for anyone else in the same boat. Looking for hardware which
will support
MY SITUATION
I can take your megabytes and shame you with my 9,868,868 mostly Maildir
files and the 690.8 GB of space they take up. Take that! It took 25 hours to
transfer and is currently indexing. Before I ramble on, here is some
configuration info:
CONFIGURATION
dir Version: 2.4.2 (26 July 2008),
I have over 200 users with about 20 TB of data that we back up. And I have
a problem...
THE PROBLEM
The concept of backup is starting to lose its meaning with desktops
sporting 1.5 TB drives. How can we keep abreast of backing up ever-
increasing drive sizes in workstations?
CENTRALIZED
AWESOME: Total newbie. Great software! I just came up with an alternative to
"it comes in the night..." - works great, less filling.
VERSIONS: On FreeBSD 7.0, I installed Bacula 2.2.5 and am trying to veer away
from Networker.
225 CLIENTS: About 100 of my users are on Linux and 100 on OS X with a
couple