On 12/21/2014 11:17 AM, D S wrote:
Re: [Bacula-users] Bacula MySQL Aggressive Tuning
Hello,
just to add one more related question/problem to this thread:
- using MySQL/MyISAM
- about 15M files from several clients
- 2.5-3M files per client
- the problem: the Director is slow inserting attributes
On 12/22/14 11:01, Josh Fisher wrote:
MyISAM can be quick at inserting records at the end of a table, but it has
both table-level locking and a single key-buffer lock that cause contention,
so it is much, much slower at inserts that are not at the end of a table.
MyISAM also does not have a change buffer.
For us, even InnoDB was way too slow. It took hours to insert attributes.
We switched to PostgreSQL three years ago. Since then, no more performance
problems.
Also, creating the tree for restores was orders of magnitude faster with
PostgreSQL. It depends heavily on the number of files and attributes in the
catalog.
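For those staying on MySQL/InnoDB, attribute-insert throughput is usually governed by a handful of server settings. A minimal, hypothetical my.cnf sketch: the variable names are standard InnoDB options, but the values are illustrative assumptions, not measured recommendations.

```ini
# Hypothetical my.cnf fragment -- values are illustrative, size them to your host.
[mysqld]
innodb_buffer_pool_size        = 2G    # cache for data/index pages; often 50-70% of RAM on a dedicated DB host
innodb_log_file_size           = 512M  # larger redo logs absorb heavy insert bursts
innodb_flush_log_at_trx_commit = 2     # flush redo log once per second; faster commits, ~1s durability window
```

The last setting is the usual trade-off knob: a crash can lose up to about a second of transactions, which is often acceptable for a backup catalog that can be re-scanned from media.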
I have a requirement to keep 6 monthly full backups and a month of incrementals
of the same fileset, a directory containing local daily mysqlbackups of the
last 2 days.
Do I need to use 2 different jobs with different schedules to do this?
If so, how do I stop the full from running on the same day as the incremental?
This is what I've done: I use 2 different jobs with 2 different schedules,
writing to 2 different pools on the same storage device, since it's the same
client.
On the 1st of every month I run a full.
On days 2-31 I run an incremental, though the very first backup (on the 2nd)
is upgraded to a full because no prior full exists.
What about the File Retention and Job Retention set in the Client resource?
Should I not set those?
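The two-job scheme described above can be sketched as bacula-dir.conf resources. This is only an illustration: the resource names, run times, and retention values are assumptions, and the numeric day-range Run syntax follows the Schedule examples in the Bacula manual.

```conf
# Hypothetical bacula-dir.conf fragment -- names, times, and retentions are made up.
Schedule {
  Name = "MonthlyFull"
  Run = Level=Full Pool=MonthlyFullPool on 1 at 23:05
}
Schedule {
  Name = "DailyIncremental"
  Run = Level=Incremental Pool=DailyIncPool on 2-31 at 23:05
}

# Retention is easiest to control per Pool via Volume Retention.
Pool {
  Name = MonthlyFullPool
  Pool Type = Backup
  Volume Retention = 190 days   # roughly 6 monthly fulls
  AutoPrune = yes
  Recycle = yes
}
Pool {
  Name = DailyIncPool
  Pool Type = Backup
  Volume Retention = 31 days    # about a month of incrementals
  AutoPrune = yes
  Recycle = yes
}
```

Because each level writes to its own pool, the fulls and the incrementals can age out on independent clocks without one pruning the other's volumes.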
-Original Message-
From: Heitor Faria [mailto:hei...@bacula.com.br]
Sent: Monday, December 22, 2014 2:54 PM
To: Polcari, Joe (Contractor)
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Disk Based Scheduling question
Ahh! Let me configure that and think about it. It may be what I need.
Also, do I not define the Pool spec in the Job?
The Pool directive is always required by the Job resource. However, it is
just the default used for ad hoc job runs, since a Pool specified in the
Schedule will override it.
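That default-plus-override relationship can be sketched as follows (hypothetical resource names; the Schedule's Run line would carry its own Pool= as in the Bacula manual):

```conf
# Hypothetical Job resource -- the Pool here is only the fallback for ad hoc runs;
# a Pool named on the Schedule's Run line overrides it when the schedule fires.
Job {
  Name = "MySQLDumps"
  Type = Backup
  Client = db1-fd
  FileSet = "MySQLDumpSet"
  Schedule = "DailyIncremental"   # its Run lines may name their own Pool
  Pool = DefaultPool              # used when the job is started by hand in bconsole
  Storage = File
  Messages = Standard
}
```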
Regards,
Heitor Medrado de
I'm looking for feedback on a scheme for dynamic filesets.
We backup to LTO4, with a schedule where each fileset gets a full every
2 months, a differential weekly, and incrementals nightly. I'm willing
to change the differential backup to monthly (alternating with Fulls).
-The Problem-
As our data
Hi,
I had a major crash that destroyed my Bacula database and my bootstrap file.
The only things I have to start recovering from are my Bacula tapes.
I tried to scan them all with a bscan.bsr file which looks like:
Volume=KF3969L3
MediaType=LTO-3
Slot=1
Volume=ATT477L3
MediaType=LTO-3
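For anyone rebuilding a catalog the same way: a bscan bootstrap file is just one Volume/MediaType stanza per tape, with Slot optional and only meaningful when an autochanger is involved. A commented version of the file above, using the two volume names from the post:

```conf
# bscan.bsr -- one stanza per tape to be scanned
Volume=KF3969L3
MediaType=LTO-3
Slot=1

Volume=ATT477L3
MediaType=LTO-3
```

The catalog is then rebuilt with something like `bscan -s -m -v -b bscan.bsr -c /etc/bacula/bacula-sd.conf /dev/nst0` (the paths and the tape device are assumptions for a typical install): `-s` creates the database records and `-m` updates the Media records. Note that the empty database and its tables must already exist (e.g. re-created with the make_bacula_tables script) before bscan can populate them.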