Mandi! Bill Arlofski via Bacula-users
In chel di` si favelave...
> First, you are passing them incorrectly. Just quote the whole line like:
Bingo! For the benefit of Google searchers: because the level can contain spaces, the correct line is:
Run After Job = "/etc/bacula/scripts/deleteFailedJobs %c \"%l\""
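For completeness, the same command can also be written in the long RunScript form, where the quoting is sometimes easier to get right. A sketch, assuming the script path above:

```conf
# Sketch only: equivalent long form of the Run After Job line above.
# %c expands to the client name, %l to the job level (which may
# contain spaces, hence the escaped quotes).
RunScript {
  RunsWhen = After
  RunsOnClient = no
  Command = "/etc/bacula/scripts/deleteFailedJobs %c \"%l\""
}
```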
>
On 5/31/24 8:56 AM, Marco Gaiarin wrote:
If you *really* want to automatically delete failed jobs (I personally don't
think this is a good idea), you can use a
RunScript in an Admin type Job like:
Why in an 'Admin' job? I've tried to add something like:
Run After Job =
Mandi! Martin Simmons
In chel di` si favelave...
> Yes, that is how volumes work. Bacula can only append at the end of a volume,
> so the volume size would increase forever if it could switch back to Append
> after purging some jobs. To reuse a volume, it needs to be recycled, which
> only
Mandi! Bill Arlofski via Bacula-users
In chel di` si favelave...
> Why not set in your Pool(s) `MaximumVolumeJobs = 1`
Ah! Brilliant! I never thought about that!
> If you prefer to have volumes stay in a pool they were initially created in
> forever, ignore that previous paragraph. :)
I've
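A minimal Pool sketch of that suggestion (pool name and retention value are illustrative, not from this thread):

```conf
Pool {
  Name = OneJobPerVolumePool    # illustrative name
  Pool Type = Backup
  Maximum Volume Jobs = 1       # volume is marked Used after a single job
  Recycle = yes                 # volume can be reused once its jobs are pruned
  AutoPrune = yes
  Volume Retention = 2 weeks    # illustrative value
}
```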
On 5/24/24 2:39 AM, Marco Gaiarin wrote:
I suspect that the 'job counter' gets reset if and only if all jobs in a
volume get purged; this leads me to think that my configuration simply does
not work in a real situation, because sooner or later jobs get 'scattered'
between volumes and the virtual job
> On Fri, 24 May 2024 10:39:06 +0200, Marco Gaiarin said:
>
> I suspect that the 'job counter' gets reset if and only if all jobs in a
> volume get purged; this leads me to think that my configuration simply does
> not work in a real situation, because sooner or later jobs get 'scattered'
> between volumes and the virtual consolidation job stops working, so
Mandi! Josh Fisher via Bacula-users
In chel di` si favelave...
> I use the following in my query.sql file:
OK, thanks! Now I can verify that:
*list media pool=FVG-PP-HFA3-1FilePool
On 5/9/24 10:02, Marco Gaiarin wrote:
I've set up backup jobs for some (mostly Windows) client computers; by
'client' I mean 'not always on'.
...
2) Is there some way I can get the 'jobs in volume X'? I can query the
volumes used by a job, but I've not found a way to query the jobs stored on a volume
I use
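One way to get the 'jobs in volume X' is a catalog query over the standard Job, JobMedia, and Media tables. A sketch in query.sql style; this is not necessarily the entry referred to above:

```sql
:List jobs stored on a given volume
*Enter Volume name:
SELECT DISTINCT Job.JobId, Job.Name, Job.Level, Job.StartTime
  FROM Job, JobMedia, Media
 WHERE Media.VolumeName = '%1'
   AND JobMedia.MediaId = Media.MediaId
   AND Job.JobId = JobMedia.JobId
 ORDER BY Job.JobId;
```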
I've set up backup jobs for some (mostly Windows) client computers; by
'client' I mean 'not always on'.
I've set up a job like this:
Job {
Name = FVG-SV-EEG
JobDefs = DefaultJob
Storage = SVPVE3FileMulti
Pool = FVG-SV-EEGFilePool
Messages =
Hello Marco,
Virtual Full jobs will need at least one device for reading and another
device for writing.
On Sat, Nov 19, 2022 at 8:12 AM Marco Gaiarin
wrote:
>
> > 'Storage daemon didn't accept Device "FileStorage" command.'?!
>
> OK, i've tried to create another pool, set that pool as
> 'Storage daemon didn't accept Device "FileStorage" command.'?!
OK, I've tried to create another pool, set that pool as 'NextPool', and
created a volume, but nothing changed.
The VirtualFull job is still stuck, now with:
Jobs waiting to reserve a drive:
3603 JobId=1890 File device "FileStorage"
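For a VirtualFull to run against file storage, the SD generally needs a second Device resource so one can be reserved for reading while the other writes. A sketch with illustrative names and path:

```conf
# Sketch: two file devices sharing one directory, so a VirtualFull
# can read old jobs on one device while writing the consolidated
# job on the other. Names and the path are illustrative.
Device {
  Name = FileStorage1
  Media Type = File
  Archive Device = /srv/bacula/volumes
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
Device {
  Name = FileStorage2
  Media Type = File
  Archive Device = /srv/bacula/volumes
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```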
Mandi! Marco Gaiarin
In chel di` si favelave...
> job still running, seems stuck, tomorrow some more info. ;-)
OK, the job is definitively stuck; this morning my mailbox was full of:
10-Nov 09:46 pppve3-sd JobId 1725: JobId=1725, Job
VDMTMS1.2022-11-09_18.27.16_45 waiting to reserve a device.
In storage
Mandi! Josh Fisher via Bacula-users
In chel di` si favelave...
>> 02-Nov 19:06 lnfbacula-dir JobId 1596: Start Virtual Backup JobId 1596,
>> Job=VDMTMS1.2022-11-02_19.06.49_02
>> 02-Nov 19:06 lnfbacula-dir JobId 1596: Warning: This Job is not an
>> Accurate backup so is not equivalent to
On 11/2/22 14:13, Marco Gaiarin wrote:
Mandi! Marco Gaiarin
In chel di` si favelave...
Pool definition:
Pool {
Name = VDMTMS1FilePool
Pool Type = Backup
Volume Use Duration = 1 week
Recycle = yes # Bacula can automatically
I think "Warning: Insufficient Backups to Keep." is an error. You can debug
it with the "setdebug" command or the "-d500" execution option.
Otherwise you can check the source code.
R.
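For example (the debug level is illustrative; higher values are more verbose):

```
# From bconsole:
*setdebug level=500 dir

# Or start the Director in the foreground with debug output
# (adjust the config path for your installation):
bacula-dir -d500 -f -c /etc/bacula/bacula-dir.conf
```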
Wed, 2 Nov 2022 at 22:11, Marco Gaiarin wrote:
> Mandi! Marco Gaiarin
> In chel di` si favelave...
>
> > Pool
Mandi! Marco Gaiarin
In chel di` si favelave...
> Pool definition:
> Pool {
> Name = VDMTMS1FilePool
> Pool Type = Backup
> Volume Use Duration = 1 week
> Recycle = yes # Bacula can automatically
> recycle Volumes
> AutoPrune =
Mandi! Josh Fisher via Bacula-users
In chel di` si favelave...
> If the client is on the other end of a 10Mbps link, then the options are
> to make the initial full backup over the slow link or temporarily move
> the client to the site where Dir/SD runs just to make the initial full
>
Mandi! Radosław Korzeniewski
In chel di` si favelave...
> I never used rsnapshot with tapes and I'm very curious how do you "put a
> rsnapshot backup on tape"?
> How? What command do you use? Do you use any external tools for that, i.e. a
> tar?
I simply install a bacula client on the same
Hello,
Fri, 21 Oct 2022 at 12:11, Marco Gaiarin wrote:
> Mandi! Radosław Korzeniewski
> In chel di` si favelave...
>
> > I never used rsnapshot with tapes, so I'm just curious how you manage it
> to
> > span multiple tapes?
>
> Aaa... sorry, a total misunderstanding, here!
>
> it
Mandi! Radosław Korzeniewski
In chel di` si favelave...
> I never used rsnapshot with tapes, so I'm just curious how you manage it to
> span multiple tapes?
Ahh... sorry, a total misunderstanding here!
I simply mean that, if I need a 'second-level backup', it is easy to put a
Hello,
Sun, 16 Oct 2022 at 18:31, Marco Gaiarin wrote:
> Mandi! Radosław Korzeniewski
> In chel di` si favelave...
>
> >> Ah, simply i put on tape the 'alpha.0' directory, the 'most recent
> backup'
> >> in rsnapshot lingo.
> >> It is merely a folder with all the files within.
> > So you
On 10/16/22 12:21, Marco Gaiarin wrote:
Mandi! Radosław Korzeniewski
In chel di` si favelave...
...
I do not understand your requirements. What is an "initial backup" you want to
make? Are you referring to the first Full backup which has to be executed on
the client?
Exactly. VirtualFull
Mandi! Radosław Korzeniewski
In chel di` si favelave...
>> Ah, simply i put on tape the 'alpha.0' directory, the 'most recent backup'
>> in rsnapshot lingo.
>> It is merely a folder with all the files within.
> So you are using a tar command for that, right? Then how do you restore data
>
Hello,
Mon, 10 Oct 2022 at 14:32, Marco Gaiarin wrote:
> Mandi! Radosław Korzeniewski
> In chel di` si favelave...
>
> > I'm not familiar with rsnapshot, but I'm curious how you manage rsnapshot
> > archives on tape libraries which span multiple tape cartridges (other
> media)?
>
> Ah,
Mandi! Radosław Korzeniewski
In chel di` si favelave...
> I'm not familiar with rsnapshot, but I'm curious how you manage rsnapshot
> archives on tape libraries which span multiple tape cartridges (other media)?
Ah, I simply put on tape the 'alpha.0' directory, the 'most recent backup'
in
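In Bacula terms, putting 'alpha.0' on tape can be as simple as a FileSet that points at the newest rsnapshot directory. A sketch with an illustrative path:

```conf
FileSet {
  Name = RsnapshotLatest           # illustrative name
  Include {
    Options {
      signature = MD5
    }
    File = /srv/rsnapshot/alpha.0  # rsnapshot's most recent snapshot
  }
}
```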
Hello,
Tue, 27 Sep 2022 at 22:41, Marco Gaiarin wrote:
>
> Defining a backup strategy, we are evaluating our previous different
> policies, mostly based on Bacula and rsnapshot.
>
> Bacula here is used with different media (tape, tape library, virtual
> changer with RDX disks, ...) but we
While defining a backup strategy, we are re-evaluating our various previous
policies, mostly based on Bacula and rsnapshot.
Bacula here is used with different media (tape, tape library, virtual
changer with RDX disks, ...) but we are considering for this a 'File'
storage type.
The goal is to have a
Hello.
I know this question came up in the past, but perhaps things have
changed in the last years...
I've got a setup where i backup a couple of servers to a NAS (using
Full/Diff/Inc schedule) daily.
Then once a week I need to "copy" the most recent backup to an external
HD (so to have an
Hello,
On 07/15/2016 01:19 PM, Wanderlei Huttel wrote:
> Hello Eric
>
> Thanks for response, but I was looking in src/jcr.h .code and I found
> this below, so I thought bacula should save the correct Level code.
>
> /* Backup/Verify level code. These are stored in the DB */
> #define L_FULL
Hello Eric
Thanks for the response, but I was looking in the src/jcr.h code and I found this
below, so I thought bacula should save the correct Level code.
/* Backup/Verify level code. These are stored in the DB */
#define L_FULL 'F' /* Full backup */
#define L_INCREMENTAL
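To check which level code the catalog actually stored, one can query the Job table directly (a sketch against the standard catalog schema; per this thread, 'F' is Full and 'f' is Virtual Full):

```sql
SELECT JobId, Name, Level, JobStatus, StartTime
  FROM Job
 ORDER BY JobId DESC
 LIMIT 10;
```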
Hello,
2016-07-14 20:33 GMT+02:00 Wanderlei Huttel :
> I've made some tests to check how it works VirtualFull backup, and it
> looks worked fine, but the level of backup was saved in the database with
> "F" (Full) not "f" (Virtual Full).
>
> I'm running Bacula 7.4.2
>
Hello Wanderlei,
On 07/14/2016 08:33 PM, Wanderlei Huttel wrote:
> I've made some tests to check how it works VirtualFull backup, and it
> looks worked fine, but the level of backup was saved in the database
> with "F" (Full) not "f" (Virtual Full).
>
> I'm running Bacula 7.4.2
>
> Is this
I've made some tests to check how VirtualFull backup works, and it looks
like it worked fine, but the level of the backup was saved in the database as "F"
(Full), not "f" (Virtual Full).
I'm running Bacula 7.4.2
Is this correct?
Best Regards
*Wanderlei Hüttel*
http://www.huttel.com.br
Hello Everyone!
I'm trying to use bacula Virtual Full in my lab environment; unfortunately it
doesn't work yet and I don't know how to resolve it.
My current status is: the VirtualFull job reads the Full pool + Incremental pool
correctly and identifies all files to make a Virtual Full, but it doesn't
continue; it's stuck
Hi!,
Xabier, mates, could anyone confirm this??
Thank you so much,
On 06/05/2014, at 14:14, Egoitz Aurrekoetxea ego...@ramattack.net wrote:
So, does this all finally mean that you can do a virtual full job and no
migration is needed anymore? Not even
for doing a new virtual full
Hi Egoitz,
I cannot confirm the fix, because I am running version 5.2.12 on my
server, but I think it should be easy to test if you have a running
bacula server.
Best Regards,
Xabier
On 07/05/14 09:30, Egoitz Aurrekoetxea wrote:
Hi!,
Xabier, mates, could anyone confirm this??
Thank you
Good afternoon :) ,
I tested it in that version and it didn't work… that was the reason for the
question… perhaps it's done in 7.0?
Regards,
On 07/05/2014, at 14:19, Xabier Elkano xelk...@hostinet.com wrote:
Hi Egoitz,
I cannot confirm the fix, because I am running version 5.2.12 on my
Hi,
happy to read that this restriction has been removed :-)
I have been running virtual fulls for almost three years and I'm very happy
with them. I had to use them because my servers had a very high load
when running full backup jobs. I did a little trick to manage the
migration from virtual pool
So, does this all finally mean that you can do a virtual full job and no
migration is needed anymore? Not even
for doing a new virtual full or performing some kind of restoration?
Best regards,
On 06/05/2014, at 12:40, Xabier Elkano xelk...@hostinet.com wrote:
Hi,
happy to read
Good morning Kern,
So you mean that even if you create a virtual full in a different pool, there's no
need to migrate it later for operational purposes, like using it for a restore or
a new virtual full? Is that what you mean?
Best regards,
On 04/05/2014, at 14:51, Kern Sibbald k...@sibbald.com
Hello,
I believe that I have removed the restriction on using a different Pool.
Perhaps it is not well documented, in which case if you make it work, as
I think a lot of people have done, a patch for the manual would be
appreciated.
Kern
On 04/30/2014 12:45 PM, Egoitz Aurrekoetxea wrote:
Good
Anyone who knows this please?
On 30/04/2014, at 12:45, Egoitz Aurrekoetxea ego...@ramattack.net wrote:
Good morning,
I’m at this moment doing real full jobs of my servers. I have some slowness
(in backup) with some servers due to their activity. I have tested virtual
full jobs
and
Good morning,
I’m at this moment doing real full jobs of my servers. I have some slowness (in
backup) with some servers due to their activity. I have tested virtual full jobs
and they work like a charm :) but I have one problem; normally I schedule a real
full job and I’m done… as virtual full jobs
Hi,
How can I enable compression for VirtualFull backups? Thank you.
azur
Hello,
2013/3/25 azurIt azu...@pobox.sk
Hi,
how can i enable compression for VirtualFull backups? Thank you.
You cannot enable compression for a Job. You can enable compression for a
FileSet. If you want to have a VirtualFull compressed, enable it for the
FileSet.
best regards
--
Radosław
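A sketch of what that looks like (the FileSet name and path are illustrative):

```conf
FileSet {
  Name = CompressedSet
  Include {
    Options {
      signature = MD5
      compression = GZIP   # software compression is a FileSet option
    }
    File = /etc
  }
}
```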
Hi,
how can i enable compression for VirtualFull backups? Thank you.
You cannot enable compression for a Job. You can enable compression for a
FileSet. If you want to have a VirtualFull compressed enable it for a
fileset.
I did this but VirtualFull backups are not compressed, only
Hello,
2013/3/25 azurIt azu...@pobox.sk
Hi,
how can i enable compression for VirtualFull backups? Thank you.
You cannot enable compression for a Job. You can enable compression for a
FileSet. If you want to have a VirtualFull compressed enable it for a
fileset.
I did this but
Hello,
2013/3/25 azurIt azu...@pobox.sk
Thank you, this will be the case as I'm *never* doing Full backups (they
take ages to complete).
If you are using Virtual Full Backups then you must have run at least one
Full Backup on your client. If your VF backups are not compressed then your
first
On Mon, 25 Mar 2013 14:26:07 +0100
azurIt azu...@pobox.sk wrote:
Thank you, this will be the case as I'm *never* doing Full backups
(they take ages to complete).
If you are using Virtual Full Backups then you must have run at least
one Full Backup on your client. If your VF backups are not
Thank you, this will be the case as I'm *never* doing Full backups (they take
ages to complete). Is there a way to compress already backed-up files in
volumes?
Store the volumes on a filesystem that compresses like btrfs.
John
Thank you, this will be the case as I'm *never* doing Full backups (they take
ages to complete). Is there a way to compress already backed-up files in
volumes?
Store the volumes on a filesystem that compresses like btrfs.
Interesting idea but it will then do more compression than it's
Thank you, this will be the case as I'm *never* doing Full backups
(they take ages to complete).
If you are using Virtual Full Backups then you must have run at least
one Full Backup on your client. If your VF backups are not
compressed then your first Full backup wasn't compressed either. To
Interesting idea, but it will then do more compression than is needed - for
example, Bacula needs to compress only new/modified files. In the way you
suggested, the kernel will need to compress everything again with every
VirtualFull backup.
Most of the filesystems that compress do that on a
Hi Christian,
On Fri, Feb 22, 2013 at 09:23:05AM +0100, Masopust, Christian wrote:
I've now defined the necessary two pools, but I am unsure whether it is good
practice to create a loop in these pools:
I googled a lot and found some concerns that this configuration can cause a
deadlock; is
Dear all,
I'm currently completely redefining my Bacula setup and plan now to use Virtual
Full backups
for some of my bigger servers.
I've now defined the necessary two pools, but I am unsure whether it is good
practice to create a loop in these pools:
Pool {
Name = Pool1
Next Pool =
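For reference, the usual two-pool VirtualFull arrangement looks roughly like this (names are illustrative, not the poster's actual configuration):

```conf
Pool {
  Name = DiskPool               # where Incremental/Diff jobs land
  Pool Type = Backup
  Storage = File1
  Next Pool = ConsolidatePool   # VirtualFull output goes here
}
Pool {
  Name = ConsolidatePool
  Pool Type = Backup
  Storage = File2               # a second device, so read/write can overlap
}
```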
I think we both are using bacula in a very similar way. My offsite storage is
only to be used in case of catastrophe. I too make a copy of the catalog and
bacula configuration files to the external HD. I also add an Ubuntu live CD
image just in case. Thanks for your script. I'm confident I'll
I was thinking of doing something similar, although manually, since my scripting
skills are a little bit rusty. I would love to see your script if you don't
mind.
-Original Message-
From: James Harper [mailto:james.har...@bendigoit.com.au]
Sent: Tuesday, July 10, 2012 5:14 PM
To: Jose
I have backup_catalog_pre and backup_catalog_post scripts. The _pre script
just calls the catalog backup script so it can be backed up by the job. The
_post script is on the end of this email. It gathers any volumes associated
with jobs in the offsite pool and purges them. Then for good
On 7/9/2012 4:38 PM, Jose Blanco wrote:
I have a problem with VirtualFull and I don’t know whether this
behavior is by design or I messed up my configuration:
I’m trying to run a Virtualfull backup but bacula is trying to read
the last virtualfull (which is stored offsite) so it fails. I
I have a problem with VirtualFull and I don't know whether this behavior is by
design or I messed up my configuration:
I'm trying to run a Virtualfull backup but bacula is trying to read the last
virtualfull (which is stored offsite) so it fails. I thought bacula would
construct
a
On 30/01/12 09:22, Tom wrote:
Hello,
this is my setup. I defined 2 filebased storage devices, file-1 and
file-2. They are mapped to the folder /bacula-stor/file-1 and
/bacula-stor/file-2. I defined 2 pools, pool-1 and pool-2, that are
mapped to the storage devices. Pool-1 has its Next
I eventually copied the volume files into the other directory and
changed the definition of the second file device to that folder. This
works, as it is now able to locate the volume. For now it will work, but
if the data grows I will have to come up with a different solution, as
the volume will
Hello,
this is my setup. I defined 2 filebased storage devices, file-1 and
file-2. They are mapped to the folder /bacula-stor/file-1 and
/bacula-stor/file-2. I defined 2 pools, pool-1 and pool-2, that are
mapped to the storage devices. Pool-1 has its Next Pool directive set to
pool-2.
I have
Hello,
I have a small question: when a VirtualFull is made, will the data pass through
the director? The question is, in the case where the storage is on a
low-bandwidth link, will data move from storage to director and back to storage,
or storage to storage?
Miikael
On 01.12.2011 13:42, Miikael Havelock Nilson wrote:
Hello,
I have a small question: when a VirtualFull is made, will the data pass through
the director? The question is, in the case where the storage is on a
low-bandwidth link, will data move from storage to director and back to storage,
or storage to storage?
data
question:
I have a Base job for a server.
Full backups based on the Base Job run successfully.
Incrementals run also fine.
When I run a VirtualFull job the end result is not a Full backup but an
Incremental one, based (I guess) on the last Incrementals.
If I try to restore from the
OK, this seems to be the solution:
http://www.bacula.org/manuals/en/concepts/concepts/Migration_Copy.html
It would be nice to mention this section in the manual ;)
greets.
Am 19.04.2011 13:37, schrieb J. Echter:
Hi,
i try to set up VirtualFull backups.
I have 3 pools, inc- diff- and full
what do i have to put into my pool definitions?
as far as my understanding is i can use NextPool = full in my full pool
directive, but if i use this bacula stays with
Hi,
I'm trying to set up VirtualFull backups.
I have 3 pools: inc-, diff-, and full.
What do I have to put into my pool definitions?
As far as I understand, I can use NextPool = full in my full pool
directive, but if I use this, bacula stays at 'waiting on storage file'.
greets
juergen
Hi,
we've 2 tape libraries (tl) with one media changer for each tl.
tl-1 type LTO-4
tl-2 type LTO-5
At the moment we use LTO-4 tapes for both tl.
In bacula we've configured them using the following media type setting:
tl-1: LTO4
tl-2: LTO5
Our differential and incremental jobs use tl-1 and
On 2011-01-06 23:00, James Harper wrote:
Hi
Anyone doing VirtualFull backups using tapedrives only?
Can they shortly describe their setup? pro/con/etc?
I'm not, but can tell you it's probably not a good idea - you'd end up
with extra wear and tear on your tape drives, as well as greatly
On 1/7/2011 7:02 PM, penne...@sapo.pt wrote:
Quoting Jim Barber jim.bar...@ddihealth.com:
Yes I am.
I am using a TL2000 tape library with two drives.
The technique can't work if you only have one tape drive.
I'm taking incremental backups Mon-Fri.
Then after
Quoting Jim Barber jim.bar...@ddihealth.com:
Yes I am.
I am using a TL2000 tape library with two drives.
The technique can't work if you only have one tape drive.
I'm taking incremental backups Mon-Fri.
Then after the incremental backups are finished on Friday I
On 8/01/2011 8:02 AM, penne...@sapo.pt wrote:
PS: Do you know if replacing full with virtualfull is possible for
standard disk-based storage daemons?
I thought that one cannot read and write simultaneously from the same
SD, so you cannot create a VirtualFull from the previous VirtualFull
I am using a TL2000 tape library with two drives.
The technique can't work if you only have one tape drive.
I'm taking incremental backups Mon-Fri.
Then after the incremental backups are finished on Friday I consolidate
them into a VirtualFull backup.
For a VirtualFull backup to work it
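That rotation can be expressed as a Schedule; a sketch with illustrative times:

```conf
Schedule {
  Name = WeeklyConsolidate
  Run = Level=Incremental mon-fri at 23:05
  Run = Level=VirtualFull sat at 06:00   # consolidate the week's jobs
}
```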
Hi
Anyone doing VirtualFull backups using tape drives only?
Can you briefly describe your setup? Pros/cons/etc.?
Thanks.
--
Jesper
Hi
Anyone doing VirtualFull backups using tapedrives only?
Can they shortly describe their setup? pro/con/etc?
I'm not, but can tell you it's probably not a good idea - you'd end up
with extra wear and tear on your tape drives, as well as greatly reduced
performance while the target tape
On 7/01/2011 3:31 AM, Jesper Krogh wrote:
Hi
Anyone doing VirtualFull backups using tapedrives only?
Can they shortly describe their setup? pro/con/etc?
Thanks.
Yes I am.
I am using a TL2000 tape library with two drives.
The technique can't work if
Hello,
Graham Keeling wrote:
Hello,
I now believe that the 'taking hours' problem that I was having was
down to having additional indexes on my File table, as Eric suggested.
I am using mysql-5.0.45.
I had these indexes:
JobId
JobId, PathId, FilenameId
PathId
FilenameId
Now
Hello,
I now believe that the 'taking hours' problem that I was having was
down to having additional indexes on my File table, as Eric suggested.
I am using mysql-5.0.45.
I had these indexes:
JobId
JobId, PathId, FilenameId
PathId
FilenameId
Now I have these indexes:
JobId
JobId, PathId,
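On MySQL, reverting to that index set means dropping the extra single-column indexes. A sketch, assuming the indexes are named after their columns as listed above:

```sql
-- Remove the extra single-column indexes that were slowing the
-- accurate/VirtualFull query on the File table.
DROP INDEX PathId ON File;
DROP INDEX FilenameId ON File;
```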
On Thu, Apr 08, 2010 at 12:29:05PM -0700, ebollengier wrote:
Graham Keeling wrote:
On Thu, Apr 08, 2010 at 07:44:14AM -0700, ebollengier wrote:
Hello,
Graham Keeling wrote:
Hello,
I'm still waiting for my test database to fill up with Eric's data
(actually,
Graham Keeling wrote:
I don't understand this at all.
If you cannot trust the JobIds or FileIds in the File table, then the
postgres
query is also broken. The postgres query doesn't even mention JobTDate.
In fact, the postgres query is using StartTime to do the ordering.
And
On Fri, Apr 09, 2010 at 05:27:44AM -0700, ebollengier wrote:
I'm really thinking that the problem is on the MySQL side (bad version
perhaps), or on your
modifications (my tests show that with a FilenameId, PathId index, results
are 10 times slower than
with the default indexes)
What
On Wed, Apr 07, 2010 at 09:58:51AM -0700, ebollengier wrote:
ebollengier wrote:
Graham Keeling wrote:
On Wed, Apr 07, 2010 at 08:22:09AM -0700, ebollengier wrote:
I tweaked my test to compare both queries, and it shows no difference
with
and without base job part... If you
Hello,
I'm still waiting for my test database to fill up with Eric's data (actually,
it's full now, but generating the right indexes is taking lots of time).
But, I have another proposed solution, better than the last one I made.
My previous solution was still taking a very very long time for
Hello,
Graham Keeling wrote:
Hello,
I'm still waiting for my test database to fill up with Eric's data
(actually,
it's full now, but generating the right indexes is taking lots of time).
But, I have another proposed solution, better than the last one I made.
My previous solution
On Thu, Apr 08, 2010 at 07:44:14AM -0700, ebollengier wrote:
Hello,
Graham Keeling wrote:
Hello,
I'm still waiting for my test database to fill up with Eric's data
(actually,
it's full now, but generating the right indexes is taking lots of time).
But, I have another
Graham Keeling wrote:
On Thu, Apr 08, 2010 at 07:44:14AM -0700, ebollengier wrote:
Hello,
Graham Keeling wrote:
Hello,
I'm still waiting for my test database to fill up with Eric's data
(actually,
it's full now, but generating the right indexes is taking lots of
time).
On Tue, Apr 06, 2010 at 09:01:13AM -0700, ebollengier wrote:
Hello Graham,
Hello, thanks for your reply.
Graham Keeling wrote:
Hello,
I'm using bacula-5.0.1.
I have a 2.33GHz CPU with 2G of RAM.
I am using MySQL.
I had a VirtualFull scheduled for my client.
My log says the
Graham Keeling wrote:
On Tue, Apr 06, 2010 at 09:01:13AM -0700, ebollengier wrote:
Hello Graham,
Hello, thanks for your reply.
Graham Keeling wrote:
Hello,
I'm using bacula-5.0.1.
I have a 2.33GHz CPU with 2G of RAM.
I am using MySQL.
I had a VirtualFull scheduled for my
On Wed, 7 Apr 2010 12:40:24 +0100, Graham Keeling said:
On Tue, Apr 06, 2010 at 09:01:13AM -0700, ebollengier wrote:
Hello Graham,
Hello, thanks for your reply.
Graham Keeling wrote:
Hello,
I'm using bacula-5.0.1.
I have a 2.33GHz CPU with 2G of RAM.
I am using MySQL.
On Wed, Apr 07, 2010 at 02:51:42PM +0100, Martin Simmons wrote:
Does it still run quickly if you keep that Job.JobId IN clause but use the numbers
returned by
SELECT DISTINCT BaseJobId
FROM BaseFiles
WHERE JobId IN (22,23,31,34,42,48,52)
in place of the nested select?
In my case,
Since:
a) I am not using Base jobs
b) I am currently stuck with using MySQL
c) There is not a 'proper' fix yet
I am going to use the attached patch as a temporary solution to the problem.
Index: src/cats/sql_cmds.c
===
RCS file:
ebollengier wrote:
Graham Keeling wrote:
On Tue, Apr 06, 2010 at 09:01:13AM -0700, ebollengier wrote:
Hello Graham,
Hello, thanks for your reply.
Graham Keeling wrote:
Is there anything I can do that would speed it up?
Perhaps even more importantly, what was it doing for
On Wed, Apr 07, 2010 at 08:22:09AM -0700, ebollengier wrote:
I tweaked my test to compare both queries, and it shows no difference with
and without base job part... If you want to test queries on your side with
your data, you can download my tool (accurate-test.pl) on