Re: [Bacula-users] migrating to different database backend

2006-02-06 Thread Martin Simmons
 On Fri, 3 Feb 2006 23:15:47 +0100, Magnus Hagander [EMAIL PROTECTED] 
 said:
 
   This sounds like either table or index bloat. Typical reasons for this
   are not doing vacuum (which obviously isn't your problem), or having
   too few FSM pages. This can also be caused by not running vacuum
   earlier, but doing it now - if you got far enough away from the good
   path you'll need a VACUUM FULL to recover.
  
  I get crazy index bloat with PostgreSQL 7.3.4 but running
  VACUUM FULL ANALYZE once a week keeps it mostly under
  control.  At least I'm assuming it is index bloat, because
  running VACUUM ANALYZE once a week didn't fix it but dropping
  and recreating the indexes does.
 
 In a properly configured database, you should never have to do a VACUUM
 FULL. It can be needed if you do one-time operations (say delete 80% of
 a huge table), but never in normal operation.

I don't claim to have a properly configured database :-)

OTOH, at least 15% of the 9 million rows in the File table are deleted (by
pruning) and reinserted (by backup) every weekend.  Within 2 months, almost
100% of the rows will have been deleted and reinserted.


 Also, you really shouldn't be running 7.3.4. If you for some reason
 absolutely need to stick with 7.3, you should absolutely be on 7.3.13.
 But if you can do anything about it, move up to 8.1 - or at least 8.0.
 (which of course means 8.1.2 or 8.0.6, *always* go for the latest
 release in a stable series)
 
 There are plenty of improvements in those, and quite a lot around
 VACUUM. And if you are having problems with index bloat, most (if not
 all) of those were fixed in 7.4. So you really want to look at
 upgrading.

It is mainly inertia and DFIIIAB syndrome, though I'm sure you know lots of
reasons why it is broken!  The db is only used for Bacula.

__Martin


---
This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
for problems?  Stop!  Download the new AJAX search engine that makes
searching your log files as easy as surfing the  web.  DOWNLOAD SPLUNK!
http://sel.as-us.falkag.net/sel?cmd=lnkkid=103432bid=230486dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


RE: [Bacula-users] migrating to different database backend

2006-02-06 Thread Magnus Hagander
    This sounds like either table or index bloat. Typical reasons for this
    are not doing vacuum (which obviously isn't your problem), or having
    too few FSM pages. This can also be caused by not running vacuum
    earlier, but doing it now - if you got far enough away from the good
    path you'll need a VACUUM FULL to recover.
   
   I get crazy index bloat with PostgreSQL 7.3.4 but running VACUUM
   FULL ANALYZE once a week keeps it mostly under control.  At least
   I'm assuming it is index bloat, because running VACUUM ANALYZE once
   a week didn't fix it but dropping and recreating the indexes does.
  
  In a properly configured database, you should never have to do a
  VACUUM FULL. It can be needed if you do one-time operations (say
  delete 80% of a huge table), but never in normal operation.
 
 I don't claim to have a properly configured database :-)

:-)


 OTOH, at least 15% of the 9 million rows in the File table 
 are deleted (by
 pruning) and reinserted (by backup) every weekend.  Within 2 
 months, almost 100% of the rows will have been deleted and reinserted.

So run a VACUUM after each of those. Or just run a nightly VACUUM (or
maybe daily if you run the backups at night, but you get the idea). 15%
is fairly high, but that just means you may need a higher max_fsm_pages
setting (depending on database size).
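A minimal sketch of that nightly maintenance, under stated assumptions: the
cron schedule, database name, and FSM values below are illustrative, not
recommendations.

```shell
# Run a database-wide VACUUM ANALYZE each night, e.g. from cron after
# the backup window (database name "bacula" is an assumption):
#   0 7 * * *  postgres  psql -d bacula -c 'VACUUM ANALYZE;'
psql -d bacula -c 'VACUUM ANALYZE;'

# If the free space map is too small, raise it in postgresql.conf and
# restart; the numbers here are purely illustrative:
#   max_fsm_pages = 200000
#   max_fsm_relations = 1000
```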


  Also, you really shouldn't be running 7.3.4. If you for some reason
  absolutely need to stick with 7.3, you should absolutely be on 7.3.13.
  But if you can do anything about it, move up to 8.1 - or at least 8.0.
  (which of course means 8.1.2 or 8.0.6, *always* go for the latest
  release in a stable series)
  
  There are plenty of improvements in those, and quite a lot around
  VACUUM. And if you are having problems with index bloat, most (if not
  all) of those were fixed in 7.4. So you really want to look at
  upgrading.
 
 It is mainly inertia and DFIIIAB syndrome, though I'm sure you know
 lots of reasons why it is broken!  The db is only used for Bacula.

Heh. Well, for the 7.3 branch that's just plain bugfixes, so it's
certainly broken in some ways.

As for the newer versions, well, they most likely solve your index bloat
problem, but in general it's the new features and optimisations that
you're missing out on.

Considering Bacula is a very simple user of the database, the upgrade
really shouldn't be hard. I would definitely suggest it; there are plenty
of improvements that will help you.

//Magnus




Re: [Bacula-users] migrating to different database backend

2006-02-06 Thread Martin Simmons
 On Mon, 6 Feb 2006 21:36:36 +0100, Magnus Hagander [EMAIL PROTECTED] 
 said:

  OTOH, at least 15% of the 9 million rows in the File table are deleted
  (by pruning) and reinserted (by backup) every weekend.  Within 2
  months, almost 100% of the rows will have been deleted and reinserted.
 
 So run a VACUUM after each of those. Or just run a nightly VACUUM (or
 maybe daily if you run the backups at night, but you get the idea). 15%
 is fairly high, but that just means you may need a higher max_fsm_pages
 setting (depending on database size).

Yes, that is what I've been doing.  VACUUM without FULL didn't fix it.  VACUUM
with FULL takes about 90 mins, but keeps it mostly under control.

__Martin




Re: [Bacula-users] migrating to different database backend

2006-02-03 Thread Karl Hakimian
On Thu, Feb 02, 2006 at 07:21:48PM -0500, Dan Langille wrote:
 As the author of the Bacula PostgreSQL module, I'm curious as to why 
 you would go in that direction.  Most people tend to move to 
 PostgreSQL from MySQL.
 
 Is there something missing you need?

I'm also considering switching from postgres to mysql on my home
machine. While I prefer almost everything about postgres, I'm having
trouble getting the performance I need out of it. I have read quite a
bit about tuning postgres and still can't seem to get things the way I
want.

My current problem has to do with selecting files for a recover utility
that I'm working on. I want to be able to query the database for the
contents of a single directory at a time; this will allow for a much
faster restore when you want to only grab a file or two from a very
large backup.

I had to add a couple of indexes to the File table and I had things
working pretty well until I realized I needed one more bit of data from
the Filename table. At this point, postgres went from returning the data
within a second to taking 30 or so seconds to do it. All of my queries
are using indexes and nothing returned from the explain command shows me
anything that I can see as a problem.

I have run the equivalent query against mysql on a much larger database
and it returns in less than a second.
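For context, a per-directory query of that shape might look like the sketch
below. It assumes Bacula's standard catalog layout (the File, Filename, and
Path tables); the directory path and JobId values are purely illustrative.

```shell
# Sketch only: list the files in one directory for a given job.
# Requires a live Bacula catalog; names follow the standard schema.
psql -d bacula <<'SQL'
SELECT Filename.Name, File.LStat
FROM File
JOIN Filename ON Filename.FilenameId = File.FilenameId
JOIN Path     ON Path.PathId = File.PathId
WHERE Path.Path = '/etc/'          -- illustrative directory
  AND File.JobId = 42;             -- illustrative JobId
SQL
```

For this to avoid a sequential scan, File would typically need indexes on
(PathId, JobId) or similar, which matches the extra indexes described above.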

BTW One of the things I really like about postgres is the fact that
coming up with this query was pretty straightforward using standard sql
references. The changes needed to make the query work with mysql just
about drove me nuts.

-- 
Karl Hakimian
[EMAIL PROTECTED]




Re: [Bacula-users] migrating to different database backend

2006-02-03 Thread Dan Langille
On 3 Feb 2006 at 7:10, Karl Hakimian wrote:

 On Thu, Feb 02, 2006 at 07:21:48PM -0500, Dan Langille wrote:
  As the author of the Bacula PostgreSQL module, I'm curious as to why 
  you would go in that direction.  Most people tend to move to 
  PostgreSQL from MySQL.
  
  Is there something missing you need?
 
 I'm also considering switching from postgres to mysql on my home
 machine. While I prefer almost everything about postgres, I'm having
 trouble getting the performance I need out of it. I have read quite a
 bit about tuning postgres and still can't seem to get things the way I
 want.
 
 My current problem has to do with selecting files for a recover utility
 that I'm working on. I want to be able to query the database for the
 contents of a single directory at a time; this will allow for a much
 faster restore when you want to only grab a file or two from a very
 large backup.
 
 I had to add a couple of indexes to the File table and I had things
 working pretty well until I realized I needed one more bit of data from
 the Filename table. At this point, postgres went from returning the data
 within a second to taking 30 or so seconds to do it. All of my queries
 are using indexes and nothing returned from the explain command shows me
 anything that I can see as a problem.
 
 I have run the equivalent query against mysql on a much larger database
 and it returns in less than a second.
 
 BTW One of the things I really like about postgres is the fact that
 coming up with this query was pretty straightforward using standard sql
 references. The changes needed to make the query work with mysql just
 about drove me nuts.

I'll try tuning things if you can get the data to me, or give me 
access to the database.  It's not always indexes.  Sometimes it's 
more along the lines of queries or vacuum.

We've long known that the PostgreSQL module can be improved.  What 
we've been lacking is a starting point where we can actually measure 
the improvement.
-- 
Dan Langille : http://www.langille.org/
BSDCan - The Technical BSD Conference - http://www.bsdcan.org/






Re: [Bacula-users] migrating to different database backend

2006-02-03 Thread Karl Hakimian
 I'll try tuning things if you can get the data to me, or give me 
 access to the database.  It's not always indexes.  Sometimes it's 
 more along the lines of queries or vacuum.

While setting up access to my data, I copied my bacula database to a new
database and had quite an unexpected result. The query runs fast enough
under the new database while still running slowly on the old one.

I am running pg_autovacuum (and I ran one by hand recently to see if
that would help) so I'm surprised that re-creating the database would
make this kind of difference.

-- 
Karl Hakimian
[EMAIL PROTECTED]




RE: [Bacula-users] migrating to different database backend

2006-02-03 Thread Magnus Hagander
  I'll try tuning things if you can get the data to me, or give me
  access to the database.  It's not always indexes.  Sometimes it's more
  along the lines of queries or vacuum.
 
 While setting up access to my data, I copied my bacula database to a
 new database and had quite an unexpected result. The query runs fast
 enough under the new database while still running slowly on the old
 one.

This sounds like either table or index bloat. Typical reasons for this
are not doing vacuum (which obviously isn't your problem), or having too
few FSM pages. This can also be caused by not running vacuum earlier,
but doing it now - if you got far enough away from the good path you'll
need a VACUUM FULL to recover.

You'll want to run VACUUM VERBOSE (database wide) to get a hint as to
whether this might be the issue.
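A quick sketch of that check (database name is an assumption, and the exact
wording of the summary varies between PostgreSQL releases of this era):

```shell
# Run a database-wide VACUUM VERBOSE and inspect the free space map
# summary at the end of the output: if the pages needed exceed
# max_fsm_pages, dead space is accumulating and bloat will follow.
psql -d bacula -c 'VACUUM VERBOSE;' 2>&1 | tail -n 10
```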


Or do you get significantly different plans from EXPLAIN? Could be a
missing ANALYZE?


 I am running pg_autovacuum (and I ran one by hand recently to 
 see if that would help) so I'm surprised that re-creating the 
 database would make this kind of difference.

In general for a system like Bacula, I'd advise against using
pg_autovacuum. There's always the risk that it'll kick in in the middle
of a Bacula job. Since you almost certainly have a time when there are
no jobs being processed in the database (I use the time right after the
catalog backup), you should just run a manual database-wide VACUUM
ANALYZE then.
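One way to wire that up is with Bacula's RunAfterJob directive on the catalog
backup job; the script path and database name below are assumptions:

```shell
# In bacula-dir.conf (sketch):
#   Job {
#     Name = "BackupCatalog"
#     ...
#     RunAfterJob = "/usr/local/sbin/vacuum_bacula.sh"
#   }
#
# /usr/local/sbin/vacuum_bacula.sh - runs once the catalog backup is done:
psql -d bacula -c 'VACUUM ANALYZE;'
</imports>
```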


//Magnus




Re: [Bacula-users] migrating to different database backend

2006-02-03 Thread Martin Simmons
 On Fri, 3 Feb 2006 20:22:34 +0100, Magnus Hagander [EMAIL PROTECTED] 
 said:
 
   I'll try tuning things if you can get the data to me, or give me
   access to the database.  It's not always indexes.  Sometimes it's more
   along the lines of queries or vacuum.
  
  While setting up access to my data, I copied my bacula database to a
  new database and had quite an unexpected result. The query runs fast
  enough under the new database while still running slowly on the old
  one.
 
 This sounds like either table or index bloat. Typical reasons for this
 are not doing vacuum (which obviously isn't your problem), or having too
 few FSM pages. This can also be caused by not running vacuum earlier,
 but doing it now - if you got far enough away from the good path you'll
 need a VACUUM FULL to recover.

I get crazy index bloat with PostgreSQL 7.3.4 but running VACUUM FULL ANALYZE
once a week keeps it mostly under control.  At least I'm assuming it is index
bloat, because running VACUUM ANALYZE once a week didn't fix it but dropping
and recreating the indexes does.
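As an aside, REINDEX can do the drop-and-recreate in one step; this sketch
assumes the standard Bacula catalog table names and a database called
"bacula" (note that REINDEX on these old releases takes heavy locks, so it
belongs in a maintenance window):

```shell
# Rebuild the bloated indexes in place instead of dropping/recreating:
psql -d bacula -c 'REINDEX TABLE file;'
psql -d bacula -c 'REINDEX TABLE filename;'
```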

__Martin




RE: [Bacula-users] migrating to different database backend

2006-02-03 Thread Magnus Hagander
  This sounds like either table or index bloat. Typical reasons for this
  are not doing vacuum (which obviously isn't your problem), or having
  too few FSM pages. This can also be caused by not running vacuum
  earlier, but doing it now - if you got far enough away from the good
  path you'll need a VACUUM FULL to recover.
 
 I get crazy index bloat with PostgreSQL 7.3.4 but running
 VACUUM FULL ANALYZE once a week keeps it mostly under
 control.  At least I'm assuming it is index bloat, because
 running VACUUM ANALYZE once a week didn't fix it but dropping
 and recreating the indexes does.

In a properly configured database, you should never have to do a VACUUM
FULL. It can be needed if you do one-time operations (say delete 80% of
a huge table), but never in normal operation.

Also, you really shouldn't be running 7.3.4. If you for some reason
absolutely need to stick with 7.3, you should absolutely be on 7.3.13.
But if you can do anything about it, move up to 8.1 - or at least 8.0.
(which of course means 8.1.2 or 8.0.6, *always* go for the latest
release in a stable series)

There are plenty of improvements in those, and quite a lot around
VACUUM. And if you are having problems with index bloat, most (if not
all) of those were fixed in 7.4. So you really want to look at
upgrading.

//Magnus




[Bacula-users] migrating to different database backend

2006-02-02 Thread Aleksandar Milivojevic
I'd like to migrate one of my servers from PostgreSQL to MySQL.  My 
plan was to use pg_dump to create a file with just insert commands, 
recreate tables in MySQL and then run commands from dump file to 
populate them.  Reinstall director (with MySQL backend).  Is this going 
to fly?
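A sketch of that plan (database names and the schema-script path are
assumptions; Bacula ships make_mysql_tables to create the empty schema):

```shell
# 1. Dump the catalog data as plain INSERT statements:
pg_dump --data-only --inserts bacula > bacula-data.sql

# 2. Create the empty MySQL schema with Bacula's supplied script:
/etc/bacula/make_mysql_tables

# 3. Load the data into MySQL; a few statements may need minor syntax
#    tweaks (quoting, boolean literals, etc.):
mysql bacula < bacula-data.sql
```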


Is there anything to watch out for?  Any special features (like counters) 
of a particular database that Bacula might have used?










Re: [Bacula-users] migrating to different database backend

2006-02-02 Thread Dan Langille
On 2 Feb 2006 at 13:57, Aleksandar Milivojevic wrote:

 I'd like to migrate one of my servers from PostgreSQL to MySQL.  My 
 plan was to use pg_dump to create a file with just insert commands, 
 recreate tables in MySQL and then run commands from dump file to 
 populate them.  Reinstall director (with MySQL backend).  Is this going 
 to fly?

As the author of the Bacula PostgreSQL module, I'm curious as to why 
you would go in that direction.  Most people tend to move to 
PostgreSQL from MySQL.

Is there something missing you need?

 Is there anything to watch out for?  Any special features (like counters) 
 of a particular database that Bacula might have used?

The databases are all pretty similar.  Bacula doesn't do anything 
specific to any one database, pretty much.
-- 
Dan Langille : http://www.langille.org/
BSDCan - The Technical BSD Conference - http://www.bsdcan.org/






Re: [Bacula-users] migrating to different database backend

2006-02-02 Thread Aleksandar Milivojevic

Dan Langille wrote:

On 2 Feb 2006 at 13:57, Aleksandar Milivojevic wrote:

I'd like to migrate one of my servers from PostgreSQL to MySQL.  My 
plan was to use pg_dump to create a file with just insert commands, 
recreate tables in MySQL and then run commands from dump file to 
populate them.  Reinstall director (with MySQL backend).  Is this going 
to fly?


As the author of the Bacula PostgreSQL module, I'm curious as to why 
you would go in that direction.  Most people tend to move to 
PostgreSQL from MySQL.


Is there something missing you need?


The reasons are completely political in nature.  There's nothing wrong 
with PostgreSQL and Bacula's PostgreSQL module.  It's just that I got 
surrounded by too many MySQL junkies.  Personally, I prefer PostgreSQL.


The databases are all pretty similar.  Bacula doesn't do anything 
particular to any one database, pretty much.


OK, then I guess simply moving the tables around should work.  Thanks.

