2006-09-26 14:06:51 LOG: database system is ready
2006-09-26 14:06:51 LOG: transaction ID wrap limit is 2147484148, limited by database postgres
2006-09-26 14:06:51 LOG: autovacuum: processing database bacula
There, a vacuum begins. Bacula is not doing that.
Vacuum/autovacuum
How can I be sure bacula is using the correct credentials I
specified in the conf file?
Test it. Configure PostgreSQL so that it accepts connections
only from the credentials you specify.
Or you could watch the connections; PostgreSQL lets you
monitor who is connected. There are
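One standard way to watch connections is the pg_stat_activity system view; a sketch (column names are those of PostgreSQL 8.x, where the query column was still called current_query):

```sql
-- List current connections: which database, which user, from where,
-- and what each backend is running. Run via psql as a superuser.
SELECT datname, usename, client_addr, current_query
FROM pg_stat_activity;
```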
Sorry for not resending the needed information; I already sent
it when I first encountered the problem. But back then I could solve
it by moving the Director onto the same machine as the DB (I had another
Solaris machine in that situation).
I'm appending the original info here.
On postgres, I just
I got it to work. I had to add the following line to
/etc/postgresql/8.1/main/pg_hba.conf:
host    bacula    bacula,root    127.0.0.1/32    trust
Bacula is connecting with TCP/IP, not a Unix domain socket. This
was confusing me. Thanks for your help.
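For reference, the fields in that pg_hba.conf line are connection type, database, user list, client address, and authentication method; an annotated version of the same local-only TCP rule:

```
# TYPE  DATABASE  USER         ADDRESS        METHOD
host    bacula    bacula,root  127.0.0.1/32   trust
```

Note that trust means no password is required for any connection arriving from 127.0.0.1; md5 is a safer choice if other local users should not get free access to the catalog.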
I would definitely recommend
If it did make a difference, you probably want to check the values
for your free space map size. If there's not enough room in the
free space map, vacuum won't be able to properly deal with the
indexes, so this can happen.
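On the PostgreSQL 8.x releases discussed in this thread, the free space map is sized by two postgresql.conf settings (both removed in 8.4, where the FSM became automatic). A hedged example of raising them; the values below are purely illustrative:

```
# postgresql.conf (PostgreSQL 8.0 - 8.3). Check the summary printed at
# the end of VACUUM VERBOSE output to see what your database actually needs.
max_fsm_pages = 200000      # total disk pages whose free space is tracked
max_fsm_relations = 1000    # total tables and indexes tracked
```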
Well ... feel free to correct me if I'm wrong, but vacuum
Additionally, MySQL with MyISAM will _always_ be faster than
PG, because MyISAM doesn't support transactions. The
overhead of ensuring your data is ACID slows things down a
bit. If you want to try to compare MySQL to PostgreSQL, use
InnoDB or BDB tables.
You'd really want InnoDB if
there is a way to 'cheat': either use fsync = off in
postgresql (that sucks), or try a write-back-enabled RAID controller.
In both cases you're taking a risk, but it's rather low in
the second one.
That's not entirely correct.
fsync=off is pretty much the same thing you have with MyISAM, so
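For completeness, the setting being discussed is a single postgresql.conf line; this is a durability trade-off, not a recommendation:

```
# postgresql.conf -- disabling fsync speeds up writes, but an OS crash
# or power loss can leave the database unrecoverably corrupt.
fsync = off
```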
Also, ensure that you vacuum and analyze PostgreSQL databases
frequently.
An occasional REINDEX helps as well. These are normal maintenance
tasks for PostgreSQL. ANALYZE is especially important right after
populating a new database.
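The routine described above, sketched as statements you would run from psql (the table names here are taken from the Bacula catalog schema and may differ between versions):

```sql
VACUUM ANALYZE;           -- reclaim dead rows and refresh planner statistics
REINDEX TABLE file;       -- occasionally rebuild indexes that have bloated
REINDEX TABLE filename;
```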
REINDEX really shouldn't be needed
(...)
Does bacula support UTF-8 ?
Yes, but there are problems with PostgreSQL. I assume that
these problems are due to the fact that users can create or
have created non-UTF-8 filenames, but I am not sure.
They are. PostgreSQL will validate that input into a UTF-8 database is
my postgresql log got errors about a missing table (and an
index on it). Now the weird thing is that the table is not defined
in the create script at all (checked with CVS).
wtf is wrong here?
Nothing. Bacula forcibly deletes these objects even if they don't
exist, to make sure they go
Yikes. You have corrupt files on disk. Have you had hardware
problems or OS crashes lately? Or are you running some funky
beta version of a filesystem
;-)
nah .. ext3 and no crashes
Interesting. Because it really sounds like your filesystem just dropped
a file, which normally shouldn't
Just a few quick questions before we start testing.
Is this necessary to keep the DB size at a norm?
I found it very necessary when using Postgres 7.3.4. It
needs a full vacuum every week for me.
Vacuum is necessary in all PostgreSQL versions, if you ever do UPDATEs
or DELETEs. So as
If there was a way to do it in a stored proc, that would probably
be fine even if the syntax of the proc differed, because the proc
would be part of the schema, just like table and index definitions.
As long as the call syntax would be the same.
However, I don't believe
When I show the processlist several times in a row it is nearly
always SELECTing some filename in the Filename table.
That's expected - it selects to find out if the filename has
to be inserted. By the way - is there a portable way of
having unique file names, and, in case of
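The select-before-insert pattern being described is roughly the following (table and column names are from the Bacula PostgreSQL schema; treat this as a sketch):

```sql
-- Look the filename up first; insert only if it is not there yet.
SELECT filenameid FROM filename WHERE name = 'some-file';
-- ...and only if no row came back:
INSERT INTO filename (name) VALUES ('some-file');
```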
First, create the directories /var/lib/pgsql/data/dump and
/var/lib/pgsql/data/dump/fifo, chown postgres:bacula, chmod 750.
Ensure that the database user postgres running on the local
host has trust access to all databases (no passwords
needed). This script also works for backup of remote
Using two transactions, one for the vital components, the other for
non-vital portions.
Or do we need to revisit how these tables are updated?
Yes, it would be possible to commit the filename/path inserts
immediately (i.e. 1 insert per transaction) but still do the
file inserts
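A sketch of that split, assuming autocommit for the small filename/path inserts and one explicit transaction around the bulk of a job's File rows (column values are placeholders):

```sql
-- Filename/path inserts committed individually (one per transaction):
INSERT INTO filename (name) VALUES ('log');   -- autocommit

-- ...while the many File records for a job go in one batch:
BEGIN;
INSERT INTO file (jobid, pathid, filenameid, lstat) VALUES (1, 1, 1, '...');
INSERT INTO file (jobid, pathid, filenameid, lstat) VALUES (1, 2, 2, '...');
-- thousands more rows...
COMMIT;
```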
Yes, the creation of the tables is very database dependent and
was not designed for portability. In hindsight, one could
probably make them much more portable.
Foreign keys were initially used in PostgreSQL, but they slowed it
down considerably -- by a factor of 2-10 if I
Cutting my postgres update time from hours to minutes would
certainly make my backups run far smoother.
I think transactions are more important here. We need to
look more closely at that.
Considering Bacula runs a *lot* of commands that are almost
the same, differing only in data, it would probably be a
noticeable gain to use prepared statements (that probably goes for all
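In server-side SQL terms, a prepared statement lets the server parse and plan a statement once and then re-execute it with different data; the statement name below is illustrative:

```sql
-- Plan once...
PREPARE ins_filename (text) AS
    INSERT INTO filename (name) VALUES ($1);

-- ...then execute many times with different parameters.
EXECUTE ins_filename('etc/passwd');
EXECUTE ins_filename('etc/hosts');
```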
Hi there,
currently we are using postgresql 8.0.6 (default configuration) under
FC4 for the catalog (with bacula 1.38.5).
I decided to spool the attributes of our jobs before storing
them in the database because of bad performance. Some jobs
have a lot of files and so the attribute
This sounds like either table or index bloat. Typical reasons for
this are not doing vacuum (which obviously isn't your problem), or
having too few FSM pages. This can also be caused by not running
vacuum earlier, but doing it now - if you got far enough away from
the good path
[...] I tried to tune postgreSQL by modifying some settings in the
postgresql.conf: shared_buffers = 2048, wal_buffers = 64,
max_connections = 40. Nevertheless, it doesn't help. :-(
What kind of hardware are you on? 2048 shared buffers is very
low for a lot of systems. I'd bump it
I'll try tuning things if you can get the data to me, or give me
access to the database. It's not always indexes. Sometimes it's
more along the lines of queries or vacuum.
While setting up access to my data, I copied my bacula
database to a new database and had quite an unexpected
Running bacula 1.38.3 with bacula-web 1.2 and a postgresql DB.
The user configured can access the db with psql from both
localhost and remotely over TCP/IP. test.php shows all OK
(except .bmp images).
The http error.log shows:
PHP Fatal error: Call to undefined function: numrows()