Re: [Bacula-users] Bacula spool on SSD -- solid state drive performance testing?

2009-07-29 Thread Marc Cousin
On Tuesday 28 July 2009 16:46:43 Alan Brown wrote: > On Tue, 28 Jul 2009, Marc Cousin wrote: > > At peak during the night, we have around 40-50 write streams at the same > > time, and we are despooling to 3 LTO3 and 3 LTO1. > > That's larger than my installation and

Re: [Bacula-users] Bacula spool on SSD -- solid state drive performance testing?

2009-07-28 Thread Marc Cousin
On Monday 27 July 2009 15:21:17 Alan Brown wrote: > On Fri, 24 Jul 2009, Marc Cousin wrote: > > All you really need is to be able to read and write big streams at the > > same time. So the real problem is to help your disk scheduler to be able > > to read while having a lot
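
For readers wondering how to "help the disk scheduler" in practice, here is a minimal shell sketch; the device name sdb and the choice of the deadline elevator are assumptions for illustration, not something prescribed in the thread:

    # show the schedulers the kernel offers for the spool disk (active one in brackets)
    cat /sys/block/sdb/queue/scheduler

    # the deadline elevator bounds read latency while large sequential writes are running
    echo deadline > /sys/block/sdb/queue/scheduler

    # optionally tighten the read deadline (milliseconds)
    echo 250 > /sys/block/sdb/queue/iosched/read_expire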

Re: [Bacula-users] Bacula spool on SSD -- solid state drive performance testing?

2009-07-23 Thread Marc Cousin
> In theory, the latency from random IO should be much closer to zero on a > flash drive than on a thrashing hard drive, so I was hoping I might need > only one or two 64GB or 128GB flash drives to provide decent spool size, > perhaps not even RAID-ed. > > In addition, SSD/flash drives should be sile

Re: [Bacula-users] iowait on client

2009-07-12 Thread Marc Cousin
On Saturday 11 July 2009 23:34:30, Arno Lehmann wrote: > Hello, > > 09.07.2009 16:03, Stoyan Petkov wrote: > > Hello, > > > > I've an FD that's causing too much iowait on the backed-up machine. I > tried searching for info on the subject but nothing useful comes up. Here is > an excerpt from the

Re: [Bacula-users] [Bacula-devel] Fwd: Re: Performance with MySQL queries since 3.0.0 (Dir inserting attributes hang)

2009-06-26 Thread Marc Cousin
On Friday 26 June 2009 09:38:37 Tom Sommer wrote: > Tom Sommer wrote: > > Okay, I added 12GB more RAM. Made my mysql tmp directory a tmpfs. Stole > > some settings from Jason's my.cnf. Upgraded to latest MySQL version. > > This seems to have done the trick. My FULL backup only took 7 hours > today,
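
As an illustration of the two tuning steps mentioned (temporary space in RAM, settings borrowed from another my.cnf), a minimal sketch follows; the mount point and size are assumptions to adapt to your own server:

    # /etc/fstab -- RAM-backed filesystem for MySQL temporary tables
    tmpfs  /var/lib/mysql-tmp  tmpfs  size=8G,mode=1777  0 0

    # /etc/my.cnf
    [mysqld]
    tmpdir = /var/lib/mysql-tmp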

Re: [Bacula-users] Very big File-Table: 14 GB, and 10 GB File-Index && growing

2009-03-06 Thread Marc Cousin
There are 3 big inserts running (one for filename, one for path, then one for file). Which one is slow exactly? (Or are they all slow?) At first sight, I'd say your 'server' is very small... RAM is really small. For the insert into file, the important indexes aren't on file but on path
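
To make the point about indexes concrete, a hedged SQL sketch follows; the index names are illustrative and the exact catalog schema should be checked against your Bacula version:

    -- the big insert into File spends its time looking rows up in Path and
    -- Filename, so these text-column indexes are the ones that matter:
    CREATE INDEX path_name_idx ON Path (Path);
    CREATE INDEX filename_name_idx ON Filename (Name);
    -- extra indexes on File itself mostly slow the insert down and only pay
    -- off for later queries.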

Re: [Bacula-users] Hung on "Dir inserting attributes"

2008-11-14 Thread Marc Cousin
On Thursday 13 November 2008 23:39:19, Robert Treat, you wrote: > Jason Dixon wrote: > > Date: Thu, 13 Nov 2008 11:44:28 +0100 > > From: Marc Cousin <[EMAIL PROTECTED]> > > Subject: Re: [Bacula-users] Hung on "Dir inserting attributes" > > To: bacul

Re: [Bacula-users] Hung on "Dir inserting attributes"

2008-11-13 Thread Marc Cousin
On Wednesday 12 November 2008 16:39:00, Jason Dixon, you wrote: > On Wed, Nov 12, 2008 at 07:07:13AM -0800, Dan Langille wrote: > > On Nov 11, 2008, at 2:32 PM, Jason Dixon wrote: > >> We have a new Bacula server (2.4.2 on Solaris 10 x86) that runs fine for > >> most backup jobs. However, we

Re: [Bacula-users] can't restore ACL of /tmp/bacula-restores/*

2007-11-11 Thread Marc Cousin
> >> Bacula 2.2.5 and PGSQL 7.4.18. I am seeing the same errors on another > >> FreeBSD 6.2 machine with Bacula 2.2.5 and PGSQL 8.0.14 > > I'm not sure which version of PostgreSQL is required for the batch > inserts. I couldn't find that in the ReleaseNotes, but there was a > discussion on the mail

Re: [Bacula-users] Mysql - INSERT INTO batch error

2007-10-03 Thread Marc Cousin
On Monday 01 October 2007 12:58:27 Alejandro Alfonso wrote: > Thank you for the fast answer! > > Um... maybe the problem is related to my server? It's a big backup > (about 1.7 TB, many small files), and 770 GB of SQL statements > > >>01-Oct 04:05 poe-sd: Sending spooled attrs to the Director. Despo

Re: [Bacula-users] dbcheck slowness

2007-09-19 Thread Marc Cousin
On Wednesday 19 September 2007 16:59:10 Martin Simmons wrote: > > On Wed, 19 Sep 2007 11:54:37 +0200, Cousin Marc said: > > > > I think the problem is linked to the fact dbcheck works more or less row > > by row. > > > > If I understand correctly, the problem is that you have duplicates in the

Re: [Bacula-users] dbcheck slowness

2007-09-19 Thread Marc Cousin
On Wednesday 19 September 2007 17:52:40 Cedric Devillers wrote: > Martin Simmons wrote: > >> On Wed, 19 Sep 2007 11:54:37 +0200, Cousin Marc said: > >> > >> I think the problem is linked to the fact dbcheck works more or less row > >> by row. > >> > >> If I understand correctly, the problem is

Re: [Bacula-users] performance

2007-09-07 Thread Marc Cousin
I'd say you'll get the best performance with postgresql right now: batch insert has been written primarily for it (and uses a special bulk insert statement with postgresql). I guess some optimizations could be done for mysql too, but I don't think they've been done yet... On Friday 07 Septem
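
For context, the "special bulk insert statement" on PostgreSQL is essentially COPY; a minimal sketch of the difference, using an illustrative staging table rather than Bacula's exact batch schema:

    -- one INSERT per row pays per-statement and per-transaction overhead:
    INSERT INTO batch_file (path, name, lstat) VALUES ('/etc/', 'passwd', '...');
    INSERT INTO batch_file (path, name, lstat) VALUES ('/etc/', 'hosts', '...');

    -- COPY streams every row inside a single statement:
    COPY batch_file (path, name, lstat) FROM STDIN;
    /etc/	passwd	...
    /etc/	hosts	...
    \.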

Re: [Bacula-users] migrate catalog from PostgreSQL to MySQL

2007-06-14 Thread Marc Cousin
bles (file_pathid_idx, > file_filenameid_idx, file_jpfid_idx) helps push up the performance? > > > -Original Message- > > From: [EMAIL PROTECTED] > > [mailto:[EMAIL PROTECTED] On behalf of Marc > > Cousin > > Sent: Thursday, 14 June 2007 08:45 >

Re: [Bacula-users] migrate catalog from PostgreSQL to MySQL

2007-06-13 Thread Marc Cousin
On Thursday 14 June 2007 08:36:19 Berner Martin wrote: > Yes, it is because I have a bottleneck with PostgreSQL as backend. So I > want to give MySQL a try in a test environment. But it makes no sense if I can't > migrate the catalogue. And to make a true comparison it is important to > have as much en

Re: [Bacula-users] migrate catalog from PostgreSQL to MySQL

2007-06-13 Thread Marc Cousin
On Wednesday 13 June 2007 10:18:44 Berner Martin wrote: > Hello > Has someone already done a migration of the catalogue from a PostgreSQL > to a MySQL backend? Or does anyone know how it has to be done? I tried to dump Postgres so that it > dumps only the data and uses INSERT instead of COPY. Then I grep only the > lin
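
For reference, this kind of dump can be produced directly by pg_dump instead of grepping a full dump; a minimal sketch, assuming the catalog database is called bacula:

    # data only, as portable INSERT statements with explicit column lists
    # (slower to restore than COPY, but easier to feed to another engine)
    pg_dump --data-only --column-inserts bacula > bacula-data.sql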

Re: [Bacula-users] [Bacula-devel] Backup of databases and Exchange

2007-06-12 Thread Marc Cousin
On Saturday 09 June 2007 15:47:01 db wrote: > Kern Sibbald wrote: > > I don't foresee the Bacula project undertaking these > > kinds of projects because the Bacula project is about Bacula not about > > MySQL, PostgreSQL, Exchange, Oracle, DB2, ... > > Bacula is about Bacula? I thought Bacula was ab

Re: [Bacula-users] [Bacula-devel] Releasing the new batch DB insert code

2007-03-23 Thread Marc Cousin
I don't think ORA-04030 is a good example... It means that the Oracle process has tried to allocate memory as configured by the DBA and couldn't, because either the server has no more memory, the process has hit an administrative (OS) limit, or the OS has done an optimistic memory allocation... As f

Re: [Bacula-users] [Bacula-devel] Releasing the new batch DB insert code

2007-03-21 Thread Marc Cousin
> > This is exactly what I was seeing with dbcheck. > > Why have a dog and then do all the barking yourself? > > In this case the dog is the SQL database and the barking is the needless > extraction and [counting|deleting] of individual NULL JobIds > > > The comments about SQL crashes are because I
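
To make the dog-and-barking point concrete, here is a minimal SQL sketch of doing the cleanup as one set-based statement instead of fetching the orphaned ids and deleting them one by one; the table and column names follow the query quoted in the next message below, and this is an illustration, not the actual dbcheck code:

    -- delete JobMedia records whose Job no longer exists, server-side,
    -- in a single statement:
    DELETE FROM JobMedia
     WHERE NOT EXISTS (SELECT 1 FROM Job WHERE Job.JobId = JobMedia.JobId);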

Re: [Bacula-users] [Bacula-devel] Releasing the new batch DB insert code

2007-03-21 Thread Marc Cousin
I think I haven't explained the memory issue correctly: the example Kern gave is: SELECT JobMedia.JobMediaId, Job.JobId FROM JobMedia LEFT OUTER JOIN Job ON (JobMedia.JobId = Job.JobId) WHERE Job.JobId IS NULL LIMIT 30; and it only fails if I remove the

Re: [Bacula-users] Database Diff/Incremental Backups

2007-02-05 Thread Marc Cousin
On Monday 05 February 2007 20:19, Darien Hager wrote: > I'm wondering if anyone has advice for backing up databases. I have > some python scripts working to do per-database backup/restore over > FIFOs (pg_dump, pg_restore, nonblocking fifo polling), but the nature > of the method means that there i
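
A minimal shell sketch of the FIFO idea being described (paths and database name are placeholders; the Python scripts mentioned above add the polling and error handling that this leaves out):

    # create the named pipe the backup job will read
    mkfifo /var/spool/bacula/pgdump.fifo

    # writer: the dump blocks until a reader starts consuming the pipe
    pg_dump -Fc mydb > /var/spool/bacula/pgdump.fifo &

    # reader: whatever backs up the pipe just sees a stream of bytes
    cat /var/spool/bacula/pgdump.fifo > /backup/mydb.dump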

Re: [Bacula-users] Performace between MySQL and Postgres 7 & 8?

2006-07-12 Thread Marc Cousin
On Wednesday 12 July 2006 14:14, Magnus Hagander wrote: > > there is a way to 'cheat': either use fsync = off in > > postgresql (that sucks), or try a writeback-enabled raid controller. > > In both cases you're taking a risk, but it's rather low in > > the second one. > > That's not entirely corre

Re: [Bacula-users] Performace between MySQL and Postgres 7 & 8?

2006-07-05 Thread Marc Cousin
There is a way to 'cheat': either use fsync = off in postgresql (that sucks), or try a writeback-enabled raid controller. In both cases you're taking a risk, but it's rather low in the second one. These two cheats both remove the sync wait created by the transactions on the 'WAL' (journal) files
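
For the first cheat, the change is a single line of postgresql.conf; a minimal sketch, to be checked against the documentation of whatever PostgreSQL version you run:

    # postgresql.conf -- trade crash safety for speed: an OS crash or power
    # loss can corrupt the whole cluster, not just lose the last transactions
    fsync = off
    # the second cheat needs no configuration change at all: a battery-backed,
    # write-back RAID cache absorbs the WAL syncs instead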

Re: [Bacula-users] The table 'File' is full

2006-02-01 Thread Marc Cousin
Is there no performance impact? InnoDB is not that good at inserting data... On Wednesday 01 February 2006 16:08, Daniel Holtkamp wrote: > Hi! > > Roger Kvam wrote: > > My MySQL database is 6.8G, while the File.MYD is 4.0G > > You ran into a mysql limit there. The File table apparently uses My
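
The "table is full" error being discussed is the default MyISAM data-file size limit; a minimal SQL sketch of the usual workaround (the row estimates are placeholders, and this is offered as background rather than as what the thread finally recommended):

    -- check the current MyISAM limits on the table
    SHOW TABLE STATUS LIKE 'File';

    -- enlarge the internal data pointer so the table can grow past 4 GB;
    -- this rebuilds the table, so expect it to take a while on a big catalog
    ALTER TABLE File MAX_ROWS=200000000 AVG_ROW_LENGTH=100;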

Re: [Bacula-users] Spooling speed

2006-01-31 Thread Marc Cousin
Hi, when Bacula is sending from the spool to the tape, are there some tasks that are locked? (All the backups using the spool, the backup being sent to tape...)? On Tuesday 31 January 2006 17:32, Steve Loughran wrote: > Hi Ryan > > It's because the data is being transferred twice, once from

Re: [Bacula-users] bscan and DVD

2006-01-29 Thread Marc Cousin
On Monday 30 January 2006 03:27, Jeffrey L. Taylor wrote: > Quoting Marc Cousin <[EMAIL PROTECTED]>: > > Hi, > > > > I'm trying to restore an old DVD backup which is not in my db anymore, > > and I get the following messages: > > > > bscan

[Bacula-users] bscan and DVD

2006-01-29 Thread Marc Cousin
Hi, I'm trying to restore an old DVD backup which is not in my db anymore, and I get the following messages: bscan -c /etc/bacula/bacula-sd.conf -v -V DVD1 /dev/hdc bscan: butil.c:266 Using device: "/dev/hdc" for reading. 29-Jan 13:44 bscan: Fatal Error at dev.c:501 because: The media in the de

Re: [Bacula-users] Travan 40 (Seagate Hornet, STT3401A)

2005-11-27 Thread Marc Cousin
Hi, I've never tried to use a Travan in Bacula, but had to use some of them in a previous job... So I think I'd better warn you about their reliability, in case you care about your backups :) Travans are much worse than DAT. I had several of them writing tapes for months a few years ago, wit