On Tuesday 28 July 2009 16:46:43 Alan Brown wrote:
> On Tue, 28 Jul 2009, Marc Cousin wrote:
> > At peak during the night, we have around 40-50 write streams at the same
> > time, and we are despooling to 3 LTO3 and 3 LTO1.
>
> That's larger than my installation and
On Monday 27 July 2009 15:21:17 Alan Brown wrote:
> On Fri, 24 Jul 2009, Marc Cousin wrote:
> > All you really need is to be able to read and write big streams at the
> > same time. So the real problem is to help your disk scheduler to be able
> > to read while having a lot
> In theory, the latency from random IO should be much closer to zero on a
> flash drive than on a thrashing hard drive, so I was hoping I might need
> only one or two 64 GB or 128 GB flash drives to provide a decent spool size,
> perhaps not even RAIDed.
>
> In addition, SSD/flash drives should be sile
On Saturday 11 July 2009 23:34:30, Arno Lehmann wrote:
> Hello,
>
> 09.07.2009 16:03, Stoyan Petkov wrote:
> > Hello,
> >
> > I've an FD that's causing too much iowait on the backed-up machine. I
> > tried searching info on the subject but nothing useful comes up. Here is
> > an excerpt from the
On Friday 26 June 2009 09:38:37 Tom Sommer wrote:
> Tom Sommer wrote:
> > Okay, I added 12GB more RAM. Made my mysql tmp directory a tmpfs. Stole
> > some settings from Jason's my.cnf. Upgraded to latest MySQL version.
>
> This seem to have done the trick. My FULL backup only took 7 hours
> today,
There are 3 big inserts that are running (one for filename, one for path, then
one for file). Which one is slow exactly? (Or are they all slow?)
At first sight, I'd say your 'server' is very small... RAM is really small.
For the insert into file, the important indexes aren't on file but on path
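To make the three-insert pattern concrete, here is a minimal sketch of the staging-table idea behind batch insert, using Python's sqlite3 as a stand-in engine; the table and column names are simplified placeholders, not Bacula's real catalog schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Simplified stand-ins for the Path/Filename/File catalog tables (hypothetical schema).
cur.execute("CREATE TABLE path (pathid INTEGER PRIMARY KEY, path TEXT UNIQUE)")
cur.execute("CREATE TABLE filename (filenameid INTEGER PRIMARY KEY, name TEXT UNIQUE)")
cur.execute("CREATE TABLE file (fileid INTEGER PRIMARY KEY, pathid INT, filenameid INT)")

# Spooled attributes land in one staging table first...
cur.execute("CREATE TEMP TABLE batch (path TEXT, name TEXT)")
cur.executemany("INSERT INTO batch VALUES (?, ?)",
                [("/etc/", "passwd"), ("/etc/", "hosts"), ("/var/", "log")])

# ...then three set-based inserts replace one round trip per file.
cur.execute("INSERT OR IGNORE INTO path(path) SELECT DISTINCT path FROM batch")
cur.execute("INSERT OR IGNORE INTO filename(name) SELECT DISTINCT name FROM batch")
cur.execute("""INSERT INTO file(pathid, filenameid)
               SELECT p.pathid, f.filenameid
               FROM batch b
               JOIN path p ON p.path = b.path
               JOIN filename f ON f.name = b.name""")
print(cur.execute("SELECT COUNT(*) FROM file").fetchone()[0])  # → 3
```

The point is that each of the three inserts is one set-based statement over the whole spooled batch, which is where the path/filename indexes matter.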
On Thursday 13 November 2008 23:39:19, Robert Treat wrote:
> Jason Dixon wrote:
> > Date: Thu, 13 Nov 2008 11:44:28 +0100
> > From: Marc Cousin <[EMAIL PROTECTED]>
> > Subject: Re: [Bacula-users] Hung on "Dir inserting attributes"
> > To: bacul
On Wednesday 12 November 2008 16:39:00, Jason Dixon wrote:
> On Wed, Nov 12, 2008 at 07:07:13AM -0800, Dan Langille wrote:
> > On Nov 11, 2008, at 2:32 PM, Jason Dixon wrote:
> >> We have a new Bacula server (2.4.2 on Solaris 10 x86) that runs fine for
> >> most backup jobs. However, we
> >> Bacula 2.2.5 and PGSQL 7.4.18. I am seeing the same errors on another
> >> FreeBSD 6.2 machine with Bacula 2.2.5 and PGSQL 8.0.14
>
> I'm not sure which version of PostgreSQL is required for the batch
> inserts. I couldn't find that in the ReleaseNotes, but there was a
> discussion on the mail
On Monday 01 October 2007 12:58:27 Alejandro Alfonso wrote:
> Thank you for the fast answer!
>
> Um... maybe the problem is related to my server? It's a big backup
> (about 1.7 TB, many small files), and 770 GB of SQL statements
>
> >>01-Oct 04:05 poe-sd: Sending spooled attrs to the Director. Despo
On Wednesday 19 September 2007 16:59:10 Martin Simmons wrote:
> > On Wed, 19 Sep 2007 11:54:37 +0200, Cousin Marc said:
> >
> > I think the problem is linked to the fact dbcheck works more or less row
> > by row.
> >
> > If I understand correctly, the problem is that you have duplicates in the
On Wednesday 19 September 2007 17:52:40 Cedric Devillers wrote:
> Martin Simmons wrote:
> >> On Wed, 19 Sep 2007 11:54:37 +0200, Cousin Marc said:
> >>
> >> I think the problem is linked to the fact dbcheck works more or less row
> >> by row.
> >>
> >> If I understand correctly, the problem is
I'd say you'll get the best performance with PostgreSQL right now: batch
insert was made primarily for it (and uses a special bulk insert
statement with PostgreSQL).
I guess some optimizations could be done for MySQL too, but I don't think
they've been done yet...
On Friday 07 Septem
bles (file_pathid_idx,
> file_filenameid_idx, file_jpfid_idx) helps pushing up the performance?
>
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On behalf of Marc
> > Cousin
> > Sent: Thursday, 14 June 2007 08:45
>
On Thursday 14 June 2007 08:36:19 Berner Martin wrote:
> Yes, it is because I have some bottleneck with PostgreSQL as backend. So I
> want to give MySQL a try in a test environment. But it makes no sense if I
> can't migrate the catalogue. And to make a true comparison it is important to
> have as much en
On Wednesday 13 June 2007 10:18:44 Berner Martin wrote:
> Hello
> Has someone already done a migration of the catalogue from PostgreSQL to
> MySQL? Or does anyone know how it has to work? I tried to dump Postgres so
> that it dumps only the data and uses INSERT instead of COPY. Then I grep only
> the lin
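For the dump step, pg_dump can already produce a data-only dump made of INSERT statements, with no grepping needed; a command sketch (the database name is a placeholder):

```
# Data-only dump, one INSERT per row instead of COPY
# (much slower to reload, but easier to adapt for MySQL)
pg_dump --data-only --inserts bacula > catalog-data.sql
```

The resulting INSERT statements will still need minor syntax fixes before MySQL accepts them.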
On Saturday 09 June 2007 15:47:01 db wrote:
> Kern Sibbald wrote:
> > I don't foresee the Bacula project undertaking these
> > kinds of projects because the Bacula project is about Bacula not about
> > MySQL, PostgreSQL, Exchange, Oracle, DB2, ...
>
> Bacula is about Bacula? I thought Bacula was ab
I don't think ORA-04030 is a good example...
It means that the Oracle process has tried to allocate memory as asked by the
DBA, and couldn't, because either the server has no more memory, the process
has hit an administrative (OS) limit, or the OS has done an optimistic memory
allocation...
As f
>
> This is exactly what I was seeing with dbcheck.
>
> Why have a dog and then do all the barking yourself?
>
> In this case the dog is the SQL database and the barking is the needless
> extraction and [counting|deleting] of individual NULL JobIds
>
>
> The comments about SQL crashes are because I
I think I haven't explained the memory issue correctly:
The example Kern gave is:
SELECT JobMedia.JobMediaId, Job.JobId FROM JobMedia
LEFT OUTER JOIN Job ON (JobMedia.JobId = Job.JobId)
WHERE Job.JobId IS NULL LIMIT 30;
and it only fails if I remove the
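A minimal, runnable sketch of the same orphan-finding join, using Python's sqlite3 with simplified stand-in tables (not Bacula's real schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE job (jobid INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE jobmedia (jobmediaid INTEGER PRIMARY KEY, jobid INT)")
cur.executemany("INSERT INTO job VALUES (?)", [(1,), (2,)])
# JobMedia rows 1 and 2 point at existing jobs; row 3 points at a deleted job.
cur.executemany("INSERT INTO jobmedia VALUES (?, ?)", [(1, 1), (2, 2), (3, 99)])

# Same shape as the query quoted above: an orphan is a JobMedia row
# for which the LEFT OUTER JOIN finds no matching Job.
orphans = cur.execute(
    """SELECT jobmedia.jobmediaid, job.jobid
       FROM jobmedia LEFT OUTER JOIN job ON (jobmedia.jobid = job.jobid)
       WHERE job.jobid IS NULL LIMIT 30""").fetchall()
print(orphans)  # → [(3, None)]
```

The database does the whole scan in one statement, which is the "let the dog bark" point made above about dbcheck's row-by-row approach.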
On Monday 05 February 2007 20:19, Darien Hager wrote:
> I'm wondering if anyone has advice for backing up databases. I have
> some python scripts working to do per-database backup/restore over
> FIFOs (pg_dump, pg_restore, nonblocking fifo polling), but the nature
> of the method means that there i
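For what it's worth, a common way to wire such dumps into Bacula is to let the FD read a named pipe with readfifo; a hypothetical FileSet sketch (names and paths made up), where a run-before script is expected to start the pg_dump writing into the pipe:

```
FileSet {
  Name = "pg-fifo"
  Include {
    Options {
      signature = MD5
      readfifo = yes    # FD reads the stream from the pipe instead of a file
    }
    File = /var/tmp/pgdump.fifo
  }
}
```

This avoids the intermediate dump file, at the cost of the coordination issues described in the message above.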
On Wednesday 12 July 2006 14:14, Magnus Hagander wrote:
> > there is a way to 'cheat' : either use fsync = off in
> > postgresql (that sucks), or try a writeback enabled raid controller.
> > In both cases you're taking a risk, but it's rather low in
> > the second one.
>
> That's not entirely corre
There is a way to 'cheat': either use fsync = off in PostgreSQL (that sucks),
or try a writeback-enabled RAID controller.
In both cases you're taking a risk, but it's rather low in the second one.
These two cheats both remove the sync wait created by the transactions on
the 'WAL' (journal) files
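The first cheat is literally one line in postgresql.conf; shown only to make the trade-off concrete (unsafe: a crash or power loss can corrupt the whole cluster):

```
# postgresql.conf
fsync = off    # removes the WAL sync wait; data integrity no longer guaranteed
```

A battery-backed writeback RAID controller gives most of the same win while keeping fsync on, which is why the second option is the lower-risk one.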
Is there no performance impact? InnoDB is not that good at inserting data...
On Wednesday 01 February 2006 16:08, Daniel Holtkamp wrote:
> Hi !
>
> Roger Kvam wrote:
> > My MySQL database is 6.8G, while the File.MYD is 4.0G
>
> You ran into a mysql-limit there. The File Table apparently uses My
Hi,
When Bacula is sending from the spool to the tape, are some tasks locked
(all the backups using the spool, the backup being sent to tape...)?
On Tuesday 31 January 2006 17:32, Steve Loughran wrote:
> Hi Ryan
>
> ts because the data is being transferred twice, once from
On Monday 30 January 2006 03:27, Jeffrey L. Taylor wrote:
> Quoting Marc Cousin <[EMAIL PROTECTED]>:
> > Hi,
> >
> > I'm trying to restore an old DVD backup which is not in my db anymore,
> > and I get the following messages :
> >
> > bscan
Hi,
I'm trying to restore an old DVD backup which is not in my db anymore, and I
get the following messages:
bscan -c /etc/bacula/bacula-sd.conf -v -V DVD1 /dev/hdc
bscan: butil.c:266 Using device: "/dev/hdc" for reading.
29-Jan 13:44 bscan: Fatal Error at dev.c:501 because:
The media in the de
Hi,
I've never tried to use a Travan in Bacula, but I had to use some of them in a
previous job...
So I think I'd better warn you about their reliability, in case you care about
your backups :) Travan is much worse than DAT. I've had several of them
writing tapes for months a few years ago, wit