[Bacula-users] Cannot build bacula-client 5.0.3 on FreeBSD 7.3
Dan Langille wrote:
> This is for Bacula on FreeBSD 7.3? If so, I'll patch the FreeBSD port. Can you try this:
>
>     cd /usr/ports/sysutils/bacula-client
>     make clean
>     make patch
>     cd work/bacula-5.0.3    (assuming usual port setup)
>     patch -p1 < /path-to/bacula-5.0.3-libz.patch
>
> If no errors, try:
>
>     make
>
> Please report back.
>
> --
> Dan Langille - http://langille.org/

Sorry Dan -- not FreeBSD but Slackware64 13.1, otherwise I'd be happy to try as you requested.

+--
|This was sent by catki...@yahoo.co.uk via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--

------------------------------------------------------------------------------
Nokia and AT&T present the 2010 Calling All Innovators-North America contest
Create new apps & games for the Nokia N8 for consumers in U.S. and Canada
$10 million total in prizes - $4M cash, 500 devices, nearly $6M in marketing
Develop with Nokia Qt SDK, Web Runtime, or Java and Publish to Ovi Store
http://p.sf.net/sfu/nokia-dev2dev
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
Re: [Bacula-users] Tuning for large (millions of files) backups?
Hello Henrik, what are you using? MySQL?

Thanks, Ondrej.

Henrik Johansen wrote:
> 'Mingus Dew' wrote:
>> Henrik,
>>
>> Have you had any problems with slow queries during backup or restore jobs? I'm thinking about http://bugs.bacula.org/view.php?id=1472 specifically, and considering that the bacula.File table already has 73 million rows in it and I haven't even successfully run the big job yet.
>
> Not really. We have several 10+ million file jobs - all run without problem (backup and restore). I am aware of the fact that a lot of Bacula users run PG (Bacula Systems also recommends PG for larger setups) but nevertheless MySQL has served us very well so far.
>
>> Just curious as a fellow Solaris deployer...
>>
>> Thanks, Shon
>>
>> On Fri, Oct 8, 2010 at 3:30 PM, Henrik Johansen <hen...@scannet.dk> wrote:
>>> 'Mingus Dew' wrote:
>>>> All,
>>>>
>>>> I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running MySQL 4.1.22 for the database server. I do plan on upgrading to a compatible version of MySQL 5, but migrating to PostgreSQL isn't an option at this time.
>>>>
>>>> I am trying to back up to tape a very large number of files for a client. While the data size is manageable at around 2 TB, the number of files is incredibly large. The first of the jobs had 27 million files and initially failed because the batch table became full. I changed the myisam_data_pointer size to a value of 6 in the config. This job was then able to run successfully and did not take too long.
>>>>
>>>> I have another job which has 42 million files. I'm not sure what that equates to in rows that need to be inserted, but I can say that I've not been able to successfully run the job, as it seems to hang for over 30 hours in a "Dir inserting attributes" status. This causes other jobs to back up in the queue, and once canceled I have to restart Bacula. I'm looking for ways to boost the performance of MySQL or Bacula (or both) to get this job completed.
>>>
>>> You *really* need to upgrade to MySQL 5 and change to InnoDB - there is no way in hell that MySQL 4 + MyISAM is going to perform decently in your situation. Solaris 10 is a Tier 1 platform for MySQL, so the latest versions are always available from http://www.mysql.com in the native pkg format - there really is no excuse.
>>>
>>> We run our Bacula Catalog MySQL servers on Solaris (OpenSolaris) so perhaps I can give you some pointers. Our smallest Bacula DB is currently ~70 GB (381,230,610 rows).
>>>
>>> Since you are using Solaris 10 I assume that you are going to run MySQL off ZFS - in that case you need to adjust the ZFS recordsize for the filesystem that is going to hold your InnoDB datafiles to match the InnoDB block size. If you are using ZFS you should also consider getting yourself a fast SSD as a SLOG (or disabling the ZIL entirely if you dare) - all InnoDB writes to datafiles are O_SYNC and benefit *greatly* from an SSD in terms of write/transaction speed. If you have enough CPU power to spare you should try turning on compression for the ZFS filesystem holding the datafiles - it can also accelerate DB writes/reads, but YMMV.
>>>
>>> Lastly, our InnoDB-related configuration from my.cnf:
>>>
>>>     # InnoDB options
>>>     skip-innodb_doublewrite
>>>     innodb_data_home_dir = /tank/db/
>>>     innodb_log_group_home_dir = /tank/logs/
>>>     innodb_support_xa = false
>>>     innodb_file_per_table = true
>>>     innodb_buffer_pool_size = 20G
>>>     innodb_flush_log_at_trx_commit = 2
>>>     innodb_log_buffer_size = 128M
>>>     innodb_log_file_size = 512M
>>>     innodb_log_files_in_group = 2
>>>     innodb_max_dirty_pages_pct = 90
>>>
>>> --
>>> Med venlig hilsen / Best Regards
>>>
>>> Henrik Johansen
>>> hen...@scannet.dk
>>> Tlf. 75 53 35 00
>>>
>>> ScanNet Group
>>> A/S ScanNet

------------------------------------------------------------------------------
Beautiful is writing same markup. Internet Explorer 9 supports standards for HTML5, CSS3, SVG 1.1, ECMAScript5, and DOM L2 & L3. Spend less time writing and rewriting code and more time creating great experiences on the web. Be a part of the beta today.
http://p.sf.net/sfu/beautyoftheweb
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
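The ZFS-side tuning described in the thread above (recordsize matched to the InnoDB page size, optional compression, an SSD as SLOG) might look roughly like the following on Solaris. This is only a sketch: the pool and filesystem names (tank, tank/db) and the SSD device name are hypothetical, and the 16K recordsize assumes InnoDB's default page size.

```shell
# Create the filesystem for the InnoDB datafiles with a recordsize that
# matches InnoDB's default 16K page size (hypothetical pool "tank"):
zfs create -o recordsize=16K tank/db

# Optionally enable compression on the datafile filesystem -- trades CPU
# for I/O, results vary (YMMV):
zfs set compression=on tank/db

# Optionally add a fast SSD as a separate intent log device (SLOG) so
# InnoDB's synchronous writes hit the SSD first (device name is made up):
zpool add tank log c1t2d0
```

These commands require a live ZFS pool, so verify the property names against your Solaris release before running them.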
[Bacula-users] [SPAM] Re: Could not stat some files with Windows-FD (path length problem ?)
Yeah, I think my Bacula version is recent enough; I should not have the path length problem. I'm not sure the problem comes from accentuation, as there are some other files with accents that back up fine (and I can restore them). Or maybe long path + accentuation together might lead to my problem.

--
Tel. Fixe: 01 30 55 44 04
Tel. Portable: 06 89 37 27 79
Skype: matthieu.cameirao
frOps http://www.fr-opensource.com/

On Sat, 30 Oct 2010 20:38:13 +0200, Bruno Friedmann br...@ioda-net.ch wrote:
> Mark, sorry to bother you, but if you read it, it clearly indicates: opensource.com-dir 5.0.2 (28Apr10): 25-oct.-2010. So another brilliant idea: we can advise Matthieu to open a bug - I'm pretty sure such long and complicated path/filenames aren't tested. The path (if I've truncated it correctly) is 225 chars long including c:\ which, with the filename, gives 263 total length. Also notice the French accents in the different parts. Some months ago we had people who were not able to restore some paths with accents.
>
> On 10/29/2010 02:50 PM, Mark Gordon wrote:
>> Path/filenames longer than 260 characters (up to 32,000) are supported beginning with Bacula version 1.39.20. Older Bacula versions support only 260-character path/filenames.
>>
>> Thanks, Mark
>>
>> -----Original Message-----
>> From: Matthieu Cameirao [mailto:matthieu.camei...@fr-opensource.com]
>> Sent: Friday, October 29, 2010 5:33 AM
>> To: Bacula-users@lists.sourceforge.net
>> Subject: [Bacula-users] Could not stat some files with Windows-FD (path length problem ?)
>>
>> Hello Bacula-users list readers! I'm using Bacula to back up a Windows XP machine and everything (well, 99%) is OK. I've got a problem with 12 files, and the only reason I can think of is that the path length for all these files exceeds 260, which is weird as I thought that limitation was an old one.
The Bacula director and storage daemon run on a Debian Squeeze machine and are installed from Debian packages:

    ii bacula-common           5.0.2-2  network backup, recovery and verification - common support files
    ii bacula-common-mysql     5.0.2-2  network backup, recovery and verification - MySQL common files
    ii bacula-console          5.0.2-2  network backup, recovery and verification - text console
    ii bacula-director-common  5.0.2-2  network backup, recovery and verification - Director common files
    ii bacula-director-mysql   5.0.2-2  network backup, recovery and verification - MySQL storage for Director
    ii bacula-fd               5.0.2-2  network backup, recovery and verification - file daemon
    ii bacula-sd               5.0.2-2  network backup, recovery and verification - storage daemon
    ii bacula-sd-mysql         5.0.2-2  network backup, recovery and verification - MySQL SD tools

and I've installed bacula-fd 5.0.2 on the Windows XP machine. Here's the fileset I'm using for this backup (quotes around the multi-word values appear to have been lost in transit and have been restored):

    FileSet {
      Name = "RedWood Industries FileSet"
      Include {
        Options {
          signature = SHA1
          compression = GZIP
          IgnoreCase = yes
        }
        File = "C:/Documents and Settings/All Users/Documents/REZO REDWOOD"
      }
    }

I've included the message I get after the backup has finished at the end of this message (the ERR message would be, in English, "The system cannot find the path specified"). I really don't understand what the problem can be, except for the path length...

Thank you for your help

Matthieu Cameirao

--

    25-oct. 20:05 moe.fr-opensource.com-dir JobId 2: Start Backup JobId 2, Job=BackupRedwoodIndustries.2010-10-25_20.05.00_32
    25-oct. 20:05 moe.fr-opensource.com-dir JobId 2: Using Device FileStorage
    25-oct. 20:05 moe.fr-opensource.com-sd JobId 2: Volume File_2010-10-22_20h05m previously written, moving to end of data.
    25-oct. 20:05 moe.fr-opensource.com-sd JobId 2: Ready to append to end of Volume File_2010-10-22_20h05m size=3157235974
    25-oct. 20:05 serveur-fd JobId 2: Generate VSS snapshots. Driver=VSS WinXP, Drive(s)=C
    25-oct. 20:06 serveur-fd JobId 2: Could not stat C:/Documents and Settings/All Users/Documents/REZO REDWOOD/DOCUMENTATIONS TECHNIQUES 100910/COMPTEURS DEBITMETRES/FLUIDES/TURBINES/EDM/NOTICES CERTIFS/NOTICES EDM HD, ST, HR/NOTICES EDM NOUVELLE GENERATION/NOTICE ANGLAIS/Notice anglais nouveaux afficheur.pdf: ERR=Le chemin d'accès spécifié est introuvable.
    25-oct. 20:06 serveur-fd JobId 2: Could not stat C:/Documents and Settings/All Users/Documents/REZO REDWOOD/DOCUMENTATIONS TECHNIQUES 100910/COMPTEURS DEBITMETRES/FLUIDES/TURBINES/EDM/NOTICES CERTIFS/NOTICES EDM HD, ST, HR/NOTICES EDM NOUVELLE GENERATION/NOTICE ANGLAIS/Traduction Notice nouveaux afficheur 09 français juin 2010.doc: ERR=Le chemin d'accès spécifié est introuvable.
    25-oct. 20:06 serveur-fd JobId 2: Could not stat C:/Documents and
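Since the discussion turns on whether these paths cross the legacy 260-character Windows MAX_PATH limit, a quick way to check candidate paths is to measure their length in a shell. The path below is illustrative only, not taken from the job log:

```shell
# Flag paths that exceed the legacy Windows MAX_PATH limit of 260 chars.
# The sample path is hypothetical; substitute the real ones from the log.
path='C:/Documents and Settings/All Users/Documents/REZO REDWOOD/example.pdf'
if [ "${#path}" -gt 260 ]; then
    echo "over MAX_PATH (${#path} chars): $path"
else
    echo "within MAX_PATH (${#path} chars)"
fi
```

To sweep a whole tree, the same test can be run inside a `find ... | while read p; do ...; done` loop over a mounted copy of the data.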
Re: [Bacula-users] Tuning for large (millions of files) backups?
'Ondrej PLANKA (Ignum profile)' wrote:
> Hello Henrik, what are you using? MySQL?

Yes - all our catalog servers run MySQL.

I forgot to mention this in my last post - we are Bacula Systems customers and they have proved to be very supportive and competent. If you are thinking about doing large-scale backups with Bacula I can only encourage you to get a support subscription - it is worth every penny.

> Thanks, Ondrej.
>
> 'Mingus Dew' wrote:
>> Henrik,
>>
>> Have you had any problems with slow queries during backup or restore jobs? I'm thinking about http://bugs.bacula.org/view.php?id=1472 specifically, and considering that the bacula.File table already has 73 million rows in it and I haven't even successfully run the big job yet.
>
> Not really. We have several 10+ million file jobs - all run without problem (backup and restore). I am aware of the fact that a lot of Bacula users run PG (Bacula Systems also recommends PG for larger setups) but nevertheless MySQL has served us very well so far.
>
>> Just curious as a fellow Solaris deployer...
>>
>> Thanks, Shon
>>
>> On Fri, Oct 8, 2010 at 3:30 PM, Henrik Johansen <hen...@scannet.dk> wrote:
>>> 'Mingus Dew' wrote:
>>>> All,
>>>>
>>>> I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running MySQL 4.1.22 for the database server. I do plan on upgrading to a compatible version of MySQL 5, but migrating to PostgreSQL isn't an option at this time.
>>>>
>>>> I am trying to back up to tape a very large number of files for a client. While the data size is manageable at around 2 TB, the number of files is incredibly large. The first of the jobs had 27 million files and initially failed because the batch table became full. I changed the myisam_data_pointer size to a value of 6 in the config. This job was then able to run successfully and did not take too long.
>>>>
>>>> I have another job which has 42 million files. I'm not sure what that equates to in rows that need to be inserted, but I can say that I've not been able to successfully run the job, as it seems to hang for over 30 hours in a "Dir inserting attributes" status. This causes other jobs to back up in the queue, and once canceled I have to restart Bacula. I'm looking for ways to boost the performance of MySQL or Bacula (or both) to get this job completed.
>>>
>>> You *really* need to upgrade to MySQL 5 and change to InnoDB - there is no way in hell that MySQL 4 + MyISAM is going to perform decently in your situation. Solaris 10 is a Tier 1 platform for MySQL, so the latest versions are always available from http://www.mysql.com in the native pkg format - there really is no excuse.
>>>
>>> We run our Bacula Catalog MySQL servers on Solaris (OpenSolaris) so perhaps I can give you some pointers. Our smallest Bacula DB is currently ~70 GB (381,230,610 rows).
>>>
>>> Since you are using Solaris 10 I assume that you are going to run MySQL off ZFS - in that case you need to adjust the ZFS recordsize for the filesystem that is going to hold your InnoDB datafiles to match the InnoDB block size. If you are using ZFS you should also consider getting yourself a fast SSD as a SLOG (or disabling the ZIL entirely if you dare) - all InnoDB writes to datafiles are O_SYNC and benefit *greatly* from an SSD in terms of write/transaction speed. If you have enough CPU power to spare you should try turning on compression for the ZFS filesystem holding the datafiles - it can also accelerate DB writes/reads, but YMMV.
>>>
>>> Lastly, our InnoDB-related configuration from my.cnf:
>>>
>>>     # InnoDB options
>>>     skip-innodb_doublewrite
>>>     innodb_data_home_dir = /tank/db/
>>>     innodb_log_group_home_dir = /tank/logs/
>>>     innodb_support_xa = false
>>>     innodb_file_per_table = true
>>>     innodb_buffer_pool_size = 20G
>>>     innodb_flush_log_at_trx_commit = 2
>>>     innodb_log_buffer_size = 128M
>>>     innodb_log_file_size = 512M
>>>     innodb_log_files_in_group = 2
>>>     innodb_max_dirty_pages_pct = 90

--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet
[Bacula-users] Using 2 different directories to backup
I recently managed to set up Bacula version 5.0.2 on FC13. It is working and I managed to back up files. I understood that jobs/pools/filesets/volumes are related in order to physically separate 2 distinct types of backups. I created a volume on a 2nd hard drive, a pool, and a job, and ran the job, but the backup went to the same directory /backup. I went through the official doc but cannot figure out how to do it. Is there any step-by-step tutorial to follow?

Also, my intention in setting up Bacula is disaster recovery over the network (backing up to a network drive). Is there any doc except the one provided by bacula.org?

Thanks in advance

+--
|This was sent by salamli...@free.fr via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--
Re: [Bacula-users] Using 2 different directories to backup
On 10/31/10 17:25, eliassal wrote:
> I recently managed to set up Bacula version 5.0.2 on FC13. It is working and I managed to back up files. I understood that jobs/pools/filesets/volumes are related in order to physically separate 2 distinct types of backups. I created a volume on a 2nd hard drive, a pool, and a job, and ran the job, but the backup went to the same directory /backup. I went through the official doc but cannot figure out how to do it.

I'm not quite clear what it is that you're trying to do. Are you saying you want to have two separate pools in separate directories on disk? If that's the case, you'll need to have two separate disk storage devices defined, and associate each pool with only one device by assigning them different media types. For instance:

    Storage {
      Name = DiskStore
      [...]
    }

    Device {
      Name = FileStorage1
      Device Type = File
      Media Type = File1
      Archive Device = /backup/pool1
      [...]
    }

    Device {
      Name = FileStorage2
      Device Type = File
      Media Type = File2
      Archive Device = /backup/pool2
      [...]
    }

and

    Pool {
      Name = DiskPool1
      Storage = DiskStore
      Pool Type = Backup
      # create only volumes of Media Type File1 in this pool
    }

    Pool {
      Name = DiskPool2
      Storage = DiskStore
      Pool Type = Backup
      # create only volumes of Media Type File2 in this pool
    }

--
Phil Stracchino, CDK#2
DoD#299792458 ICBM: 43.5607, -71.355
ala...@caerllewys.net ala...@metrocast.net p...@co.ordinate.org
Renaissance Man, Unix ronin, Perl hacker, Free Stater
It's not the years, it's the mileage.
[Bacula-users] Using 2 different directories to backup
100% correct, so many thanks. I will do it and let you know. Thanks very much.

+--
|This was sent by salamli...@free.fr via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--
Re: [Bacula-users] Cannot build bacula-client 5.0.3 on FreeBSD 7.3
On 10/31/2010 8:09 AM, catkins wrote:
> Dan Langille wrote:
>> This is for Bacula on FreeBSD 7.3? If so, I'll patch the FreeBSD port. Can you try this:
>>
>>     cd /usr/ports/sysutils/bacula-client
>>     make clean
>>     make patch
>>     cd work/bacula-5.0.3    (assuming usual port setup)
>>     patch -p1 < /path-to/bacula-5.0.3-libz.patch
>>
>> If no errors, try:
>>
>>     make
>>
>> Please report back.
>
> Sorry Dan -- not FreeBSD but Slackware64 13.1, otherwise I'd be happy to try as you requested.

Please... if you are not dealing with the subject at hand (i.e. "Cannot build bacula-client 5.0.3 on FreeBSD 7.3"), start a new thread with an appropriate subject.

--
Dan Langille - http://langille.org/
[Bacula-users] broken threading
Over the past few days, I've become increasingly impatient and frustrated by posts that break threading - that is, posts that lack the headers necessary for proper threading of emails. Specifically, the References: and In-Reply-To: headers are not being preserved.

Cases in point, the following threads:

* Cannot build bacula-client 5.0.3 on FreeBSD
* Searching for files
* PLEASE READ BEFORE POSTING

As can be found here: http://marc.info/?l=bacula-users&r=1&b=201010&w=2

Thanks for the rant. :)

--
Dan Langille - http://langille.org/
Re: [Bacula-users] Tuning for large (millions of files) backups?
Thanks :)

Which type of MySQL storage engine are you using, MyISAM or InnoDB, for a large Bacula system? Can you please copy/paste your MySQL configuration? I mean the my.cnf file.

Thanks, Ondrej.

Henrik Johansen wrote:
> 'Ondrej PLANKA (Ignum profile)' wrote:
>> Hello Henrik, what are you using? MySQL?
>
> Yes - all our catalog servers run MySQL.
>
> I forgot to mention this in my last post - we are Bacula Systems customers and they have proved to be very supportive and competent. If you are thinking about doing large-scale backups with Bacula I can only encourage you to get a support subscription - it is worth every penny.
>
>> Thanks, Ondrej.
>>
>> 'Mingus Dew' wrote:
>>> Henrik,
>>>
>>> Have you had any problems with slow queries during backup or restore jobs? I'm thinking about http://bugs.bacula.org/view.php?id=1472 specifically, and considering that the bacula.File table already has 73 million rows in it and I haven't even successfully run the big job yet.
>>
>> Not really. We have several 10+ million file jobs - all run without problem (backup and restore). I am aware of the fact that a lot of Bacula users run PG (Bacula Systems also recommends PG for larger setups) but nevertheless MySQL has served us very well so far.
>>
>>> Just curious as a fellow Solaris deployer...
>>>
>>> Thanks, Shon
>>>
>>> On Fri, Oct 8, 2010 at 3:30 PM, Henrik Johansen <hen...@scannet.dk> wrote:
>>>> 'Mingus Dew' wrote:
>>>>> All,
>>>>>
>>>>> I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running MySQL 4.1.22 for the database server. I do plan on upgrading to a compatible version of MySQL 5, but migrating to PostgreSQL isn't an option at this time.
>>>>>
>>>>> I am trying to back up to tape a very large number of files for a client. While the data size is manageable at around 2 TB, the number of files is incredibly large. The first of the jobs had 27 million files and initially failed because the batch table became full. I changed the myisam_data_pointer size to a value of 6 in the config. This job was then able to run successfully and did not take too long.
>>>>>
>>>>> I have another job which has 42 million files. I'm not sure what that equates to in rows that need to be inserted, but I can say that I've not been able to successfully run the job, as it seems to hang for over 30 hours in a "Dir inserting attributes" status. This causes other jobs to back up in the queue, and once canceled I have to restart Bacula. I'm looking for ways to boost the performance of MySQL or Bacula (or both) to get this job completed.
>>>>
>>>> You *really* need to upgrade to MySQL 5 and change to InnoDB - there is no way in hell that MySQL 4 + MyISAM is going to perform decently in your situation. Solaris 10 is a Tier 1 platform for MySQL, so the latest versions are always available from http://www.mysql.com in the native pkg format - there really is no excuse.
>>>>
>>>> We run our Bacula Catalog MySQL servers on Solaris (OpenSolaris) so perhaps I can give you some pointers. Our smallest Bacula DB is currently ~70 GB (381,230,610 rows).
>>>>
>>>> Since you are using Solaris 10 I assume that you are going to run MySQL off ZFS - in that case you need to adjust the ZFS recordsize for the filesystem that is going to hold your InnoDB datafiles to match the InnoDB block size. If you are using ZFS you should also consider getting yourself a fast SSD as a SLOG (or disabling the ZIL entirely if you dare) - all InnoDB writes to datafiles are O_SYNC and benefit *greatly* from an SSD in terms of write/transaction speed. If you have enough CPU power to spare you should try turning on compression for the ZFS filesystem holding the datafiles - it can also accelerate DB writes/reads, but YMMV.
>>>>
>>>> Lastly, our InnoDB-related configuration from my.cnf:
>>>>
>>>>     # InnoDB options
>>>>     skip-innodb_doublewrite
>>>>     innodb_data_home_dir = /tank/db/
>>>>     innodb_log_group_home_dir = /tank/logs/
>>>>     innodb_support_xa = false
>>>>     innodb_file_per_table = true
>>>>     innodb_buffer_pool_size = 20G
>>>>     innodb_flush_log_at_trx_commit = 2
>>>>     innodb_log_buffer_size = 128M
>>>>     innodb_log_file_size = 512M
>>>>     innodb_log_files_in_group = 2
>>>>     innodb_max_dirty_pages_pct = 90
>
> --
> Med venlig hilsen / Best Regards
>
> Henrik Johansen