Re: [Bacula-users] Large scale disk-to-disk Bacula deployment

2010-12-15 Thread Henrik Johansen
'Marcello Romani' wrote:
On 01/12/2010 16:04, Henrik Johansen wrote:
 Hi folks,

 I did prepare a paper for this year's Bacula Konferenz 2010 about doing
 large scale, high performance disk-to-disk backups with Bacula but
 unfortunately my workload prohibited me from submitting.

 I have turned the essence of the paper into a few blog posts which will
 explain our setup, why we chose Bacula over the competition (IBM,
 Symantec and CommVault) and give some real world numbers from our Bacula
 deployment.

 The first post is out now, should people be interested, and can be
 found here:

 http://blog.myunix.dk/2010/12/01/large-scale-disk-to-disk-backups-using-bacula/

 The remaining posts will follow over the next month or so.



Very interesting. I'm looking seriously at Bacula, although for a much
smaller setup than yours (to say the least). Your first post got me very
interested. I hope to read soon the other chapters of the tale...

Part VI is now online. This is the last post in the series. I hope that
some of you have found them useful.

http://blog.myunix.dk/2010/12/15/large-scale-disk-to-disk-backups-using-bacula-part-vi/

-- 
Marcello Romani


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Large scale disk-to-disk Bacula deployment

2010-12-14 Thread Henrik Johansen
'Marcello Romani' wrote:
On 01/12/2010 16:04, Henrik Johansen wrote:
 Hi folks,

 I did prepare a paper for this year's Bacula Konferenz 2010 about doing
 large scale, high performance disk-to-disk backups with Bacula but
 unfortunately my workload prohibited me from submitting.

 I have turned the essence of the paper into a few blog posts which will
 explain our setup, why we chose Bacula over the competition (IBM,
 Symantec and CommVault) and give some real world numbers from our Bacula
 deployment.

 The first post is out now, should people be interested, and can be
 found here:

 http://blog.myunix.dk/2010/12/01/large-scale-disk-to-disk-backups-using-bacula/

 The remaining posts will follow over the next month or so.



Very interesting. I'm looking seriously at Bacula, although for a much
smaller setup than yours (to say the least). Your first post got me very
interested. I hope to read soon the other chapters of the tale...

Part V is now online and can be found here:

http://blog.myunix.dk/2010/12/14/large-scale-disk-to-disk-backups-using-bacula-part-v/

-- 
Marcello Romani


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Concurrent Jobs Doubts.

2010-12-13 Thread Henrik Johansen
'pedro moreno' wrote:
 Hi my friends.

 I have Bacula running on my server with CentOS x64 5.5,
RAID-5 + an external LTO-2 (Tandberg) drive.

 My questions are about bacula-sd concurrent jobs.

 For those of you who have disk-based and tape backups, what is the
maximum number of jobs you are running concurrently (2, 3, 4, etc.) on
disk or tape?

We are currently running 100 concurrent jobs against our disk
storage. Testing has shown that we can go even higher without problems.

Our setup is a bit unusual so YMMV.
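
For reference, the knobs involved look like this - a minimal sketch
assuming stock resource names (100 is simply the value we use, not a
recommendation):

# bacula-dir.conf
Director {
  ...
  Maximum Concurrent Jobs = 100
}

# bacula-sd.conf
Storage {
  ...
  Maximum Concurrent Jobs = 100
}

The effective ceiling is the smallest of the Director, Storage, Client
and Job "Maximum Concurrent Jobs" settings, so they need to be raised
consistently.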

 Did you run into any issues when you set up concurrent jobs that you could share?

 Those are all my questions.

 Thanks all for your time!!!


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Large scale disk-to-disk Bacula deployment

2010-12-12 Thread Henrik Johansen
'Marcello Romani' wrote:
On 01/12/2010 16:04, Henrik Johansen wrote:
 Hi folks,

 I did prepare a paper for this year's Bacula Konferenz 2010 about doing
 large scale, high performance disk-to-disk backups with Bacula but
 unfortunately my workload prohibited me from submitting.

 I have turned the essence of the paper into a few blog posts which will
 explain our setup, why we chose Bacula over the competition (IBM,
 Symantec and CommVault) and give some real world numbers from our Bacula
 deployment.

 The first post is out now, should people be interested, and can be
 found here:

 http://blog.myunix.dk/2010/12/01/large-scale-disk-to-disk-backups-using-bacula/

 The remaining posts will follow over the next month or so.



Very interesting. I'm looking seriously at Bacula, although for a much
smaller setup than yours (to say the least). Your first post got me very
interested. I hope to read soon the other chapters of the tale...

Part IV is now online ...

http://blog.myunix.dk/2010/12/12/large-scale-disk-to-disk-backups-using-bacula-part-iv/

-- 
Marcello Romani


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Large scale disk-to-disk Bacula deployment

2010-12-04 Thread Henrik Johansen
'Steve Thompson' wrote:
On Wed, 1 Dec 2010, Henrik Johansen wrote:

 The remaining posts will follow over the next month or so.

Just a minor question from part III. You state that your storage servers
each use three Perc 6/E controllers, allowing the attachment of 9 MD1000
shelves. I believe that you can attach 6 shelves, not 3, to a single Perc
6/E (3 to each port).

Correct, but I also wrote that each MD1000 has 2 EMM units and
multipathing enabled ...

Steve


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Large scale disk-to-disk Bacula deployment

2010-12-03 Thread Henrik Johansen
'Henrik Johansen' wrote:
'Marcello Romani' wrote:
On 01/12/2010 16:04, Henrik Johansen wrote:
Hi folks,

I did prepare a paper for this year's Bacula Konferenz 2010 about doing
large scale, high performance disk-to-disk backups with Bacula but
unfortunately my workload prohibited me from submitting.

I have turned the essence of the paper into a few blog posts which will
explain our setup, why we chose Bacula over the competition (IBM,
Symantec and CommVault) and give some real world numbers from our Bacula
deployment.

The first post is out now, should people be interested, and can be
found here:

http://blog.myunix.dk/2010/12/01/large-scale-disk-to-disk-backups-using-bacula/

The remaining posts will follow over the next month or so.



Very interesting. I'm looking seriously at Bacula, although for a much
smaller setup than yours (to say the least). Your first post got me very
interested. I hope to read soon the other chapters of the tale...

Part II is online now.

Part III is now also online.

-- 
Marcello Romani


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] Large scale disk-to-disk Bacula deployment

2010-12-02 Thread Henrik Johansen
'Marcello Romani' wrote:
On 01/12/2010 16:04, Henrik Johansen wrote:
 Hi folks,

 I did prepare a paper for this year's Bacula Konferenz 2010 about doing
 large scale, high performance disk-to-disk backups with Bacula but
 unfortunately my workload prohibited me from submitting.

 I have turned the essence of the paper into a few blog posts which will
 explain our setup, why we chose Bacula over the competition (IBM,
 Symantec and CommVault) and give some real world numbers from our Bacula
 deployment.

 The first post is out now, should people be interested, and can be
 found here:

 http://blog.myunix.dk/2010/12/01/large-scale-disk-to-disk-backups-using-bacula/

 The remaining posts will follow over the next month or so.



Very interesting. I'm looking seriously at Bacula, although for a much
smaller setup than yours (to say the least). Your first post got me very
interested. I hope to read soon the other chapters of the tale...

Part II is online now.

-- 
Marcello Romani


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



[Bacula-users] Large scale disk-to-disk Bacula deployment

2010-12-01 Thread Henrik Johansen
Hi folks,

I did prepare a paper for this year's Bacula Konferenz 2010 about doing
large scale, high performance disk-to-disk backups with Bacula but
unfortunately my workload prohibited me from submitting.

I have turned the essence of the paper into a few blog posts which will
explain our setup, why we chose Bacula over the competition (IBM,
Symantec and CommVault) and give some real world numbers from our Bacula
deployment.

The first post is out now, should people be interested, and can be
found here:

http://blog.myunix.dk/2010/12/01/large-scale-disk-to-disk-backups-using-bacula/

The remaining posts will follow over the next month or so.


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-11-12 Thread Henrik Johansen
'Alan Brown' wrote:
Mikael Fridh wrote:

 Tuning's not going to make any of those 50 million traversed rows
 disappear. Only a differently optimized query plan will.

This applies across both MySQL and PostgreSQL...

 This is an Ubuntu Linux server running MySQL v5.1.41.  The MySQL data
 is on an MD software RAID 1 array on 7200rpm SATA disks.  The tables
 are MyISAM (which I had understood to be quicker than InnoDB in low
 concurrency situations?).  The tuner script is suggesting I should
 disable InnoDB as we're not using it, which I will do, though I
 wouldn't guess that will make a massive difference.

 No, it will not help.

Disabling InnoDB won't help right now, but switching to InnoDB would be a
good idea in the near future, as MyISAM runs into problems around the 50
million entry mark (assuming Oracle doesn't remove InnoDB from future
versions of MySQL, as looks increasingly likely...)

InnoDB is the default storage engine for MySQL 5.5
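
For anyone wanting to make the switch, converting a table is a single
statement - a sketch only, and note that rebuilding a File table with
50+ million rows takes a while and locks the table meanwhile:

ALTER TABLE File ENGINE=InnoDB;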





-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-11-11 Thread Henrik Johansen
'Alan Brown' wrote:
Gavin McCullagh wrote:
 On Tue, 09 Nov 2010, Alan Brown wrote:

 and it still takes 14 minutes to build the tree on one of our bigger
 clients. We have 51 million entries in the file table.

 Add individual indexes for FileId, JobId and PathId

 Postgres will work with the combined index for individual table queries,
 but MySQL won't.

 The following are the indexes on the file table:

 mysql> SHOW INDEXES FROM File;
 +-------+------------+--------------+--------------+-------------+-------------+
 | Table | Non_unique | Key_name     | Seq_in_index | Column_name | Cardinality |
 +-------+------------+--------------+--------------+-------------+-------------+
 | File  |          0 | PRIMARY      |            1 | FileId      |    55861148 |
 | File  |          1 | PathId       |            1 | PathId      |      735015 |
 | File  |          1 | FilenameId   |            1 | FilenameId  |     2539143 |
 | File  |          1 | FilenameId   |            2 | PathId      |    13965287 |
 | File  |          1 | JobId        |            1 | JobId       |        1324 |
 | File  |          1 | JobId        |            2 | PathId      |     2940060 |
 | File  |          1 | JobId        |            3 | FilenameId  |    55861148 |
 | File  |          1 | jobid_index  |            1 | JobId       |        1324 |
 | File  |          1 | pathid_index |            1 | PathId      |      735015 |
 +-------+------------+--------------+--------------+-------------+-------------+
 (Collation is A and Index_type is BTREE for every index; Sub_part,
 Packed and Null are NULL/empty throughout, so those columns are omitted
 here for width.)

 I added the last two per your instructions.  Building the tree took about 14
 minutes without these indexes and takes about 17-18 minutes having added
 them.

What tuning (if any) have you performed on your my.cnf and how much
memory do you have?

 Have I done something wrong?  As FileId is a primary key, it doesn't seem
 like I should need an extra index on that one -- is that wrong?

It doesn't need an extra index.

You've also got a duplicate pathid index which can be deleted.
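Dropping it is a one-liner (index name taken from the listing above):

ALTER TABLE File DROP INDEX pathid_index;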

This kind of thing is why it makes more sense to switch to Postgres when
MySQL databases get large.

I have had about as much of this as I can take now so please, stop spreading
FUD about MySQL.

When it comes to Bacula there is only one valid concern - Postgres has
certain statement constructs which allow certain queries to be performed
faster - that's about it.

I am not buying the postulation that Postgres is largely self-tuning,
especially not when dealing with large datasets.

If you prefer postgres, that's totally fine but please stop telling
people that MySQL is unusable for large DB deployments because this
simply is untrue.




-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Need some Solaris advice - Bacula, MySQL memory leak?

2010-11-11 Thread Henrik Johansen
Hi,

Sorry for the late response, I must have overlooked your post.

See comments below ...

'Mingus Dew' wrote:
Hi all,

I recently upgraded to MySQL 5.1.57 from Blastwave packages. I then
upgraded to Bacula 5.0.3. I read somewhere about a memory leak on
Solaris and think I'm encountering it. I was wondering if this is a
leak in Bacula, or in MySQL. I'm running Solaris 10 x86_64 127128-11.

Wow - 127128-11 was released 28/4 2008. You *really* should upgrade your
OS (liveupgrade is your friend).

Basically, the system just slowly dies, less memory is available, the
memory isn't released when I stop Bacula or MySQL and I have to reboot
the box.

You can detect memory leaks with dtrace & mdb. Another easy way is to
LD_PRELOAD libumem.so with its debug options enabled ... I am sure that
Google can point you in the right direction.
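
A rough sketch of the libumem route, assuming you want to start with
bacula-sd (the same works for mysqld; paths are examples only):

# start the daemon in the foreground with umem debugging enabled
LD_PRELOAD=libumem.so.1 UMEM_DEBUG=default UMEM_LOGGING=transaction \
    bacula-sd -f -c /etc/bacula/bacula-sd.conf

# later, from another shell, attach mdb and look for leaked buffers
mdb -p `pgrep bacula-sd`
> ::findleaks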

If it's MySQL, I was hoping someone could point me to a fix or suggest a
version that I can compile myself that isn't subject to this. If it's
Bacula, what is the recommended solution?

Use the official packages from mysql.com - they have the latest and
greatest MySQL version for Solaris in native pkg format.

Thanks,
Shon



-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-11-11 Thread Henrik Johansen
'Alan Brown' wrote:
Henrik Johansen wrote:

 I have had about as much of this as I can take now so please, stop spreading
 FUD about MySQL.

Have you used Mysql with datasets in excess of 100-200 million objects?

Sure - our current Bacula deployment consists of 3 catalog servers with
the smallest DB having ~380 million rows. We have other MySQL DB's in
production that are considerably larger and so do Facebook, Twitter,
Flickr, YouTube, Wikipedia and so on ...

I have. Our current database holds about 400 million File table entries.

MySQL requires significant tuning and kernel tweakery, plus uses a lot
more memory than postgres does for the same dataset.

Almost all large MySQL servers we have run Solaris - absolutely no kernel
tweaking required.

For Bacula users, it's a lot _easier_ to use Postgres on a large
installation than it is to use MySQL.

Large installations usually have DBAs? Personally I find it a *lot*
easier to apply a few configuration tweaks to a product that I have 8+
years of production experience with than to throw in the towel and
start from scratch with an entirely different product ...

I held off switching to Postgres for a long time because I was
unfamiliar with it, however having done so I'm glad that I did - it's
required virtually zero tweaking since it was set up and runs
approximately twice as fast as MySQL did, with a ram footprint about
half the size of MySQL's.

MySQL, or more specifically InnoDB, needs a bit of love before performing
well, I'll admit to that. The upcoming MySQL 5.5 will change much of
this however. 

Small datasets are fine with MySQL and will probably work better. Ours
was brilliant up to about 50 million entries and then required tuning.

This discussion is about appropriate tools for the job.

Yes - and I still consider MySQL to be a highly appropriate tool for the
job. Perhaps the MySQL force is particularly strong in me, who knows.

If you wish to usefully contribute to the thread then provide some
assistance to the OP regarding tuning his MySQL for optimum performance.

Re-read the thread - I believe that I already have done so.





-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] [Bacula-devel] Bacula Project will Die

2010-11-05 Thread Henrik Johansen
'Heitor Medrado de Faria' wrote:
Guys,

Each new Bacula Enterprise feature, like the new GUI Configurator,
makes me feel that the Bacula project will die.
It's very frustrating that a project that became a huge success as
free software is being destroyed like that.
I acknowledge that Kern and the other developers have put lots of
development work into Bacula - and there is no comparable contribution
coming back. But creating a paid fork is not the way to get compensation.

Will you please stop whining - you sound like my youngest son when his
favorite toy gets taken away. You refer to 'free' as in 'free beer' -
not as in 'free speech' which in my opinion makes you nothing more than
a leecher.

If you really need the features that currently are restricted to the
BSEE edition why not actually do something smart and PAY for them to
support the project ?  We are gladly paying for BSEE - the price they
offer is ridiculously low compared to other vendors. We saved ~150.000
EUR in upfront license investments alone and are still saving money
every year on the support subscription.

Besides access to the few restricted features you get world-class
support from the people that know Bacula inside and out - that alone is
worth the money.

Bacula is definitely NOT going to die - I dare say that the opposite
is happening.

What Kern and the rest of Bacula Systems have done is a very reasonable
and logical approach to the problem that surrounds many open source
projects.

They offer A LOT of value for nothing in the community edition - you
should be grateful for that.


Regards,

-- 
Heitor Medrado de Faria
www.bacula.com.br
Msn: hei...@bacula.com.br
Gtalk: heitorfa...@gmail.com
Skype: neocodeheitor
+ 55 71 9132-3349
+55 71 3381-6869



-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-11-01 Thread Henrik Johansen
'Ondrej PLANKA (Ignum profile)' wrote:
Thanks :)
Which type of MySQL storage engine are you using - MyISAM or InnoDB - for
a large Bacula system?
Can you please copy/paste your MySQL configuration? I mean the my.cnf file.

Please re-read this thread and you should find what you are looking for.

Thanks, Ondrej.


Henrik Johansen wrote:
 'Ondrej PLANKA (Ignum profile)' wrote:

 Hello Henrik,

 what are you using? MySQL?


 Yes - all our catalog servers run MySQL.

 I forgot to mention this in my last post - we are Bacula Systems
 customers and they have proved to be very supportive and competent.

 If you are thinking about doing large scale backups with Bacula I can
 only encourage you to get a support subscription - it is worth every
 penny.



 Thanks, Ondrej.

 'Mingus Dew' wrote:

 Henrik,
 Have you had any problems with slow queries during backup or restore
 jobs? I'm thinking about http://bugs.bacula.org/view.php?id=1472
 specifically, and considering that the bacula.File table already has 73
 million rows in it and I haven't even successfully run the big job
 yet.

 Not really.

 We have several 10+ million file jobs - all run without problem (backup
 and restore).

 I am aware of the fact that a lot of Bacula users run PG (Bacula
 Systems also recommends PG for larger setups) but nevertheless
 MySQL has served us very well so far.


 Just curious as a fellow Solaris deployer...

 Thanks,
 Shon

 On Fri, Oct 8, 2010 at 3:30 PM, Henrik Johansen hen...@scannet.dk wrote:
 'Mingus Dew' wrote:
 All,
 I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
 MySQL 4.1.22 for the database server. I do plan on upgrading to a
 compatible version of MySQL 5, but migrating to PostgreSQL isn't an
 option at this time.

 I am trying to back up to tape a very large number of files for a
 client. While the data size is manageable at around 2TB, the number of
 files is incredibly large.
 The first of the jobs had 27 million files and initially failed because
 the batch table became full. I changed myisam_data_pointer_size to a
 value of 6 in the config.

 This job was then able to run successfully and did not take too long.

 I have another job which has 42 million files. I'm not sure what that
 equates to in rows that need to be inserted, but I can say that I've
 not been able to successfully run the job, as it seems to hang for
 over 30 hours in a "Dir inserting attributes" status. This causes
 other jobs to back up in the queue, and once canceled I have to restart
 Bacula.

 I'm looking for a way to boost performance of MySQL or Bacula (or both)
 to get this job completed.

 You *really* need to upgrade to MySQL 5 and change to InnoDB - there is no
 way in hell that MySQL 4 + MyISAM is going to perform decently in your
 situation.
 Solaris 10 is a Tier 1 platform for MySQL so the latest versions are
 always available from http://www.mysql.com in the native pkg format, so
 there really is no excuse.

 We run our Bacula Catalog MySQL servers on Solaris (OpenSolaris) so
 perhaps I can give you some pointers.

 Our smallest Bacula DB is currently ~70 GB (381,230,610 rows).

 Since you are using Solaris 10 I assume that you are going to run MySQL
 off ZFS - in that case you need to adjust the ZFS recordsize for the
 filesystem that is going to hold your InnoDB datafiles to match the
 InnoDB block size.

 If you are using ZFS you should also consider getting yourself a fast
 SSD as a SLOG (or to disable the ZIL entirely if you dare) - all InnoDB
 writes to datafiles are O_SYNC and benefit *greatly* from an SSD in
 terms of write / transaction speed.

 If you have enough CPU power to spare you should try turning on
 compression for the ZFS filesystem holding the datafiles - it also can
 accelerate DB writes / reads but YMMV.

 Lastly, our InnoDB related configuration from my.cnf :

 # InnoDB options
 skip-innodb_doublewrite
 innodb_data_home_dir = /tank/db/
 innodb_log_group_home_dir = /tank/logs/
 innodb_support_xa = false
 innodb_file_per_table = true
 innodb_buffer_pool_size = 20G
 innodb_flush_log_at_trx_commit = 2
 innodb_log_buffer_size = 128M
 innodb_log_file_size = 512M
 innodb_log_files_in_group = 2
 innodb_max_dirty_pages_pct = 90



 Thanks,
 Shon



 --
 Med venlig hilsen / Best Regards

 Henrik Johansen
 hen...@scannet.dk
 Tlf. 75 53 35 00

 ScanNet Group
 A/S ScanNet

Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-10-31 Thread Henrik Johansen
'Ondrej PLANKA (Ignum profile)' wrote:
Hello Henrik,

what are you using? MySQL?

Yes - all our catalog servers run MySQL.

I forgot to mention this in my last post - we are Bacula Systems
customers and they have proved to be very supportive and competent.

If you are thinking about doing large scale backups with Bacula I can
only encourage you to get a support subscription - it is worth every
penny.


Thanks, Ondrej.

'Mingus Dew' wrote:
Henrik,
Have you had any problems with slow queries during backup or restore
jobs? I'm thinking about http://bugs.bacula.org/view.php?id=1472
specifically, and considering that the bacula.File table already has 73
million rows in it and I haven't even successfully run the big job
yet.

Not really.

We have several 10+ million file jobs - all run without problem (backup
and restore).

I am aware of the fact that a lot of Bacula users run PG (Bacula
Systems also recommends PG for larger setups) but nevertheless
MySQL has served us very well so far.


Just curious as a fellow Solaris deployer...

Thanks,
Shon

On Fri, Oct 8, 2010 at 3:30 PM, Henrik Johansen hen...@scannet.dk wrote:
'Mingus Dew' wrote:
All,
I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
MySQL 4.1.22 for the database server. I do plan on upgrading to a
compatible version of MySQL 5, but migrating to PostgreSQL isn't an
option at this time.

I am trying to back up to tape a very large number of files for a
client. While the data size is manageable at around 2TB, the number of
files is incredibly large.
The first of the jobs had 27 million files and initially failed because
the batch table became full. I changed myisam_data_pointer_size to a
value of 6 in the config.

This job was then able to run successfully and did not take too long.

I have another job which has 42 million files. I'm not sure what that
equates to in rows that need to be inserted, but I can say that I've
not been able to successfully run the job, as it seems to hang for
over 30 hours in a "Dir inserting attributes" status. This causes
other jobs to back up in the queue, and once canceled I have to restart
Bacula.

I'm looking for a way to boost performance of MySQL or Bacula (or both)
to get this job completed.

You *really* need to upgrade to MySQL 5 and change to InnoDB - there is no
way in hell that MySQL 4 + MyISAM is going to perform decently in your
situation.
Solaris 10 is a Tier 1 platform for MySQL so the latest versions are
always available from http://www.mysql.com in the native pkg format, so
there really is no excuse.

We run our Bacula Catalog MySQL servers on Solaris (OpenSolaris) so
perhaps I can give you some pointers.

Our smallest Bacula DB is currently ~70 GB (381,230,610 rows).

Since you are using Solaris 10 I assume that you are going to run MySQL
off ZFS - in that case you need to adjust the ZFS recordsize for the
filesystem that is going to hold your InnoDB datafiles to match the
InnoDB block size.

If you are using ZFS you should also consider getting yourself a fast
SSD as a SLOG (or to disable the ZIL entirely if you dare) - all InnoDB
writes to datafiles are O_SYNC and benefit *greatly* from an SSD in
terms of write / transaction speed.

If you have enough CPU power to spare you should try turning on
compression for the ZFS filesystem holding the datafiles - it also can
accelerate DB writes / reads but YMMV.

Lastly, our InnoDB related configuration from my.cnf :

# InnoDB options
skip-innodb_doublewrite
innodb_data_home_dir = /tank/db/
innodb_log_group_home_dir = /tank/logs/
innodb_support_xa = false
innodb_file_per_table = true
innodb_buffer_pool_size = 20G
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 128M
innodb_log_file_size = 512M
innodb_log_files_in_group = 2
innodb_max_dirty_pages_pct = 90



Thanks,
Shon



--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet


--
Beautiful is writing same markup. Internet Explorer 9 supports
standards for HTML5, CSS3, SVG 1.1,  ECMAScript5, and DOM L2  L3.
Spend less time writing and  rewriting code

Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-10-14 Thread Henrik Johansen
'Mingus Dew' wrote:
Henrik,
Have you had any problems with slow queries during backup or restore
jobs? I'm thinking about http://bugs.bacula.org/view.php?id=1472
specifically, and considering that the bacula.File table already has 73
million rows in it and I haven't even successfully run the big job
yet.

Not really.

We have several 10+ million file jobs - all run without problem (backup
and restore).

I am aware of the fact that a lot of Bacula users run PG (Bacula
Systems also recommends PG for larger setups) but nevertheless
MySQL has served us very well so far.


Just curious as a fellow Solaris deployer...

Thanks,
Shon

On Fri, Oct 8, 2010 at 3:30 PM, Henrik Johansen hen...@scannet.dk wrote:
'Mingus Dew' wrote:
All,
I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
MySQL 4.1.22 for the database server. I do plan on upgrading to a
compatible version of MySQL 5, but migrating to PostgreSQL isn't an
option at this time.

I am trying to back up to tape a very large number of files for a
client. While the data size is manageable at around 2TB, the number of
files is incredibly large.
The first of the jobs had 27 million files and initially failed because
the batch table became full. I changed myisam_data_pointer_size to a
value of 6 in the config.

This job was then able to run successfully and did not take too long.

I have another job which has 42 million files. I'm not sure what that
equates to in rows that need to be inserted, but I can say that I've
not been able to successfully run the job, as it seems to hang for
over 30 hours in a "Dir inserting attributes" status. This causes
other jobs to back up in the queue, and once canceled I have to restart
Bacula.

I'm looking for a way to boost performance of MySQL or Bacula (or both)
to get this job completed.

You *really* need to upgrade to MySQL 5 and change to InnoDB - there is no
way in hell that MySQL 4 + MyISAM is going to perform decently in your
situation.
Solaris 10 is a Tier 1 platform for MySQL so the latest versions are
always available from www.mysql.com in the native pkg format, so there
really is no excuse.

We run our Bacula Catalog MySQL servers on Solaris (OpenSolaris) so
perhaps I can give you some pointers.

Our smallest Bacula DB is currently ~70 GB (381,230,610 rows).

Since you are using Solaris 10 I assume that you are going to run MySQL
off ZFS - in that case you need to adjust the ZFS recordsize for the
filesystem that is going to hold your InnoDB datafiles to match the
InnoDB block size.

If you are using ZFS you should also consider getting yourself a fast
SSD as a SLOG (or to disable the ZIL entirely if you dare) - all InnoDB
writes to datafiles are O_SYNC and benefit *greatly* from an SSD in
terms of write / transaction speed.

If you have enough CPU power to spare you should try turning on
compression for the ZFS filesystem holding the datafiles - it also can
accelerate DB writes / reads but YMMV.

Lastly, our InnoDB related configuration from my.cnf :

# InnoDB options
skip-innodb_doublewrite
innodb_data_home_dir = /tank/db/
innodb_log_group_home_dir = /tank/logs/
innodb_support_xa = false
innodb_file_per_table = true
innodb_buffer_pool_size = 20G
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 128M
innodb_log_file_size = 512M
innodb_log_files_in_group = 2
innodb_max_dirty_pages_pct = 90



Thanks,
Shon



--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet




-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 


Re: [Bacula-users] ZFS and Bacula

2010-10-09 Thread Henrik Johansen
'Phil Stracchino' wrote:
On 10/07/10 13:47, Roy Sigurd Karlsbakk wrote:
 Hi all

 I'm planning a Bacula setup with ZFS on the SDs (media being disk,
 not tape), and I just wonder - should I use a smaller recordsize (aka
 largest block size) than the default setting of 128kB?

Actually, there are arguments in favor of using a larger, not smaller,
block size for applications such as this where you expect the major
usage to be extended streaming reads and writes.

As a general rule I do agree (especially when dealing with sequential
I/O) - but it still is dependent on the application I/O.

That said, my own disk SD runs on ZFS with default block size and works
just fine.

$ for f in `ls`; do zfs get -Hpo value recordsize storage01-01/bacula/$f; done | uniq -c
 189 131072

189 ZFS filesystems all with the default 128k recordsize.

A quick peek with dtrace shows this:

$ dtrace -n 'sysinfo:::writech /execname == "bacula-sd"/ {
    @dist[execname] = quantize(arg0); }'

dtrace: description 'sysinfo:::writech' matched 4 probes
^C

  bacula-sd
           value  ------------- Distribution ------------- count
               2 |                                         0
               4 |                                         4
               8 |                                         0
              16 |                                         3
              32 |                                         0
              64 |@@@@                                     75216
             128 |@                                        18477
             256 |@@@@                                     74357
             512 |                                         0
            1024 |                                         0
            2048 |                                         0
            4096 |                                         0
            8192 |                                         0
           16384 |                                         0
           32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@          514260
           65536 |                                         0

This was taken during a single full backup of a Windows client.

The sysinfo:::writech call covers all write(2), writev(2) or pwrite(2)
system calls - writes generated by the bacula-sd process seem to be
limited to 32k, regardless of the underlying recordsize (upper block
size limit).

I'll run this tonight when we have ~100 clients backing up towards this
machine - I will monitor the actual I/O size as seen by ZFS as well
and post the output if someone is interested ...


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] ZFS and Bacula

2010-10-09 Thread Henrik Johansen
'Phil Stracchino' wrote:
On 10/09/10 07:31, Henrik Johansen wrote:
 $ dtrace -n 'sysinfo:::writech /execname == "bacula-sd"/ {
     @dist[execname] = quantize(arg0); }'

 dtrace: description 'sysinfo:::writech' matched 4 probes
 ^C

   bacula-sd
            value  ------------- Distribution ------------- count
                2 |                                         0
                4 |                                         4
                8 |                                         0
               16 |                                         3
               32 |                                         0
               64 |@@@@                                     75216
              128 |@                                        18477
              256 |@@@@                                     74357
              512 |                                         0
             1024 |                                         0
             2048 |                                         0
             4096 |                                         0
             8192 |                                         0
            16384 |                                         0
            32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@          514260
            65536 |                                         0

 This was taken during a single full backup of a Windows client.

 The sysinfo:::writech call covers all write(2), writev(2) or pwrite(2)
 system calls - writes generated by the bacula-sd process seem to be
 limited to 32k, regardless of the underlying recordsize (upper block
 size limit).

 I'll run this tonight when we have ~100 clients backing up towards this
 machine - I will monitor the actual I/O size as seen by ZFS as well
 and post the output if someone is interested ...


Please do.  This is interesting information.

Seems I was fooled by the quantize function - the raw size is 64512
bytes. According to the Bacula manual this also is the default maximum
block size for the SD ;-)

I'll bump that to 131072 bytes when I have some time and see how it
goes.
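
That should be a one-line change in the SD's Device resource - untested
on my side as of yet:

Device {
  ...
  Maximum Block Size = 131072
}

Keep in mind that volumes written with a different block size must
still be readable afterwards, so it is safest to introduce this on
fresh volumes.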


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-10-08 Thread Henrik Johansen
'Mingus Dew' wrote:
 All,
 I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
 MySQL 4.1.22 for the database server. I do plan on upgrading to a
 compatible version of MySQL 5, but migrating to PostgreSQL isn't an
 option at this time.

 I am trying to back up to tape a very large number of files for a
 client. While the data size is manageable at around 2TB, the number of
 files is incredibly large.
The first of the jobs had 27 million files and initially failed because
 the batch table became full. I changed myisam_data_pointer_size to a
 value of 6 in the config.

This job was then able to run successfully and did not take too long.

 I have another job which has 42 million files. I'm not sure what that
 equates to in rows that need to be inserted, but I can say that I've
 not been able to successfully run the job, as it seems to hang for
 over 30 hours in a "Dir inserting attributes" status. This causes
 other jobs to back up in the queue, and once canceled I have to restart
 Bacula.

 I'm looking for a way to boost performance of MySQL or Bacula (or both)
 to get this job completed.

You *really* need to upgrade to MySQL 5 and change to InnoDB - there is no
way in hell that MySQL 4 + MyISAM is going to perform decently in your
situation.

Solaris 10 is a Tier 1 platform for MySQL so the latest versions are
always available from www.mysql.com in the native pkg format, so there
really is no excuse.

We run our Bacula Catalog MySQL servers on Solaris (OpenSolaris) so
perhaps I can give you some pointers.

Our smallest Bacula DB is currently ~70 GB (381,230,610 rows).

Since you are using Solaris 10 I assume that you are going to run MySQL
off ZFS - in that case you need to adjust the ZFS recordsize for the
filesystem that is going to hold your InnoDB datafiles to match the
InnoDB block size.
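
For example (16K being InnoDB's default page size, and the dataset
name matching the innodb_data_home_dir shown below):

zfs set recordsize=16k tank/db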

If you are using ZFS you should also consider getting yourself a fast
SSD as a SLOG (or to disable the ZIL entirely if you dare) - all InnoDB
writes to datafiles are O_SYNC and benefit *greatly* from an SSD in
terms of write / transaction speed.

If you have enough CPU power to spare you should try turning on
compression for the ZFS filesystem holding the datafiles - it also can
accelerate DB writes / reads but YMMV.
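
Something along these lines - the log device name is an example only:

zpool add tank log c3t0d0          # dedicated SSD as SLOG
zfs set compression=lzjb tank/db   # cheap on CPU, but benchmark it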

Lastly, our InnoDB related configuration from my.cnf :

# InnoDB options 
skip-innodb_doublewrite
innodb_data_home_dir = /tank/db/
innodb_log_group_home_dir = /tank/logs/
innodb_support_xa = false
innodb_file_per_table = true
innodb_buffer_pool_size = 20G
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 128M
innodb_log_file_size = 512M
innodb_log_files_in_group = 2
innodb_max_dirty_pages_pct = 90



Thanks,
Shon



-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] ZFS and Bacula

2010-10-07 Thread Henrik Johansen
'Roy Sigurd Karlsbakk' wrote:
Hi all

I'm planning a Bacula setup with ZFS on the SDs (media being disk, not
tape), and I just wonder - should I use a smaller recordsize (aka
largest block size) than the default setting of 128kB?

Setting the recordsize to 64k has worked well for us so far.

If you are limited on CPU power you might consider disabling the SD's
checksum feature since ZFS already does that.
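
For reference, both knobs - the filesystem name is an example, and
Block Checksum lives in the SD's Device resource:

zfs set recordsize=64k tank/bacula

Device {
  ...
  Block Checksum = no
}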

Also, last I tried, with ZFS on a test box, I enabled compression with the
lzjb algorithm (very lightweight and quite decent compression). For
'normal' data, I usually get 30%+ compression with this, but for the
data backed up with Bacula, it didn't look that good, compression being
down to 3-5%. Any idea what might cause this?

I have played with ZFS compression as well - some clients yield good
results (20%+) while others do not. Bacula does a good job at
compacting data when writing its volume files, which can impact
compression as well.

Our current SD's are a bit low on CPU power during prime time but I'll
do some more serious testing once my new 48 core boxes arrive.


Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
I all pedagogikk er det essensielt at pensum presenteres
intelligibelt. Det er et elementært imperativ for alle pedagoger å
unngå eksessiv anvendelse av idiomer med fremmed opprinnelse. I de
fleste tilfeller eksisterer adekvate og relevante synonymer på norsk.


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Windows 2008 R2 FD issues

2010-09-29 Thread Henrik Johansen
'Joseph L. Casale' wrote:
I have several physical and virtual FDs that are just unreliable to back up
against two SDs/DIRs, all at 5.0.3, running CentOS 5 x64.

Is anyone else having these issues? I get random network IO failures,
as reported by the dir.

We are seeing the same pattern - W2K8 boxes fail sporadically.

It seems to have started recently - our windows folks are currently
looking into it.

Thanks,
jlc


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Problem with webacula

2010-09-29 Thread Henrik Johansen
'Daniel beas' wrote:
Hi to all.
I'm trying to install webacula, but after doing all the config I get the
following error:


Fatal error:  Uncaught exception 'Zend_Exception' with message 'Version
error for Catalog database (wanted 12, got 11) ' in
/var/www/webacula/html/index.php:183
Stack trace:
#0 {main}
  thrown in /var/www/webacula/html/index.php on line 183

I have to mention that when I run the script to check system
requirements I get all right (except PostgreSQL, because I'm running
Bacula with MySQL).

#!/usr/bin/php
Check System Requirements...
Current MySQL version = 5.0.45 OK
Current PostgreSQL version = Warning. Upgrade your PostgreSQL version to 8.0.0 or later
Current Sqlite version = 3.4.2 OK
Current PHP version = 5.2.4 OK
php pdo installed. OK
php gd installed. OK
php xml installed. OK
php dom installed. OK
php pdo_mysql installed. OK
php pdo_pgsql installed. OK
php-dom, php-xml installed. OK

Actually I'm running Bacula 3.0.3 and webacula 5.0 on the director and I
don't have any idea what can be wrong.
I don't know if I have provided all the information required; if I'm
missing something I'll be thankful if you tell me.

You need a webacula 3.x version for Bacula 3.x


Thanks in advance

Daniel Beas Enriquez





-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Verify differences: SHA1 sum doesn't match but it should

2010-08-30 Thread Henrik Johansen
'Paul Mather' wrote:
On Aug 28, 2010, at 7:12 AM, Steve Costaras wrote:

Could be due to a transient error (transmission or wild/torn read at
time of calculation).  I see this a lot with integrity checking of
files here (50TiB of storage).

The only way to get around this now is to establish a known-good SHA1/MD5
hash of the data (2-3 reads of the file, making sure that they all match
and that the file is not corrupted), save that as a baseline, and then,
when a read/compare fails, do another re-read to see whether the first
read was in error and compare that against your baseline. This is one
reason why I'm switching to the new generation of SAS drives that have
IOECC checks on READS, not just writes, to help cut down on some of this.

Corruption does occur as well, and is more probable the higher the
capacity of the drive. Ideally you would have a drive that does IOECC
on reads, plus the T10 PI extensions (DIX/DIF) from drive to controller
up to your filesystem layer. It won't always prevent corruption by
itself, but it allows a RAID setup to do some self-healing when a drive
reports a non-transient error (i.e. a corrupted sector of data).

However, the T10 PI extensions exist only on SAS/FC drives (520/528 byte
blocks); as far as I can tell only the new LSI HBAs support a small
subset of this (no hardware RAID controllers that I can find), and I have
not seen any support up at the OS/filesystem level. SATA is not included
at all, as the T13 group opted not to include it in the spec.

You could also stick with your current hardware and use a file system
that emphasises end-to-end data integrity, like ZFS.  ZFS checksums at
many levels, and has a "don't trust the hardware" mentality.  It can
detect silent data corruption and automatically self-heal where
redundancy permits.

ZFS also supports pool scrubbing---akin to the patrol reading of many
RAID controllers---for proactive detection of silent data corruption.
With drive capacities becoming very large, the probability of an
unrecoverable read becomes very high.  This becomes very significant
even in redundant storage systems because a drive failure necessitates
a lengthy rebuild period during which the storage array lacks any
redundancy (in the case of RAID-5).  It is for this reason that RAID-6
(ZFS raidz2) is becoming de rigueur for many-terabyte arrays using large
drives, and, specifically, the reason ZFS garnered its triple-parity
raidz3 pool type (in ZFS pool version 17).

Have you ever tried scrubbing a 40+ TB pool ?

I believe Btrfs intends to bring many ZFS features to Linux.

Cheers,

Paul.



-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Verify differences: SHA1 sum doesn't match but it should

2010-08-30 Thread Henrik Johansen
'Steve Costaras' wrote:
  A little mis-quoted there:

On 2010-08-30 02:59, Henrik Johansen wrote:
 On Aug 28, 2010, at 7:12 AM, Steve Costaras wrote:

 Could be due to a transient error (transmission or wild/torn read at
 time of calculation).  I see this a lot with integrity checking of
 files here (50TiB of storage).

 The only way to get around this now is to establish a known-good SHA1/MD5
 hash of the data (2-3 reads of the file, making sure that they all match
 and that the file is not corrupted), save that as a baseline, and then,
 when a read/compare fails, do another re-read to see whether the first
 read was in error and compare that against your baseline. This is one
 reason why I'm switching to the new generation of SAS drives that have
 IOECC checks on READS, not just writes, to help cut down on some of this.

 Corruption does occur as well, and is more probable the higher the
 capacity of the drive. Ideally you would have a drive that does IOECC
 on reads, plus the T10 PI extensions (DIX/DIF) from drive to controller
 up to your filesystem layer. It won't always prevent corruption by
 itself, but it allows a RAID setup to do some self-healing when a drive
 reports a non-transient error (i.e. a corrupted sector of data).

 However, the T10 PI extensions exist only on SAS/FC drives (520/528 byte
 blocks); as far as I can tell only the new LSI HBAs support a small
 subset of this (no hardware RAID controllers that I can find), and I have
 not seen any support up at the OS/filesystem level. SATA is not included
 at all, as the T13 group opted not to include it in the spec.

 You could also stick with your current hardware and use a file system
 that emphasises end-to-end data integrity, like ZFS.  ZFS checksums at
 many levels, and has a "don't trust the hardware" mentality.  It can
 detect silent data corruption and automatically self-heal where
 redundancy permits.


'Paul Mather' wrote:
 ZFS also supports pool scrubbing---akin to the patrol reading of many
 RAID controllers---for proactive detection of silent data corruption.
 With drive capacities becoming very large, the probability of an
 unrecoverable read becomes very high.  This becomes very significant
 even in redundant storage systems because a drive failure necessitates
 a lengthy rebuild period during which the storage array lacks any
 redundancy (in the case of RAID-5).  It is for this reason that RAID-6
 (ZFS raidz2) is becoming de rigueur for many-terabyte arrays using large
 drives, and, specifically, the reason ZFS garnered its triple-parity
 raidz3 pool type (in ZFS pool version 17).


On 2010-08-30 02:59, Henrik Johansen wrote:
 Have you ever tried scrubbing a 40+ TB pool ?

If the question was directed at me, then yes, I have - with the comment
that I am working with SANs and otherwise redundant LUNs/disks that I run
ZFS on top of, so the availability portion of the disk subsystem is
pretty stable already. I use ZFS mainly to check/verify data integrity
as well as for volume management functions. For performance reasons I am
mainly using mirroring. When pool sizes get large (50, 100, or more TiB)
the problem is the time it takes to do a scrub, and the CPU and I/O
costs are high. For ~50 TiB I would say you would want a subsystem
capable of 2-3 GiB/s, and then increase that in proportion with larger
sets. Even then it takes a toll on a system whose primary job is NOT
disk integrity but running application X.

My point exactly.

Scrubbing is problematic when dealing with large datasets that have a
limited lifetime and / or a high change rate (e.g. D2D backups). There is
a hefty cost associated with scrubbing, both in terms of IOPS / CPU
cycles, and still you may not be able to cover your entire dataset before
data in it gets removed or changed.

Once a fix for CR6730306 is integrated it may become feasible to
schedule scrubbing operations during off-hours though.

Like most ZFS related stuff it all sounds (and looks) extremely easy but
in reality it is not quite so simple.





-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 


Re: [Bacula-users] Verify differences: SHA1 sum doesn't match but it should

2010-08-30 Thread Henrik Johansen
'Paul Mather' wrote:
On Aug 30, 2010, at 6:41 AM, Henrik Johansen wrote:

 Like most ZFS related stuff it all sounds (and looks) extremely easy but
 in reality it is not quite so simple.

Yes, but does ZFS makes things easier or harder?

It hides a lot of the complexity involved. In ZFS it is either
super-easy or super-hard (once super-easy fails to apply).

Silent data corruption won't go away just because your pool is large. :-)

No, but a large pool combined with a high change rate renders scrubbing
somewhat useless.

(But, this is all getting a bit off-topic for Bacula-users.)

Agreed.

Cheers,

Paul.


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] bare metal windows server 2003 restore

2010-06-14 Thread Henrik Johansen
-6] should work.
 
  Many thanks in advance for any info,
 
  Gavin
 
  PS MS SQL Server v5 is involved here.  Should having VSS mean that it's okay
  to just restore directly?  We do have database backups if need be, but it
  would be nice if that wasn't needed.
 
 

 -- Bruno Friedmann




-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 



Re: [Bacula-users] Jobs from command line

2010-05-20 Thread Henrik Johansen
On 05/20/10 02:17 PM, Tyekanyk wrote:
 Hi List,

 It's a pretty easy question, the one I have.

 I was wondering if it is possible to run a certain job from the command
 line, by name or by anything that can be 'scripted'.

 My interest lies in running the job from outside bacula with a script
 or a command.

Something like 'echo "run job=jobname yes" | bconsole' ?

 Thanks!
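
To flesh that out: a minimal non-interactive wrapper could look like this
(a sketch - the job name and bconsole config path are placeholders for
your own):

#!/bin/sh
# Kick off a named Bacula job and exit; the trailing 'yes' suppresses
# the interactive confirmation so this can run from cron or a script.
echo "run job=nightly-alpha yes" | bconsole -c /etc/bacula/bconsole.conf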



-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] Windows 2008/2008r2 Server backup

2010-05-11 Thread Henrik Johansen
On 05/11/10 10:04 AM, Graham Keeling wrote:
 On Tue, May 11, 2010 at 09:51:52AM +1000, James Harper wrote:
 With full VSS support, VSS defines the files that make up the system
 state backup - it's a flag on the writer. Bacula handles junction
 points perfectly.

 So, to get a backup with Windows 2008 that includes the system state, you need
 to set a flag on the writer.
 Do you know how to set that flag?

wbadmin can do system state backups - you do not need to set any VSS 
flags for that :

wbadmin start systemstatebackup -backuptarget:C: -quiet

If you are storing your system state backup on C you'll need to apply 
the reg fix as pointed out in http://support.microsoft.com/kb/944530 first.
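
For reference, the fix from that KB article boils down to a single
registry value, something along these lines (double-check the exact key
against the article before relying on it):

reg add HKLM\SYSTEM\CurrentControlSet\Services\wbengine\SystemStateBackup ^
  /v AllowSSBToAnyVolume /t REG_DWORD /d 1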

Simply put, Windows programs can register as 'VSS writers' and then get 
informed whenever a VSS snapshot is requested and act on that.


 Sent from my iPhone

 On 11/05/2010, at 4:13, Kevin Keane subscript...@kkeane.com wrote:

 There is no such thing as system state backup any more in Windows
 2008. It's always the whole C: drive. I'm not sure how well bacula
 handles it in the end. There also is the issue that Windows 2008
 relies heavily on junction points, which bacula doesn't handle well.

 I'm using Windows backup to an iSCSI drive, and then use bacula to
 back up a snapshot of that iSCSI volume.

 -Original Message-
 From: Michael Da Cova [mailto:mdac...@equiinet.com]
 Sent: Monday, May 10, 2010 9:47 AM
 To: bacula-users@lists.sourceforge.net
 Subject: [Bacula-users] Windows 2008/2008r2 Server backup

 Hi

 anyone have any tips recommendation on how to backup and restore
 windows
 2008 system state, do you need to if using VSS

 Michael


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] iSCSI and Windows Server Backup

2010-05-06 Thread Henrik Johansen
On 05/ 6/10 03:11 AM, James Harper wrote:
 I am also using W2K8 and back up with Windows Backup to an iSCSI
 server.

 There really is no good way around Windows backup (unless you want a
 paid
 solution). W2K8 relies heavily on junction points, which bacula
 doesn't back
 up. BTW, if you are using Exchange 2007, be sure to install SP2 -
 before that
 service pack, Windows backup wasn't Exchange-aware.

 Can you clarify that please? Bacula backs up the junction point itself,
 but doesn't 'follow' the junction point and back up what it points to.
 This is the same as it does under Linux with symlinks and is the correct
 behaviour afaik.

 So it should work just fine.

According to our restore tests it does work just fine.

 James



-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] iSCSI and Windows Server Backup

2010-05-05 Thread Henrik Johansen
On 05/ 5/10 08:11 AM, Craig Ringer wrote:
 Hi folks

 I've recently been saddled with a win2k8 server, and am trying to figure
 out how to make it play with my backup infrastructure.

 Currently I've got it doing scheduled backups via Windows Server Backup
 to an iSCSI volume exported by my backup server. In case the setup is of
 interest to anyone it's written up here:


 http://soapyfrogs.blogspot.com/2010/05/using-iet-iscsi-enterprise-target-for.html

 However ... I'd really like a way to integrate this into Bacula, though,
 at least in terms of monitoring and alerts. I'm backing up user-visible
 shares on the server with Bacula separately to the Windows image
 backups, but would prefer to avoid the duplication. As there must be
 people with 2k8 servers here, I thought I'd check in and see how others
 are doing it.

We are currently backing up 50+ w2k8 servers with Bacula - so far 
without any major pain.

What exactly is your problem ?

 Ideas? Suggestions?



-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] iSCSI and Windows Server Backup

2010-05-05 Thread Henrik Johansen
On 05/ 5/10 09:32 AM, Craig Ringer wrote:
 On 05/05/10 14:38, Henrik Johansen wrote:
 On 05/ 5/10 08:11 AM, Craig Ringer wrote:
 Hi folks

 I've recently been saddled with a win2k8 server, and am trying to figure
 out how to make it play with my backup infrastructure.

 Currently I've got it doing scheduled backups via Windows Server Backup
 to an iSCSI volume exported by my backup server. In case the setup is of
 interest to anyone it's written up here:


 http://soapyfrogs.blogspot.com/2010/05/using-iet-iscsi-enterprise-target-for.html

 However ... I'd really like a way to integrate this into Bacula, though,
 at least in terms of monitoring and alerts. I'm backing up user-visible
 shares on the server with Bacula separately to the Windows image
 backups, but would prefer to avoid the duplication. As there must be
 people with 2k8 servers here, I thought I'd check in and see how others
 are doing it.

 We are currently backing up 50+ w2k8 servers with Bacula - so far
 without any major pain.

 What exactly is your problem ?

 Are your backups suitable for restoring a bootable machine after total
 loss/destruction, though?

Yes - we can do bare metal restore to either similar hardware or to a 
virtual machine.

 If so, how are you doing that with Bacula? Are you integrating Windows
 System Imaging, or doing a filesytem level copy and a semi-manual
 restore (make ntfs, fix mbr, etc) ?

The latter.




-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] Idea/suggestion for dedicated disk-based sd

2010-04-06 Thread Henrik Johansen
On 04/ 6/10 02:42 PM, Phil Stracchino wrote:
 On 04/06/10 02:37, Craig Ringer wrote:
 Is this insane? Or a viable approach to tackling some of the
 complexities of faking tape backup on disk as Bacula currently tries to do?

 Well, just off the top of my head, the first thing that comes to mind is
 that the only ways such a scheme is not going to result in massive disk
 fragmentation are:

   (a) it's built on top of a custom filesystem with custom device drivers
 to allow pre-positioning of volumes spaced across the disk surface, in
 which case it's going to be horribly slow because it's going to spend
 almost all its time seeking track-to-track; or

   (b) it writes to raw devices and one volume is one spindle, in which
 case you pretty much lose all the flexibility of using disk storage, and
 you need large numbers of spindles for the large numbers of concurrent
 volumes you want.  To all practical purposes, you would be replacing
 simulating tape on disk with using disks as though they were tapes.

 You could possibly simplify some of the issues involved in (a) by making
 it a FUSE userspace filesystem, but then you add the two drawbacks that
 (1) it's probably going to be slow, because userspace filesystems
 usually are, and (2) it'll only be workable on Linux.

 Now, all you're going to gain from this is non-interleaved disk volumes,
 and that's basically going to help you only during restores.  So you're
 sacrificing the common case to optimize for the rare case.

That depends on what you need, actually. Some people are fine with 
slower backups as long as they get fast restores.

There are a number of reasons why you might want to segregate backups 
into a one-volume-per-client or a one-volume-per-job relationship :

1. Keeping the size of a volume down for manageability.

2. The ability to migrate certain client data WITHOUT relying on Bacula 
to do it for you (think zfs send / receive, rsync, etc - see the sketch 
just after this list).

3. Hard quota for limiting the disk consumption of a given client.
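
As an illustration of point 2, migrating a per-client dataset with ZFS 
could look like this (a sketch - pool, dataset and host names are made up) :

# Snapshot the client's volume filesystem and replicate it to another SD:
zfs snapshot tank/bacula/alpha@migrate
zfs send tank/bacula/alpha@migrate | ssh sd2 zfs receive tank/bacula/alpha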

Some other aspects involve performance and / or deduplication but are 
highly dependent on the underlying infrastructure.


 You mention
 spool files, but the obvious question there is, if you're backing up to
 disk anyway, why use spooling at all?  The purpose of disk spooling was
 to buffer between clients and tape devices.  When backing up to disk,
 there's really not a lot of point in spooling at all.  What you really
 want is de-interleaving.  Correct?

Spooling to a sufficiently large RAM disk is plausible and would serve 
the same purpose as spooling does for tape devices.
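
A rough sketch of what that could look like (sizes and paths are made up; 
spooling is enabled per device and per job in Bacula) :

# Back the spool area with RAM:
mount -t tmpfs -o size=32g tmpfs /var/spool/bacula

# bacula-sd.conf fragment:
Device {
  ...
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 30G
}

The job also needs SpoolData = yes, otherwise the spool is never used.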


 As has already been discussed, you can achieve this end by creating
 multiple storage devices on the same disk pool and assigning one storage
 device per client, but this will result in massive disk fragmentation -
 and, honestly, you'll be no better off.

That depends largely on the underlying filesystem and thus should not be 
a matter of such generalization.

 If what you want is to de-interleave your backups, then look into the
 Migration function.  You can allow backups to run normally, then Migrate
 one job at a time to a new device, which will give you non-interleaved
 jobs on the output volume.  But you're still not guaranteed that the
 output volume will be unfragmented, because you don't have control over
 the disk space allocation scheme; and you're still sacrificing the
 common case to optimize for the rare case.



-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] Idea/suggestion for dedicated disk-based sd

2010-04-06 Thread Henrik Johansen
On 04/ 6/10 06:28 PM, Phil Stracchino wrote:
 On 04/06/10 12:06, Josh Fisher wrote:
 On 4/6/2010 8:42 AM, Phil Stracchino wrote:
 On 04/06/10 02:37, Craig Ringer wrote:
 Well, just off the top of my head, the first thing that comes to mind is
 that the only ways such a scheme is not going to result in massive disk
 fragmentation are:

(a) it's built on top of a custom filesystem with custom device drivers
 to allow pre-positioning of volumes spaced across the disk surface, in
 which case it's going to be horribly slow because it's going to spend
 almost all its time seeking track-to-track; or

 I disagree. A filesystem making use of extents and multi-block
 allocation, such as ext4, is designed for large file efficiency by
 keeping files mostly contiguous on disk. Also, filesystems with delayed
 allocation, such as ext4/XFS/ZFS, are much better at concurrent i/o than
 non-delayed allocation filesystems like ext2/3, reiser3, etc. The
 thrashing you mentioned is substantially reduced on writes, and for
 restores, the files (volumes) remain mostly contiguous. So with a modern
 filesystem, concurrent jobs writing to separate volume files will be
 pretty much as efficient as concurrent jobs writing to the same volume
 file, and restores will be much faster with no job interleaving.


 I think you're missing the point, though perhaps that's because I didn't
 make it clear enough.

 Let me try restating it this way:

 When you are writing large volumes of data from multiple sources onto
 the same set of disks, you have two choices.  Either you accept
 fragmentation, or you use a space allocation algorithm that keeps the
 distinct file targets self-contiguous, in which case you must accept
 hammering the disks as you constantly seek back and forth between the
 different areas you're writing your data streams to.

 Yes, aggressive write caching can help a bit with this.  But when we're
 getting into data sizes where this realistically matters on modern
 hardware, the data amounts have long since passed the range it's
 reasonable to cache in memory before writing.  Delayed allocation can
 only help just so much when you're talking multiple half-terabyte backup
 data streams.

No - aggressive write caching is key to solving a large part of this 
problem. Write caching to DRAM in particular is a very efficient way of 
doing this since it is relatively cheap and most modern servers have a 
lot of DRAM banks.

It also leaves room for flexibility since you easily can tune your cache 
size to your workload.

I have no problem saturating a 4 Gbit LAG group (~400 MB/s) when running 
backups via Bacula and data *only* touches the disks every 15 to 20 
seconds when ZFS flushes its transaction groups to spinning rust.

Adding more DRAM would probably push this all the way to 30 seconds, 
perhaps less once I convert this box to 10 Gbit ethernet.

These 15-20 seconds are more than enough for ZFS's block allocator to do 
its magic.




-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] File based backup - Large number of Volumes

2010-03-07 Thread Henrik Johansen
Hi,

On 03/ 8/10 12:39 AM, Dan Langille wrote:
 On 3/7/2010 3:01 PM, Henrik Johansen wrote:
 Hi,

 On 03/ 7/10 02:58 PM, Dan Langille wrote:
 I just set up a new SD for backing up to HDD.  The system can handle
 about 7TB of backup data.

 I don't see a good reason for putting all Volumes in the same directory,
 but I can see reasons for putting Volumes in different directories.
 This would require the creation of multiple Devices within the
 bacula-sd.conf file (one Device for each directory).

 Your new SD uses ZFS on FreeBSD, right ? If so - why bother with
 different directories ?

 Well, sanity I suppose.  While I am a believer in letting Bacula worry
 about Volumes, I'm now about to have 3+ years of backups in one
 directory.  I thought some structure might simplify things.

Simplify what ?

On a side note, I would prefer ZFS filesystems instead of directories 
since they give you a lot more control.

 Do not take the following as proven.  It is what I've been thinking
 about and testing with today.  I am not sure this is an ideal solution
 at all.

 But in order to achieve a hierarchy, it does complicate the
 configuration files.  I want something like this

 $ cd /zfs/bacula/volumes/
 $ ls
 alpha bravo charlie echo delta frank golf hotel india

 ... where each directory listed above represents a Client.  All backups
 for that client goes in there.

 Thinking about this: it requires a Device per client, as taken from
 bacula-sd.conf (vastly simplified to show the vital detail):

 Device {
 Name = zfs-alpha
 Media Type = File
 Archive Device = /zfs/bacula/volumes/alpha
 }

 Repeat the above in bacula-sd.conf, once per each Client.

 We need similar things in bacula-dir.conf (once again, simplified and
 omitting vital settings):

 # Definiton of file storage device
 Storage {
 Name   = MegaFile-alpha
 Address= kraken.example.org
 Device = zfs-alpha
 }

 Then, in the job[s] for Client = alpha, you need something like this:

 Job {
 Name= alpha basic
 Client  = alpha
 Storage = MegaFile-alpha
 }

 But to do all this, you need a Pool per client, otherwise Bacula can't
 find them, and you'll see errors like this:

 kraken-sd JobId 33438: Warning: Volume FileAuto-0272 not on device
 MegaFile-beta (/zfs/bacula/volumes/beta).
 kraken-sd JobId 33438: Marking Volume FileAuto-0272 in Error in Catalog.

 A job for Client alpha created Volume FileAuto-0272.  But it's at
 /zfs/bacula/volumes/beta, where MegaFile-beta cannot see it.

 This leads me to thinking I need a Pool per client for this scenario to
 work properly.
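
 For completeness, the missing piece would be a per-client Pool resource, 
 roughly like this (a sketch only, with the same simplifications as above):

 Pool {
   Name = alpha-pool
   Pool Type = Backup
   Storage = MegaFile-alpha
   Label Format = alpha-
   Maximum Volume Bytes = 5G
 }

 Recent Bacula versions let a Pool name its Storage, which is what ties a 
 client's volumes to the right directory.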

I still think that this is *way* too much configuration purely for 
knowing where a given volume is located on disk ...

 How big is the disk array you're backing up to?  How do you have your
 Volumes arranged?

 We have 3 SD's with 50 TB capacity per SD. All our volumes live in the
 same ZFS filesystem(s) on those boxes.

 We store multiple jobs per volume to allow for concurrent jobs with a
 limit on how large each volume can grow.

 I allow any number of Jobs per Volume, with a maximum Volume size of 5GB.



-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] Backing up 100 servers

2010-02-28 Thread Henrik Johansen
Hi,

On 02/28/10 10:07 PM, Stan Meier wrote:
 * mehma sarjamehmasa...@gmail.com:
 A hunerd-n-twenty jobs makes me wince. Why not simplify...a couple of ideas
 and feel free to knock them down:

 It's really not as bad as it sounds. A little Perl script parses a
 text file which contains groups of servers with their respective
 passwords, include/exclude lists and so on and generates all client
 definitions, all backup/copy jobs (and job definitions) and all
 filesets on thy fly.

I am currently implementing a ~1k job solution using Bacula and have 
been facing many of the same challenges. Once you find a templating 
mechanism that works for your environment most of the client 
configuration issues disappear.
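
As a trivial example of such a mechanism, something along these lines 
generates one include file per host from a single template (a sketch - 
the file names and the @HOST@ token are made up):

#!/bin/sh
# clients.txt holds one hostname per line; client.conf.template holds
# Client/Job/FileSet stanzas with @HOST@ placeholder tokens.
while read host; do
    sed "s/@HOST@/${host}/g" client.conf.template \
        > /etc/bacula/clients/${host}.conf
done < clients.txt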

 This way, one can focus on the servers which need special treatment.

 Quickly skimming through the archives of this list (and of course, the
 helpful comments from participants) made everything else quite clear.

 The plan is to have the standard three pools (monthly full, weekly
 differential, daily incremental), accompanied by two copy jobs (one
 run daily, one run monthly to create tapes that will go to The
 Vault).

 Everything else is up to trial and error (and more reading, of course:
 concurrency on the device level (5.0.1), TLS communication, deployment
 of bacula-fd through cfengine/puppet).

 I'm confident.

 Stan



-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] Very slow restores after upgraded to 5.0.0

2010-02-24 Thread Henrik Johansen
Hi,

On 02/24/10 05:03 PM, Jeronimo Zucco wrote:
   Hi list.

   I'm trying to do some restores in my setup, but after I've upgraded
  to version 5.0.0, restores take a very, very long time in "Building
  directory tree for JobId...", even for small sets of files.

   I was using version 3.0.3, and restores were very fast.

Check the archive for this month - there was a thread titled "Dead slow 
backups with bacula 5.0, mysql ..." which suggests adding some extra 
indexes.

Perhaps that'll do the trick for you ...
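
If I remember right, the suggested index was along these lines (verify 
against that thread before applying - the File table can be huge and the 
index build takes a while):

CREATE INDEX file_jpf_idx ON File (JobId, PathId, FilenameId);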

   The generated MySQL command follows below:

 SELECT Path.Path, Filename.Name, Temp.FileIndex, Temp.JobId, LStat, MD5
 FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId,
               File.FilenameId AS FilenameId, LStat, MD5
        FROM Job, File,
             ( SELECT MAX(JobTDate) AS JobTDate, PathId, FilenameId
               FROM ( SELECT JobTDate, PathId, FilenameId
                      FROM File JOIN Job USING (JobId)
                      WHERE File.JobId IN (82184)
                      UNION ALL
                      SELECT JobTDate, PathId, FilenameId
                      FROM BaseFiles JOIN File USING (FileId)
                           JOIN Job ON (BaseJobId = Job.JobId)
                      WHERE BaseFiles.JobId IN (82184) ) AS tmp
               GROUP BY PathId, FilenameId ) AS T1
        WHERE (Job.JobId IN ( SELECT DISTINCT BaseJobId FROM BaseFiles
                              WHERE JobId IN (82184))
               OR Job.JobId IN (82184))
          AND T1.JobTDate = Job.JobTDate
          AND Job.JobId = File.JobId
          AND T1.PathId = File.PathId
          AND T1.FilenameId = File.FilenameId ) AS Temp
 JOIN Filename ON (Filename.FilenameId = Temp.FilenameId)
 JOIN Path ON (Path.PathId = Temp.PathId)
 WHERE FileIndex > 0
 ORDER BY Temp.JobId, FileIndex

   * in this case, I was trying to restore the jobID 82184.

 Any tips ?

 Thanks.



-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] Very slow restores after upgraded to 5.0.0

2010-02-24 Thread Henrik Johansen
On 02/25/10 04:01 AM, Jeronimo Zucco wrote:
   Thanks for your tip. I've created the index as mentioned in the
  thread, but my restores are still very slow. I'm not using accurate backups.

  Maybe I have to migrate from MySQL to PostgreSQL; my database is
  MyISAM with 60 GB of data. Or else downgrade back to version 3.0.3.

I would strongly consider changing the storage engine to InnoDB.

You could use MySQL's EXPLAIN feature to take a look at where the 
query stalls.
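
For example (sketches only - do this in a maintenance window, the File
table can be very large):

-- Switch the hot tables to InnoDB:
ALTER TABLE File ENGINE=InnoDB;
ALTER TABLE Path ENGINE=InnoDB;
ALTER TABLE Filename ENGINE=InnoDB;

-- Prefix the slow restore query with EXPLAIN to see which step
-- is missing an index:
EXPLAIN SELECT ...;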


 Any other tip ?

 best regards.

  Quoting Henrik Johansen hen...@scannet.dk:

 Hi,

 On 02/24/10 05:03 PM, Jeronimo Zucco wrote:
Hi list.

    I'm trying to do some restores in my setup, but after I've upgraded
  to version 5.0.0, restores take a very, very long time in "Building
  directory tree for JobId...", even for small sets of files.

    I was using version 3.0.3, and restores were very fast.

 Check the archive for this month - there was a thread titled "Dead slow
 backups with bacula 5.0, mysql ..." which suggests adding some extra
 indexes.

 Perhaps that'll do the trick for you ...

    The generated MySQL command follows below:

 SELECT Path.Path, Filename.Name, Temp.FileIndex, Temp.JobId, LStat, MD5
 FROM ( SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS
 PathId, File.FilenameId AS FilenameId, LStat, MD5 FROM Job, File, (
 SELECT MAX(JobTDate) AS JobTDate, PathId, FilenameId FROM ( SELECT
 JobTDate, PathId, FilenameId FROM File JOIN Job USING (JobId) WHERE
 File.JobId IN (82184) UNION ALL SELECT JobTDate, PathId, FilenameId FROM
 BaseFiles JOIN File USING (FileId) JOIN Job  ON(BaseJobId =
 Job.JobId) WHERE BaseFiles.JobId IN (82184) ) AS tmp GROUP BY PathId,
 FilenameId ) AS T1 WHERE (Job.JobId IN ( SELECT DISTINCT BaseJobId FROM
 BaseFiles WHERE JobId IN (82184)) OR Job.JobId IN (82184)) AND
 T1.JobTDate = Job.JobTDate AND Job.JobId = File.JobId AND T1.PathId =
 File.PathId AND T1.FilenameId = File.FilenameId ) AS Temp JOIN Filename
 ON (Filename.FilenameId = Temp.FilenameId) JOIN Path ON (Path.PathId =
  Temp.PathId) WHERE FileIndex > 0 ORDER BY Temp.JobId, FileIndex

* in this case, I was trying to restore the jobID 82184.

 Any tips ?

 Thanks.



 --
 Med venlig hilsen / Best Regards

 Henrik Johansen
 hen...@scannet.dk
 Tlf. 75 53 35 00

 ScanNet Group
 A/S ScanNet







-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] VSS Windows Backups

2010-02-18 Thread Henrik Johansen
Hi,

On 02/17/10 11:26 PM, Kevin Keane wrote:


 -Original Message- From: Bob Hetzel [mailto:b...@case.edu]
 Sent: Wednesday, February 17, 2010 1:30 PM To:
 bacula-users@lists.sourceforge.net Subject: Re: [Bacula-users] VSS
 Windows Backups


 2) I couldn't get far enough for this to be an issue, but I
 believe bacula's handling of Junction Points--it gripes but
 doesn't back them up--will break many things too.  Can
 anybody shed light on whether these will be auto-created by
 the OS if they're missing?

 No idea... yet.

 Junction points are Windows equivalent of soft links. They are used
 for Side-by-side assemblies (SxS). Most people actually come across
 the same issue not because of junction points, but because the WinSxS
 directory starts filling up their hard disk. Windows XP actually also
 had junction points and WinSxS in certain cases, but with Vista,
 Microsoft rearchitected the whole operating system to rely heavily on
 SxS.

 Side-by-side allows you to have multiple versions of the same DLL
 installed at the same time.

 These junction points are not (and cannot be) auto-created, and they
 are critical to Windows Vista/2008 and later. Without the junction
 points, you basically have a huge tangle of files but not a correctly
 working operating system.

Junctions are NTFS reparse points *specifically* for linking 
directories, not individual files.

From Vista and upwards NTFS actually has support for real symlinks (both 
files and directories) in order to provide *some* compatibility with 
POSIX OSes.

Most of the non-fatal FD errors I am seeing on W2K8 are related to 
directory junctions.

 Windows is installed in the C:\Windows directory (by default).
 Traditionally, in Windows, most the files that make up Windows are
 installed into the various subdirectories - most of them into the
 well known System32. With SxS assemblies, all files are installed
 into C:\Windows\WinSxS. The junction points point to these files from
 where older versions of Windows used to have these files.

 When you download one of Microsoft's software updates, they get
 installed into the WinSxS directory, as well, and never overwrite
 anything. Then the respective junction points are updated. That makes
 uninstalling software updates easier.

 Another side effect is that you usually no longer need the Windows
 DVD to install or remove components - all files are simply copied to
 the WinSxS folder, and installing/removing features is as simple as
 adding or removing the correct junction points.

 But Windows probably won't even boot (I haven't tried, but that's my
 guess) without the correct junction points in place - and Windows has
 no way of knowing which ones should be in place. Worse, after a
 restore, the new correct files might be in place, but the junction
 points may still point to the old incorrect ones.



 http://blog.tiensivu.com/aaron/archives/1306-Demystifying-the-WinSxS-directory-in-Windows-XP,-Vista-and-Server-20032008.html


 http://social.answers.microsoft.com/Forums/en-US/w7hardware/thread/450e0396-6ba6-4078-8ca0-b16bf4e22ccf
 (look for the post from Debbie that explains a lot)



 The Metabase is Windows-speak for the IIS config.  Sadly, I
 believe that's not included by default as part of the system state.
 Ditto for the keys needed for it.

 http://support.microsoft.com/kb/269586

 Be aware that this article is about Windows 2000. In Windows 2003,
 ntbackup does back up the Metabase as part of the systemstate (at
 least according to Wikipedia http://en.wikipedia.org/wiki/NTBackup -
 I haven't tested it and couldn't find a Microsoft reference for
 that).

 IIS 7.0 no longer has a metabase in the first place.




-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] ERR=A Required Privilege Is Not Held By The Client...

2010-02-17 Thread Henrik Johansen
Hi,

On 02/17/10 04:04 PM, Josh Fisher wrote:

 On 2/16/2010 2:55 PM, Henrik Johansen wrote:
 Hi,

 On 02/16/10 06:36 PM, Josh Fisher wrote:

 On 2/16/2010 11:34 AM, Paul Binkley wrote:

 Hi All,

 Director is 3.0.2, backing up a 32bit Windows Vista client running 3.0.3.

 After adding onefs=no to the FileSet options in the director, I get the
 error messages during the backup:

 Cannot open C:/Documents and Settings/.../:ERR=A required privilege is 
 not
 held by the client.

 And,

 Could not open directory C:/Documents and Settings/.../:ERR=Access is
 denied

 What do I need to do to allow bacula to backup these directories?



 These are not real directories. They are symlinks pointing to the real
 directory in C:/Users/.. that Microsoft installs by default for backward
 compatibility with older software that (sloppily) assumes user
 directories are in C:/Documents and Settings/... Either ignore the
 messages or exclude the C:/Documents and Settings directory so Bacula
 doesn't attempt to back them up. They are not needed, as the files are
 in the real directory in C:/Users/..

In case you should need them after a restore, the MS Sysinternals Suite 
has tools to recreate those junctions - you have to do that by hand though.



 Yes, I don't think there is any way to back them up. They are not like
 symlinks in, for example, a Linux ext2 filesystem. It is a portion of
 the same filesystem mounted at another mountpoint. In Linux, this is
 called a bind mount. Which leads me to wonder what happens when
 backing up a Linux client that has bind mounts? Does Bacula know it is a
 bind mount? And if so, how does it handle it? Is the bind mount
 remembered or do the files get backed up twice? Are junction points
 handled by the Windows client in the same way bind mounts are handled by
 the Linux client?

AFAIK it is possible to programmatically detect and / or create 
junction points on Windows - if I find the time I will investigate this 
further and see how and whether this could be integrated into the Windows FD.




-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] ERR=A Required Privilege Is Not Held By The Client...

2010-02-16 Thread Henrik Johansen
Hi,

On 02/16/10 06:36 PM, Josh Fisher wrote:

 On 2/16/2010 11:34 AM, Paul Binkley wrote:
 Hi All,

 Director is 3.0.2, backing up a 32bit Windows Vista client running 3.0.3.

 After adding onefs=no to the FileSet options in the director, I get the
 error messages during the backup:

 Cannot open C:/Documents and Settings/.../:ERR=A required privilege is not
 held by the client.

 And,

 Could not open directory C:/Documents and Settings/.../:ERR=Access is
 denied

 What do I need to do to allow bacula to backup these directories?



 These are not real directories. They are symlinks pointing to the real
 directory in C:/Users/.. that Microsoft installs by default for backward
 compatibility with older software that (sloppily) assumes user
 directories are in C:/Documents and Settings/... Either ignore the
 messages or exclude the C:/Documents and Settings directory so Bacula
 doesn't attempt to back them up. They are not needed, as the files are
 in the real directory in C:/Users/..

In case you should need them after a restore, the MS Sysinternals Suite 
has tools to recreate those junctions - you have to do that by hand though.

 Many Thanks,
 Paul





-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] How to backup the catalog

2010-02-11 Thread Henrik Johansen
On 02/11/10 01:08 PM, Dan Langille wrote:
 JanJaap Scholing wrote:
 Thanks for all the input.

 I think a lvm snapshot is the best way to go for us.

 Be aware of the potential risks of backing up a database at the file
 system level. What you may be backing up is a database in an
 inconsistent state (e.g. part way through a transaction).  When you use
 mysqldump, you do not encounter this situation.

Normally, you would quiesce/lock your tables, take a snapshot and unlock, 
which gives you a consistent backup of your DB.

Using snapshots is just much, much faster when dealing with large tables 
/ databases and can be just as safe as using mysqldump.

As an example, the backup of a ~200GB InnoDB database using ZFS 
snapshots is done in ~3 seconds.
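
A minimal sketch of the idea (the dataset name is made up, and it assumes 
your mysql client allows shell escapes in batch mode - the point is that 
the lock must stay held while the snapshot is taken, hence one session):

mysql <<'EOF'
FLUSH TABLES WITH READ LOCK;
\! zfs snapshot tank/mysql@catalog-backup
UNLOCK TABLES;
EOF

The snapshot itself is near-instant, so the lock is held only for a 
moment; the snapshot can then be backed up or sent elsewhere at leisure.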




-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] How to backup the catalog

2010-02-10 Thread Henrik Johansen
Hi,

On 02/10/10 04:52 PM, JanJaap Scholing wrote:
 Hi List,

 One question, how to backup the catalog.

 We are using MySQL for the bacula catalog. This database is
 approximately 46 Gb in size.

Are you using MyISAM or InnoDB ? Backup procedures can vary according to 
the MySQL storage engine being used.

 When we use the backup script make_catalog_backup (supplied with bacula)
 to dump the database, bacula is not usable during the mysqldump process
 due to locked tables.

 In this case it's not possible to make a backup of the catalog every day.
 We don't like an unresponsive Bacula system ;)

 My question is how do you make a good backup of the catalog without
 interrupting the bacula functionality?

For MyISAM I would either deploy filesystem snapshots or replication to 
another server for backups of large MySQL databases.

InnoDB is generally more suited for hot backups -  google should be able 
to provide you with both commercial and / or open source solutions for 
that.

We use filesystem snapshots for MySQL backup (both MyISAM  InnoDB) and 
it works very well.


 Thanks and regards

 Jan Jaap


 


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] Bacula Minimum Hardware Requirements, througput and stored data size records

2010-02-08 Thread Henrik Johansen
Hi,

On 02/ 8/10 09:02 PM, Heitor Medrado de Faria wrote:
 Hello everyone,

 I'm writing an academic paper about Bacula and I was wondering if anyone
 could provide me with the following information:

 1. Bacula minimum hardware requirements.
 2. Bacula maximum throughput record.

Recent tests have shown that we can sustain ~500MB/s from network to 
disk using a single SD.

 3. Bacula maximum stored data size backed up by the same director.

 Feel free to provide me with any other useful information.

 Regards,

 Heitor Faria




-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet



Re: [Bacula-users] Deploying Bacula

2010-02-03 Thread Henrik Johansen
On 02/ 3/10 12:06 PM, FredNF wrote:
 On Mon, 01 Feb 2010 19:32:58 -0500,
 Dan Langille d...@langille.org wrote:

 So, my questions are:
  - How do I define my pools ?

 Define them along the lines of the parameters for Pool.  For
 starters, different retention times require different Pools.

  Then, the pools can be for each kind of server, let's say 3 pools
  (daily, weekly and monthly) per type of backup (system and data?), so 6
  pools.

  So, as I have 3 kinds of server... 18 pools ?

  Is this acceptable or too complicated ?

 It is not so much the size of the backups, it's more the number of
 files in the backup.  Backing up 1 million files each 1KB in size
 will take longer than backing up 1 file of 1 million KB.  That is
 because there will be 1 million times more database accesses (in
 general) for the larger number of files.

 Web servers... So many many many little files.

 So if you have 600 servers to backup, that's at least 600 jobs.
 That's a high performance situation, but I'm sure it's not the
 largest Bacula installation around.  What hardware do you have
 available for your database server?  I would recommend making
 dedicated just to the Bacula server, give it fast HDD, and lots of
 RAM.

The bulk of Bacula's DB operations are purely disk IOPS bound so I would 
argue that IOPS is way more important than RAM.

  With the other advice I've had, I'm planning to have a dual Xeon Nehalem,
  with 8 or 12 GB of RAM, and four 300 GB SAS disks in RAID 10. Not sure
  about the OS; I'm torn between FreeBSD and Gentoo.

  But, if someone here has a similar setup, I'm ready to hear their
  advice and tips about my configuration.

We are currently planning a large Bacula deployment (~1k machines) so I 
have been facing many of the same challenges.

Regardless of whatever database you choose you'll need enough disk IOPS 
to service the DB and I don't think that 4 x 300 GB SAS are sufficient.

A 4 disk RAID10 will give you the write IOPS equivalent to 2 disks and 
the DB is most likely going to do synchronous random writes which in 
turn is 100% disk IOPS bound.
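
To put rough numbers on that: a 15k RPM SAS drive is typically rated 
somewhere around 150-200 random write IOPS, so a 4 disk RAID10 (two 
mirrored pairs) tops out around 300-400 sustained random writes per 
second. Ballpark figures only - check the spec sheets for your drives.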

Find the tech specs of the disks you are using - they should give you an 
indication of how many random write IOPS they can handle.

Additionally, you should align your FS to the same blocksize as your 
database - 8K for PostgreSQL if I remember correctly. If you are using a 
fixed blocksize FS where the blocksize is lower than the DB blocksize 
you could end up in a situation where one DB operation causes 2 or 
more disk IOPS.
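
With ZFS this is a one-liner at dataset creation time (a sketch - the 
dataset name is made up, and InnoDB would want 16K instead):

zfs create -o recordsize=8k tank/bacula-db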

We backup ~35TB each week in a 3 week rotation so we just have to scale 
out in order to meet our demands and we are planning to go multi-DIR, 
multi-SD with a couple of very hefty MySQL servers to service them.

Directors will run Linux and both our SD's and MySQL servers will run 
Solaris.

The only place where we scale up instead of out are our SD's - currently 
our 3 SD nodes have access to 300+ disks and 2 dedicated 10 Gbit fiber 
links.

 I'm freaking out about the config files :) They'll be really huge I
 think.

They don't have to - just split stuff into manageable pieces. We keep 
one file per client which gets included into the bacula-dir configuration.

Use templating wherever you can.

 Regards,

 Fred.





Re: [Bacula-users] Deploying Bacula

2010-02-03 Thread Henrik Johansen
On 02/ 3/10 04:23 PM, FredNF wrote:
 On Wed, 03 Feb 2010 13:18:37 +0100,
 Henrik Johansen <hen...@myunix.dk> wrote:

 The bulk of Bacula's DB operations are purely disk IOPS bound so I
 would argue that IOPS is way more important than RAM.

 With the other advice I've received, I'm planning to have a dual Xeon
 Nehalem with 8 or 12 GB of RAM, and four 300 GB SAS disks in RAID
 10. Not sure about the OS; I'm balancing between FreeBSD and Gentoo.

 But if someone here has a similar setup, I'm ready to hear their
 advice and tips about my configuration.

 We are currently planning a large Bacula deployment (~1k machines) so
 I have been facing many of the same challenges.

 Regardless of which database you choose, you'll need enough disk
 IOPS to service the DB, and I don't think that 4 x 300 GB SAS disks
 are sufficient.

 A 4-disk RAID10 will give you the write IOPS equivalent of 2 disks,
 and the DB is most likely going to do synchronous random writes,
 which are 100% disk IOPS bound.

 Find the tech specs of the disks you are using - they should give you
 an indication of how many random write IOPS they can handle.

 I will do some bonnie++ tests :)
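
Bear in mind that bonnie++ mostly exercises sequential throughput and 
file creation / deletion - for the catalog's synchronous random write 
pattern, a random-IO benchmark (iozone's random read/write test with 
synchronous writes enabled, for instance) is probably closer to the 
real workload.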

 Additionally, you should align your FS to the same blocksize as your
 database - 8K for PostgreSQL if I remember correctly. If you are
 using a fixed blocksize FS where the blocksize is lower than the DB
 blocksize you could end up in a situation where one DB operation
 causes 2 or more disk IOPS.

 Right, we hadn't considered that problem on our side. The filesystem
 used for the director can be:

   - FFS or ZFS if FreeBSD (ZFS is nicely supported on FreeBSD 8)
   - ext4 if Gentoo or another Linux distro

 In the same way, what FS do you on the list prefer for storage? We
 don't use tape, only disk storage. The first one to mention NTFS will
 have to dodge my curses for generations.

If ZFS on FreeBSD is as reliable as it is in Solaris then I would go 
with ZFS.


 We back up ~35TB each week in a 3-week rotation, so we simply have to
 scale out in order to meet our demands; we are planning to go
 multi-DIR, multi-SD with a couple of very hefty MySQL servers to
 service them.

 Directors will run Linux and both our SD's and MySQL servers will run
 Solaris.

 So, you plan to have dedicated database servers - a lightweight
 director but huge database servers?

Correct.

 And, with Solaris on the SD's, you'll surely use ZFS?

Yes. I don't trust any other FS with that amount of data.

 The only place where we scale up instead of out is our SD's -
 currently our 3 SD nodes have access to 300+ disks and 2 dedicated 10
 Gbit fiber links.

 Wow... That's really impressive. I'd like to have enough money to
 build such a system. But that's not the case, so we'll use hand-made
 NAS boxes with plain inexpensive SATA disks ;)

So are we - all done using off-the-shelf x86 hardware.


 I'm freaking out about the config files :) They'll be really huge, I
 think.

 They don't have to be - just split stuff into manageable pieces. We
 keep one file per client, which gets included into the bacula-dir
 configuration.

 I was looking at includes. But if I read the documentation correctly,
 I can't specify a directory for includes - I need to give the full
 path for each file, right?

It can be as simple as including the following in your bacula-dir config :

@|sh -c 'for f in /etc/bacula/clients/*.conf ; do echo @${f} ; done'
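
The @| construct tells the Director to run the command and parse its 
output as configuration, so the loop above emits one @/path/to/client.conf 
include directive per file found under /etc/bacula/clients/.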

 Use templating wherever you can.

 The developers are working on automating the generation of
 configuration files right after a new install of a dedicated server :)

 Regards,

 Fred.





[Bacula-users] Grasping volume management.

2009-05-31 Thread Henrik Johansen
Hi list,

I have been testing bacula for a couple of days now but I am having some
problems grasping the volume management part of it.

What I would like to achieve is a configuration where every client has
its own pool with its own volumes. Furthermore, I would like to keep 21
volumes (one per day) for each client and automagically reuse (recycle)
those volumes at the end of the 3 week retention period. 

I have come up with this so far : 

Pool {
   Name = test01
   Pool Type = Backup
   AutoPrune = yes
   LabelFormat = $Client-
   Maximum Volume Jobs = 1       # mark a volume as Used after a single job
   Maximum Volumes = 21          # cap the pool at 21 volumes (3 weeks x 7 days)
   Volume Use Duration = 23h     # stop appending to a volume 23h after first write
   Recycle = yes
   Recycle Oldest Volume = yes   # when no appendable volume is left, prune and reuse the oldest
}

Client {
   Name = test01
   Address = x.x.x.x
   FDPort = 9102
   Catalog = MyCatalog
   Password = xx
   File Retention = 21d          # prune file records from the catalog after 3 weeks
   Job Retention = 21d           # prune job records after 3 weeks
   AutoPrune = yes               # apply the retention periods automatically
}

But I can't quite figure out if that's going to do what I want ...

Would anybody care to comment ?

-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@myunix.dk





[Bacula-users] Large scale disk based backups

2009-04-27 Thread Henrik Johansen
Hi list,

I have some questions regarding disk-based backups that I hope some of you
could answer for me. I am currently managing 100+ UNIX systems that are
backed up using a variety of scripts, which I would really like to replace
with something solid, flexible, scalable and easy to manage.

Bacula seems to be heavily centralized around tape, and my head hurts a bit
from trying to figure out how to do efficient disk-based backups for 100+
clients without shooting myself in the foot :)

All our UNIX boxes follow a 3-week rotation with 1 full backup every week
and incrementals in between, but I would like to be able to change this on
a per-client basis.

My plan is to have a redundant pair of directors and 3 storage daemons
running off 3 separate Solaris ZFS backends.

What combo of pools / volumes do people recommend? A pool per client? A
pool per backup type, e.g. full, diff, incr? Separate volumes per client
perhaps?

-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@myunix.dk





Re: [Bacula-users] Large scale disk based backups

2009-04-27 Thread Henrik Johansen
Dan Langille wrote:

Henrik Johansen wrote:
 Hi list,

 I have some questions regarding disk-based backups that I hope some of
 you could answer for me. I am currently managing 100+ UNIX systems
 that are backed up using a variety of scripts, which I would really
 like to replace with something solid, flexible, scalable and easy to
 manage.

 Bacula seems to be heavily centralized around tape, and my head hurts
 a bit from trying to figure out how to do efficient disk-based backups
 for 100+ clients without shooting myself in the foot :)

Why do you conclude it is highly centralized around tape?

I am not concluding anything. When going through the manual I found that
a lot was written about tape and less about disk, hence it *seems* more
about tape than disk - at least to me.

 All our UNIX boxes follow a 3-week rotation with 1 full backup every
 week and incrementals in between, but I would like to be able to
 change this on a per-client basis.

You can.

 My plan is to have a redundant pair of directors and 3 storage
 daemons running off 3 separate Solaris ZFS backends.

What will you do with the two directors?  Will both be active?

Depends. I will start with an active / passive configuration and go from there.

 What combo of pools / volumes do people recommend? A pool per client?
 A pool per backup type, e.g. full, diff, incr? Separate volumes per
 client perhaps?

It depends upon your goals.  Do you want to do concurrent backups?  How
much data will you be storing at a given time?

Yes, concurrent backups are a must. Well, our UNIX boxes eat ~2 TB each week.
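
For what it's worth, concurrency has to be allowed in every resource 
involved - here is a minimal sketch of the Director-side directives (the 
names and limits below are made up, and only the concurrency-related 
settings are shown):

# bacula-dir.conf - hypothetical excerpt
Director {
  Name = backup-dir                 # made-up name
  Maximum Concurrent Jobs = 20      # jobs the Director will run at once
}

Storage {
  Name = zfs-sd1                    # made-up name
  Maximum Concurrent Jobs = 10      # simultaneous jobs sent to this SD
}

The Storage daemon's own bacula-sd.conf (and the clients' FileDaemon 
resources) must allow a matching number of concurrent jobs as well.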

--
Dan Langille

BSDCan - The Technical BSD Conference : http://www.bsdcan.org/
PGCon  - The PostgreSQL Conference: http://www.pgcon.org/

-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@myunix.dk


