Re: [Bacula-users] Catalogue snapshot utility : any interest?

2010-10-04 Thread Rory Campbell-Lange
On 04/10/10, James Harper (james.har...@bendigoit.com.au) wrote:
  On 04/10/10, James Harper (james.har...@bendigoit.com.au) wrote:
  A full pg_dump of the catalogue is 2.8G. The output of the catalogue
  snapshot for job 60 is 1.6G. Naturally, the full pg_dump of the
  whole database will continue to grow over time.

  I'm a little surprised that the proportion of job 60 to the whole is
  so high. Job 60 is similar to job 1, but I don't expect they share
  much information. I'll have to look into that.
 
 If jobid 60 and job 1 were the same backup job then a lot of the
 information may be shared in the filename table. Even if they are
 backups of similar servers, they will share a lot of filename data,
 and that filename data has to come with the extracted catalogue, so you
 might not be saving that much.

My backups are all full backups. Also, the key file table in Postgres
(which links each job to its path and filename entries) is job specific,
so I'm not sure where the duplication is coming from.

Regards
Rory

                            Table "public.file"
   Column   |  Type   |                       Modifiers
------------+---------+--------------------------------------------------------
 fileid     | bigint  | not null default nextval('file_fileid_seq'::regclass)
 fileindex  | integer | not null default 0
 jobid      | integer | not null
 pathid     | integer | not null
 filenameid | integer | not null
 markid     | integer | not null default 0
 lstat      | text    | not null
 md5        | text    | not null
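
A quick way to see how much filename data two jobs actually share (a sketch only, assuming the stock Bacula PostgreSQL schema shown above and the jobids from this thread) is to count filename ids referenced by both jobs; any shared rows have to be included in a per-job extract, which is where the apparent duplication comes from:

  # Hypothetical check: how many filename rows are referenced by both job 1 and job 60.
  psql -d bacula -c "
    SELECT count(DISTINCT f1.filenameid)
    FROM file f1
    WHERE f1.jobid = 1
      AND EXISTS (SELECT 1 FROM file f2
                  WHERE f2.jobid = 60
                    AND f2.filenameid = f1.filenameid);"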


-- 
Rory Campbell-Lange
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928



[Bacula-users] Vol Usage

2010-10-04 Thread Bruno Gomes da Silva
Hello ..

I wonder whether a particular parameter is needed to show volume usage
(Status Media -> Vol Usage) in the Bacula Admin Tool (bat). The volume
usage percentage is always 0.00%. Here is the Pool configuration:


Pool {
  Name = Server1Diario
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 7 days
  Volume Use Duration = 7 days
  Recycle Oldest Volume = yes
  Maximum Volume Bytes = 10G
  Purge Oldest Volume = yes
}


Thanks.


Re: [Bacula-users] Catalogue snapshot utility : any interest?

2010-10-04 Thread James Harper
 On 04/10/10, James Harper (james.har...@bendigoit.com.au) wrote:
  
   I have developed a catalogue snapshot facility in python to snapshot
   one job's catalogue and dump it to disk.
 ...
  How much smaller is the catalogue subset vs the full catalogue?
 
 Good question.
 
 I'm not able to answer that question fully at present as I don't have
 enough jobs in my current database to know.
 
 My current database has the following jobs in it:
 
  jobid | jobfiles | jobgigs
 -------+----------+---------
      1 |  7706717 | 6833.90
      8 |  3965507 | 4480.83
      9 |  1273459 |  129.87
     50 |   646336 |  512.07
     60 |  7845561 | 6990.67
 
 A full pg_dump of the catalogue is 2.8G. The output of the catalogue
 snapshot for job 60 is 1.6G. Naturally, the full pg_dump of the whole
 database will continue to grow over time.
 
 (The job 60 catalogue file compresses to about 300MB with bzip2 -9.)
 
 I'm a little surprised that the proportion of job 60 to the whole is so
 high. Job 60 is similar to job 1, but I don't expect they share much
 information. I'll have to look into that.
 

If jobid 60 and job 1 were the same backup job then a lot of the
information may be shared in the filename table. Even if they are
backups of similar servers, they will share a lot of filename data,
and that filename data has to come with the extracted catalogue, so you
might not be saving that much.

James




[Bacula-users] 5.0.3 psql indexes after upgrade

2010-10-04 Thread Ralf Gross
Hi,

I just updated from 3.0.3 to 5.0.3. I know that there have been problems with
the update_postgresql_tables script. Here are my indexes:


bacula=# select * from pg_indexes where tablename='file';
 schemaname | tablename |   indexname    | tablespace |                           indexdef
------------+-----------+----------------+------------+------------------------------------------------------------------------------
 public     | file      | file_pkey      |            | CREATE UNIQUE INDEX file_pkey ON file USING btree (fileid)
 public     | file      | file_jobid_idx |            | CREATE INDEX file_jobid_idx ON file USING btree (jobid)
 public     | file      | file_jpfid_idx |            | CREATE INDEX file_jpfid_idx ON file USING btree (jobid, pathid, filenameid)
(3 rows)


Can anyone confirm that these indexes are correct? Looking at the manual, they
look OK to me.

http://www.bacula.org/5.0.x-manuals/en/main/main/Catalog_Maintenance.html#SECTION004591000


Ralf



Re: [Bacula-users] Restore from a file listing

2010-10-04 Thread Graham Keeling
On Mon, Oct 04, 2010 at 12:00:12PM +0100, Rory Campbell-Lange wrote:
 We provide clients with a Bacula backup-to-tape service, which is
 complementary to our offsite backup services.
 
 As part of the backup-to-tape service we wish to audit each tape by
 checking that we can retrieve several files from each tape in the backup
 set.
 
 Our audit programme (a python script) produces a listing of files
 suitable for the audit in the format below. The listing is presently
 made to show the tape name, filename and base64 md5sum. Many of the file
 names have odd characters in them.
 
 At present I am doing a restore job by using option 3 and entering the
 job id as all our backups are full backups. Then I go into the restore
 console.
 
 For each file I want to retrieve I find I have to walk the directory
 tree to mark the file.
     mark /survey/GTC/02_SURVEY/02 Graphics/A Exisiting Site Images/GTC_40_ContactSheet-002.pdf
 doesn't seem to work.

I don't know if this is helpful to you, but the way I do this (with a script)
is:
path=/your/path/to/file.abc
dir=${path%/*}
file=${path##*/}
# (then, in bconsole)
cd $dir
mark $file

Bear in mind, last time I checked, bconsole has a very eccentric way of
quoting things. And the 'cd' quoting is different to the 'mark' quoting.

For 'cd', you need to quote '\' and '"' with '\'.

For 'mark':
'\' needs '\\\'.
'"' needs '\"'.
'*', '?' and '[' need '\\'.



 Is there a simple and accurate way of providing a list of files of this
 sort to Bacula in order to mark them and proceed with a restore job?
 
 Advice gratefully received.
 
 Regards
 Rory
 
 
 ZA-09 : /survey/GTC/02_SURVEY/02 Graphics/A Exisiting Site 
 Images/GTC_40_ContactSheet-002.pdf  : BxtAuFFc/f1ad9KAu6QcTA
 ZA-09 : /survey/GTC/02_SURVEY/01 Draw/05 
 3d/_Mark/03_renders/elements/viewno02_VRay_RenderID.tif   : 
 +uXYLMX+dzF3tagX1HLxGA
 ZA-09 : /survey/GTC/02_SURVEY/01 Draw/05 
 3d/_Mark/03_renders/elements/viewno03_VRay_SampleRate.tif : 
 8mV7L15K2oD8Myl3RHGH1g
 ZA-09 : /survey/GTC/02_SURVEY/02 Graphics/A Exisiting Site Images/080605_site 
 visit/IMG_0153.JPG   : Bffxysn05Jf835pjn8EhWg
 ZA-09 : /survey/GTC/02_SURVEY/01 Draw/09 Details/A_DE_L.dgn   
  : 8n6FPZ8LUOvd2j0yhpe5Jw
 ZA-09 : /survey/GTC/02_SURVEY/02 Graphics/A Exisit Images/00_site 
 visit/IMG_0207.JPG   : Lc1l+Npa3fAWR7dG7UNtbw
 ZA-10 : /survey/USA2/Design/Graphics/99_SCM/Material/Elevation.pdf
  : cO4XMEdlCtdI7g9wL5B/Dg
 ZA-10 : /survey/USA2/Design/Graphics/99_SCM/Material/added to binder/ 
 this.pdf: KSNJEaQHmW0+xvrqjFFmog
 ZA-10 : /survey/USA2/Design/Graphics/99_A 
 Visit_mplan/Material/uerplan_A1_130208.indd  : kKBqVl5Dh9elOe1HeXqMDw
 ZA-10 : 
 /survey/USA2/Design/Graphics/99_B/material/Coloured/WR_MultiBay_with_Stair.pdf
  : Rk43I9cB+hkWAd+BHNrzkQ
 ZA-10 : /survey/USA2/Design/Graphics/99_C/Material/corner sketch.pdf  
  : 723kZeU7rY5l7620SWwS0w
 ZA-10 : /survey/USA2/Design/Graphics/99_phased/Material/Finished 
 JPEGs/phase 5.jpg : 5R+Mvs0JsQW+RtWUo6SIXg
 ZA-10 : /survey/USA2/Design/Graphics/99_SCM/Material/for 
 binder/elevations 4dec08.pdf  : G2AOwxxYs8PnuJuxpdJY3A
 ZA-10 : 
 /survey/USA2/Design/Graphics/99_SCM/Material/east_flatroof_sketch.pdf 
  : 8cOmdPhe+Hgkkxb60l+0og
 ZA-10 : /survey/USA2/Design/Graphics/99_SCM/Material/A_nnn_ 
 East-West_OR.pdf   : XrGzU07JjrWevaxWDUjMNQ
 ZA-10 : /survey/USA2/Design/Graphics/99_meeting/A___Typical_bay.pdf   
  : vM0DRH5xevICHlVxKg3elg
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN_Scheme updated/PDF/_LongSect250.tiff 
  : TySX9plPTq4uIy05iwY5fA
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN_DWG/IN/1527.dwg  
  : ew1rXI6UJoLmizY4wvIUrw
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0/DGN/0005.dgn
  : n040KyL8cX9crgpY9asLfg
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0/ff=/0004.dgn
  : V2jTeupRp3Yv1qmpteDnWw
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_Construction issue/15L9.hpgl
  : 8fOJdGO5SwaOMkxNi43kxg
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_ZAN/A_ZAN.pdf   
  : v+oAaYnWv+1Bjq5ezWgmlg
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_arch/1000_.000  
  : ODbF++krYYWMLsBmI/GgRQ
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_Engage  Plan/copy/3002.dwg 
  : O+haqF++WUp519AnX+q1uQ
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_DWGs/1515.bak   
  : Ca7aA7v95BNv+rK6HoyvYA
 ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_Construction issue southblock/Binder1.pdf   
  : K/iqrW28Z+Lemk4osfrnCQ
 
 -- 
 Rory Campbell-Lange
 r...@campbell-lange.net
 
 Campbell-Lange Workshop
 

[Bacula-users] Restore from a file listing

2010-10-04 Thread Rory Campbell-Lange
We provide clients with a Bacula backup-to-tape service, which is
complementary to our offsite backup services.

As part of the backup-to-tape service we wish to audit each tape by
checking that we can retrieve several files from each tape in the backup
set.

Our audit programme (a python script) produces a listing of files
suitable for the audit in the format below. The listing is presently
made to show the tape name, filename and base64 md5sum. Many of the file
names have odd characters in them.

At present I am doing a restore job by using option 3 and entering the
job id as all our backups are full backups. Then I go into the restore
console.

For each file I want to retrieve I find I have to walk the directory
tree to mark the file.
    mark /survey/GTC/02_SURVEY/02 Graphics/A Exisiting Site Images/GTC_40_ContactSheet-002.pdf
doesn't seem to work.

Is there a simple and accurate way of providing a list of files of this
sort to Bacula in order to mark them and proceed with a restore job?

Advice gratefully received.

Regards
Rory


ZA-09 : /survey/GTC/02_SURVEY/02 Graphics/A Exisiting Site 
Images/GTC_40_ContactSheet-002.pdf  : BxtAuFFc/f1ad9KAu6QcTA
ZA-09 : /survey/GTC/02_SURVEY/01 Draw/05 
3d/_Mark/03_renders/elements/viewno02_VRay_RenderID.tif   : 
+uXYLMX+dzF3tagX1HLxGA
ZA-09 : /survey/GTC/02_SURVEY/01 Draw/05 
3d/_Mark/03_renders/elements/viewno03_VRay_SampleRate.tif : 
8mV7L15K2oD8Myl3RHGH1g
ZA-09 : /survey/GTC/02_SURVEY/02 Graphics/A Exisiting Site Images/080605_site 
visit/IMG_0153.JPG   : Bffxysn05Jf835pjn8EhWg
ZA-09 : /survey/GTC/02_SURVEY/01 Draw/09 Details/A_DE_L.dgn 
   : 8n6FPZ8LUOvd2j0yhpe5Jw
ZA-09 : /survey/GTC/02_SURVEY/02 Graphics/A Exisit Images/00_site 
visit/IMG_0207.JPG   : Lc1l+Npa3fAWR7dG7UNtbw
ZA-10 : /survey/USA2/Design/Graphics/99_SCM/Material/Elevation.pdf  
   : cO4XMEdlCtdI7g9wL5B/Dg
ZA-10 : /survey/USA2/Design/Graphics/99_SCM/Material/added to binder/ 
this.pdf: KSNJEaQHmW0+xvrqjFFmog
ZA-10 : /survey/USA2/Design/Graphics/99_A 
Visit_mplan/Material/uerplan_A1_130208.indd  : kKBqVl5Dh9elOe1HeXqMDw
ZA-10 : 
/survey/USA2/Design/Graphics/99_B/material/Coloured/WR_MultiBay_with_Stair.pdf
 : Rk43I9cB+hkWAd+BHNrzkQ
ZA-10 : /survey/USA2/Design/Graphics/99_C/Material/corner sketch.pdf
   : 723kZeU7rY5l7620SWwS0w
ZA-10 : /survey/USA2/Design/Graphics/99_phased/Material/Finished 
JPEGs/phase 5.jpg : 5R+Mvs0JsQW+RtWUo6SIXg
ZA-10 : /survey/USA2/Design/Graphics/99_SCM/Material/for binder/elevations 
4dec08.pdf  : G2AOwxxYs8PnuJuxpdJY3A
ZA-10 : 
/survey/USA2/Design/Graphics/99_SCM/Material/east_flatroof_sketch.pdf   
   : 8cOmdPhe+Hgkkxb60l+0og
ZA-10 : /survey/USA2/Design/Graphics/99_SCM/Material/A_nnn_ 
East-West_OR.pdf   : XrGzU07JjrWevaxWDUjMNQ
ZA-10 : /survey/USA2/Design/Graphics/99_meeting/A___Typical_bay.pdf 
   : vM0DRH5xevICHlVxKg3elg
ZA-11 : /issue/ZAN/OUT/ISSUES/IN_Scheme updated/PDF/_LongSect250.tiff   
   : TySX9plPTq4uIy05iwY5fA
ZA-11 : /issue/ZAN/OUT/ISSUES/IN_DWG/IN/1527.dwg
   : ew1rXI6UJoLmizY4wvIUrw
ZA-11 : /issue/ZAN/OUT/ISSUES/IN0/DGN/0005.dgn  
   : n040KyL8cX9crgpY9asLfg
ZA-11 : /issue/ZAN/OUT/ISSUES/IN0/ff=/0004.dgn  
   : V2jTeupRp3Yv1qmpteDnWw
ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_Construction issue/15L9.hpgl  
   : 8fOJdGO5SwaOMkxNi43kxg
ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_ZAN/A_ZAN.pdf 
   : v+oAaYnWv+1Bjq5ezWgmlg
ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_arch/1000_.000
   : ODbF++krYYWMLsBmI/GgRQ
ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_Engage  Plan/copy/3002.dwg   
   : O+haqF++WUp519AnX+q1uQ
ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_DWGs/1515.bak 
   : Ca7aA7v95BNv+rK6HoyvYA
ZA-11 : /issue/ZAN/OUT/ISSUES/IN0_Construction issue southblock/Binder1.pdf 
   : K/iqrW28Z+Lemk4osfrnCQ

-- 
Rory Campbell-Lange
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928


Re: [Bacula-users] Catalogue snapshot utility : any interest?

2010-10-04 Thread Phil Stracchino
On 10/04/10 07:22, Rory Campbell-Lange wrote:
 I have developed a catalogue snapshot facility in python to snapshot one
 job's catalogue and dump it to disk. The snapshot provides a bacula
 database schema file, a database dump of the job's data, and a file
 listing of files showing info such as the tape number, path, file, md5
 and lstat.
 
 We intend to include the catalogue in compressed format on CDs
 accompanying tape sets to assist our clients retrieve data in future if
 required.
 
 At present the system works only for Postgresql, and for our setup which
 has the director, storage and file daemons on the same Linux server.
 
 How it works:
 
  * A temporary schema is made in postgres, named "job_%d" % (jobid)
  * Relevant data is selected from the public schema to the temporary
    schema
  * The file listing is output
 * The public schema is dumped
 * The temporary schema is dumped
 * The temporary schema is removed
 
 I'm considering making an sqlite database from the temporary schema to
 obviate the need for the public schema file and file listing.
 
 This is fairly simple stuff, but if this functionality is useful to you,
 do let me know and I can share the programme with you.


This sounds like a useful tool for any Bacula site that's managing
Bacula backups for a large number of clients.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Vol Usage

2010-10-04 Thread John Drescher
2010/10/4 Bruno Gomes da Silva kidbro...@gmail.com:
 Hello ..

 I wonder if any parameter is particularly used to show the amount of volume
 (STATUS MEDIA - VOL USAGE) in Bacula Admin Tool (bat). The percentage of
 volume usage is always 0.00%. Here's the conf Pool:


 Pool {
 Name = Server1Diario
 Pool Type = Backup
 Recycle = yes
 AutoPrune = yes
 Volume Retention = 7 days
 Volume Use Duration = 7 days
 Recycle Oldest Volume = yes
 Maximum Volume Bytes = 10G
 Purge Oldest Volume = yes
 }



Did you change the above settings after the volumes already existed in Bacula?
Remember that the settings in bacula-dir.conf are a template for how
Bacula will create new volumes. Changing the pool settings does not
affect currently existing volumes.
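
One way to see what the existing volumes actually carry (a sketch; the pool name comes from the configuration above, while the volume name is only a hypothetical example) is to query them non-interactively and compare the recorded limits:

  # List the volumes in the pool:
  echo "list volumes pool=Server1Diario" | bconsole
  # Full media record of one volume (including MaxVolBytes):
  echo "llist volume=Server1Diario-0001" | bconsole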

John



Re: [Bacula-users] Catalogue snapshot utility : any interest?

2010-10-04 Thread Rory Campbell-Lange
On 04/10/10, James Harper (james.har...@bendigoit.com.au) wrote:
  
  I have developed a catalogue snapshot facility in python to snapshot
  one job's catalogue and dump it to disk. 
...
 How much smaller is the catalogue subset vs the full catalogue?

Good question.

I'm not able to answer that question fully at present as I don't have enough
jobs in my current database to know.

My current database has the following jobs in it:

 jobid | jobfiles | jobgigs
-------+----------+---------
     1 |  7706717 | 6833.90
     8 |  3965507 | 4480.83
     9 |  1273459 |  129.87
    50 |   646336 |  512.07
    60 |  7845561 | 6990.67

A full pg_dump of the catalogue is 2.8G. The output of the catalogue snapshot
for job 60 is 1.6G. Naturally, the full pg_dump of the whole database will
continue to grow over time.

(The job 60 catalogue file compresses to about 300MB with bzip2 -9.)

I'm a little surprised that the proportion of job 60 to the whole is so high.
Job 60 is similar to job 1, but I don't expect they share much information.
I'll have to look into that.

Regards
Rory

-- 
Rory Campbell-Lange
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928



[Bacula-users] Catalogue snapshot utility : any interest?

2010-10-04 Thread Rory Campbell-Lange
I have developed a catalogue snapshot facility in python to snapshot one
job's catalogue and dump it to disk. The snapshot provides a bacula
database schema file, a database dump of the job's data, and a file
listing of files showing info such as the tape number, path, file, md5
and lstat.

We intend to include the catalogue in compressed format on CDs
accompanying tape sets to assist our clients in retrieving data in
future if required.

At present the system works only with PostgreSQL, and for our setup, which
has the director, storage and file daemons on the same Linux server.

How it works:

* A temporary schema is made in postgres, named "job_%d" % (jobid)
* Relevant data is selected from the public schema to the temporary
  schema
* The file listing is output
* The public schema is dumped
* The temporary schema is dumped
* The temporary schema is removed
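
The same steps can be approximated by hand with psql and pg_dump (a rough sketch of the procedure just described, not the actual script; the database name, jobid and the table subset shown are illustrative assumptions):

  #!/bin/bash
  # Sketch: extract one job's catalogue subset into its own schema and dump it.
  DB=bacula        # catalogue database (assumption)
  JOBID=60         # job to snapshot (assumption)
  SCHEMA="job_${JOBID}"

  psql -d "$DB" -c "CREATE SCHEMA ${SCHEMA};"
  # per-job rows
  psql -d "$DB" -c "CREATE TABLE ${SCHEMA}.job  AS SELECT * FROM public.job  WHERE jobid = ${JOBID};"
  psql -d "$DB" -c "CREATE TABLE ${SCHEMA}.file AS SELECT * FROM public.file WHERE jobid = ${JOBID};"
  # shared rows that the job's file records point at
  psql -d "$DB" -c "CREATE TABLE ${SCHEMA}.path     AS SELECT * FROM public.path     WHERE pathid     IN (SELECT pathid     FROM ${SCHEMA}.file);"
  psql -d "$DB" -c "CREATE TABLE ${SCHEMA}.filename AS SELECT * FROM public.filename WHERE filenameid IN (SELECT filenameid FROM ${SCHEMA}.file);"

  # dump just that schema, then drop it
  pg_dump -n "${SCHEMA}" "$DB" > "catalogue_job_${JOBID}.sql"
  psql -d "$DB" -c "DROP SCHEMA ${SCHEMA} CASCADE;"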

I'm considering making an SQLite database from the temporary schema to
obviate the need for the public schema file and the file listing.

This is fairly simple stuff, but if this functionality is useful to you,
do let me know and I can share the programme with you.

Regards
Rory

-- 
Rory Campbell-Lange
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928



Re: [Bacula-users] Catalogue snapshot utility : any interest?

2010-10-04 Thread Rory Campbell-Lange
On 04/10/10, Phil Stracchino (ala...@metrocast.net) wrote:
 On 10/04/10 07:22, Rory Campbell-Lange wrote:
  I have developed a catalogue snapshot facility in python to snapshot one
  job's catalogue and dump it to disk. The snapshot provides a bacula
  database schema file, a database dump of the job's data, and a file
  listing of files showing info such as the tape number, path, file, md5
  and lstat.
...
  This is fairly simple stuff, but if this functionality is useful to you,
  do let me know and I can share the programme with you.
 
 This sounds like a useful tool for any Bacula site that's managing
 Bacula backups for a large number of clients.

Hi Phil

I'd be delighted if you could take a look at the Python script and give
me your comments.

It is part of the small .tgz archive here:
http://campbell-lange.net/media/files/bacula_tools_01.tgz

Please **do not** run it on a production PostgreSQL database.

Note that big backups (one with more than 7 million files, say) may take
up to 45 minutes to process.
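
If you want to try it without touching the live catalogue, one option (a sketch; the database names here are only examples) is to clone the catalogue into a scratch database first and point the script at that:

  # Copy the production catalogue into a throwaway database for testing.
  createdb bacula_test
  pg_dump bacula | psql -q -d bacula_test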

If you are able to get the system to operate and you think it is useful
I'll stick the script on Bitbucket.

Regards
Rory

-- 
Rory Campbell-Lange
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928



Re: [Bacula-users] Restore from a file listing

2010-10-04 Thread Rory Campbell-Lange
On 04/10/10, Graham Keeling (gra...@equiinet.com) wrote:
 On Mon, Oct 04, 2010 at 12:00:12PM +0100, Rory Campbell-Lange wrote:
...
  At present I am doing a restore job by using option 3 and entering the
  job id as all our backups are full backups. Then I go into the restore
  console.
  
  For each file I want to retrieve I find I have to walk the directory
  tree to mark the file. 
  mark /survey/GTC/02_SURVEY/02 Graphics/A Exisiting Site 
  Images/GTC_40_ContactSheet-002.pdf 
  doesn't seem to work.
 
 I don't know if this is helpful to you, but the way I do this (with a script)
 is:
 path=/your/path/to/file.abc
 dir=${path%/*}
 file=${path##*/}
 (then in bconsole)
 cd $dir
 mark $file
 
 Bear in mind, last time I checked, bconsole has a very eccentric way of
 quoting things. And the 'cd' quoting is different to the 'mark' quoting.
 
 For 'cd', you need to quote '\' and '"' with '\'.
 
 For 'mark':
 '\' needs '\\\'.
 '"' needs '\"'.
 '*', '?' and '[' need '\\'.

Thanks very much for your notes, Graham. Aaargh! I'll give those quoting
patterns a go.

It would be fantastically useful to be able to pipe file names to a
Bacula restore process.
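
For what it's worth, since bconsole reads commands from standard input, one rough approach (a sketch only; the command sequence for entering the restore tree, the jobid and the file list format are assumptions, and Graham's escaping rules would still need to be applied to the paths) is to generate the cd/mark commands from the audit listing and feed the whole session to bconsole:

  #!/bin/bash
  # files.txt: one absolute path per line, as produced by the audit script (assumption).
  JOBID=60   # example jobid (assumption)
  {
    echo "restore jobid=${JOBID}"
    while IFS= read -r path; do
      echo "cd ${path%/*}"      # directory part
      echo "mark ${path##*/}"   # filename part
    done < files.txt
    echo "done"
    echo "yes"
  } | bconsole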

-- 
Rory Campbell-Lange
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928



Re: [Bacula-users] Problem with webacula

2010-10-04 Thread Yuri Timofeev
Hi

docs/INSTALL:
System Requirements:
- Bacula 5.0 or later

2010/9/29 Daniel beas beasdan...@hotmail.com:
 Hi to all.
 I'm trying to install webacula but after doing all the config I get the
 following error:

 Fatal error:  Uncaught exception 'Zend_Exception' with message 'Version
 error for Catalog database (wanted 12, got 11) ' in
 /var/www/webacula/html/index.php:183
 Stack trace:
 #0 {main}
   thrown in /var/www/webacula/html/index.php on line 183

 I should mention that when I run the script to check the system requirements,
 everything comes back OK (except PostgreSQL, because I'm running Bacula with
 MySQL):
 
 #!/usr/bin/php
 Check System Requirements...
 Current MySQL version = 5.0.45 OK
 Current PostgreSQL version = Warning. Upgrade your PostgreSQL version to 8.0.0 or later
 Current Sqlite version = 3.4.2 OK
 Current PHP version = 5.2.4 OK
 php pdo installed. OK
 php gd installed. OK
 php xml installed. OK
 php dom installed. OK
 php pdo_mysql installed. OK
 php pdo_pgsql installed. OK
 php-dom, php-xml installed. OK

 Actually I'm running Bacula 3.03 and webacula 5.0 on the director, and I
 have no idea what could be wrong.
 I don't know if I have provided all the information required; if I'm missing
 something, I'd be very thankful if you could tell me.

 Thanks in advance

 Daniel Beas Enriquez








-- 
with best regards



Re: [Bacula-users] Vol Usage

2010-10-04 Thread Bruno Gomes da Silva
John


Yes, the volumes had already been created when I changed the settings. After
that I ran the UPDATE command in the console; I thought that would update the
Vol Usage figure.

Thanks for the reply.



2010/10/4 John Drescher dresche...@gmail.com

 2010/10/4 Bruno Gomes da Silva kidbro...@gmail.com:
  Hello ..
 
  I wonder if any parameter is particularly used to show the amount of
 volume
  (STATUS MEDIA - VOL USAGE) in Bacula Admin Tool (bat). The percentage of
  volume usage is always 0.00%. Here's the conf Pool:
 
 
  Pool {
  Name = Server1Diario
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 7 days
  Volume Use Duration = 7 days
  Recycle Oldest Volume = yes
  Maximum Volume Bytes = 10G
  Purge Oldest Volume = yes
  }
 
 

 Did you change the above settings after volumes existed in bacula?
 Remember that the settings in bacula-dir.conf are a template on how
 bacula will create new volumes. Changing the pool settings do not
 affect currently existing volumes.

 John



Re: [Bacula-users] Catalogue snapshot utility : any interest?

2010-10-04 Thread Phil Stracchino
On 10/04/10 08:01, Rory Campbell-Lange wrote:
 Hi Phil
 
 I'd be delighted if you could take a look at the python script and for
 your comments.

I really can't help with testing it, sorry.  I don't run PostgreSQL and
don't speak Python.  ;)



-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.



[Bacula-users] Rename and move my catalog backups to a sub-directory...

2010-10-04 Thread John Doe
Hi,

I am trying to clean up my old Bacula installation (2.4.2-1) a bit, and
I would like to rename my catalog backups (the files) and move them
to a sub-directory.  Is this easy, or is it now hardcoded in multiple places?

Here are the specific parts of the configuration:

  Device {
    Name = FileStorage
    Archive Device = /FILER/bacula
    Media Type = File
    ...
  }

  Storage {
    Name = FileStorage
    ...
    Device = FileStorage
    Media Type = File
    ...
  }

  Job {
    Name = BackupCatalog
    JobDefs = DefaultJob
    Level = Full
    Client = backup
    FileSet = Catalog
    ...
    Write Bootstrap = /FILER/bacula/catalog.bsr
    ...
  }

So I would like to modify:

  Device {
    Name = FileStorage_catalog
    Archive Device = /FILER/bacula/catalog
    Media Type = File_catalog
    ...
  }

  Storage {
    Name = FileStorage_catalog
    ...
    Device = FileStorage_catalog
    Media Type = File_catalog
    ...
  }

  Job {
    Name = BackupCatalog
    JobDefs = DefaultJob
    Level = Full
    Client = catalog
    Storage = FileStorage_catalog
    FileSet = Catalog
    ...
    Write Bootstrap = /FILER/bacula/catalog/catalog.bsr
    ...
  }

followed by:

  # service bacula stop
  # cd /FILER/bacula
  # for i in backup.*; do mv "$i" "catalog/catalog.${i#backup.}"; done
  # mv catalog.bsr catalog/
  # service bacula start

Would this work or do I need extra steps?

Thx,
JD


  



Re: [Bacula-users] 5.0.3 psql indexes after upgrade

2010-10-04 Thread Rory Campbell-Lange
On 04/10/10, Ralf Gross (ralf-li...@ralfgross.de) wrote:
 Hi,
 
 I just updated from 3.0.3 to 5.0.3. I know that there have been problems with
 the update_postgresql_tables script. Here are my indexes:
 
 
 bacula=# select * from pg_indexes where tablename='file';
  schemaname | tablename |   indexname    | tablespace |                           indexdef
 ------------+-----------+----------------+------------+------------------------------------------------------------------------------
  public     | file      | file_pkey      |            | CREATE UNIQUE INDEX file_pkey ON file USING btree (fileid)
  public     | file      | file_jobid_idx |            | CREATE INDEX file_jobid_idx ON file USING btree (jobid)
  public     | file      | file_jpfid_idx |            | CREATE INDEX file_jpfid_idx ON file USING btree (jobid, pathid, filenameid)
 (3 rows)
 
 
 Can anyone confirm that these indexes are correct. Looking at the manual, they
 look ok to me.
 
 http://www.bacula.org/5.0.x-manuals/en/main/main/Catalog_Maintenance.html#SECTION004591000

All of the indexes are below; you seem to have the correct ones for the file 
table.

The Debian problems with 5.0.3 were/are related to the upgrade trying to create
an index that already exists. See Debian bug #591293.

Cheers
Rory


bacula=# \di
                          List of relations
 Schema |             Name              | Type  | Owner  |     Table
--------+-------------------------------+-------+--------+----------------
 public | basefiles_jobid_idx   | index | bacula | basefiles
 public | basefiles_pkey| index | bacula | basefiles
 public | cdimages_pkey | index | bacula | cdimages
 public | client_name_idx   | index | bacula | client
 public | client_pkey   | index | bacula | client
 public | counters_pkey | index | bacula | counters
 public | device_pkey   | index | bacula | device
 public | file_jobid_idx| index | bacula | file
 public | file_jpfid_idx| index | bacula | file
 public | file_pkey | index | bacula | file
 public | filename_name_idx | index | bacula | filename
 public | filename_pkey | index | bacula | filename
 public | fileset_name_idx  | index | bacula | fileset
 public | fileset_pkey  | index | bacula | fileset
 public | job_media_firstindex  | index | bacula | jobmedia
 public | job_media_job_id_media_id_idx | index | bacula | jobmedia
 public | job_media_lastindex   | index | bacula | jobmedia
 public | job_name_idx  | index | bacula | job
 public | job_pkey  | index | bacula | job
 public | jobhisto_idx  | index | bacula | jobhisto
 public | jobmedia_pkey | index | bacula | jobmedia
 public | location_pkey | index | bacula | location
 public | locationlog_pkey  | index | bacula | locationlog
 public | log_name_idx  | index | bacula | log
 public | log_pkey  | index | bacula | log
 public | media_pkey| index | bacula | media
 public | media_volumename_id   | index | bacula | media
 public | mediatype_pkey| index | bacula | mediatype
 public | path_name_idx | index | bacula | path
 public | path_pkey | index | bacula | path
 public | pathhierarchy_pkey| index | bacula | pathhierarchy
 public | pathhierarchy_ppathid | index | bacula | pathhierarchy
 public | pathvisibility_jobid  | index | bacula | pathvisibility
 public | pathvisibility_pkey   | index | bacula | pathvisibility
 public | pool_name_idx | index | bacula | pool
 public | pool_pkey | index | bacula | pool
 public | status_pkey   | index | bacula | status
 public | storage_pkey  | index | bacula | storage
 public | unsavedfiles_pkey | index | bacula | unsavedfiles

-- 
Rory Campbell-Lange
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928



Re: [Bacula-users] Catalogue snapshot utility : any interest?

2010-10-04 Thread James Harper
 
 I have developed a catalogue snapshot facility in python to snapshot one
 job's catalogue and dump it to disk. The snapshot provides a bacula
 database schema file, a database dump of the job's data, and a file
 listing of files showing info such as the tape number, path, file, md5
 and lstat.
 
 We intend to include the catalogue in compressed format on CDs
 accompanying tape sets to assist our clients in retrieving data in future
 if required.
 
 At present the system works only with PostgreSQL, and for our setup,
 which has the director, storage and file daemons on the same Linux server.
 
 How it works:
 
 * A temporary schema is made in postgres, named "job_%d" % (jobid)
 * Relevant data is selected from the public schema to the temporary
   schema
 * The file listing is output
 * The public schema is dumped
 * The temporary schema is dumped
 * The temporary schema is removed
 
 I'm considering making an SQLite database from the temporary schema to
 obviate the need for the public schema file and the file listing.
 
 This is fairly simple stuff, but if this functionality is useful to you,
 do let me know and I can share the programme with you.
 

How much smaller is the catalogue subset vs the full catalogue?

James



[Bacula-users] Tuning Bacula

2010-10-04 Thread Tim Gustafson
We have recently installed Bacula onto a FreeBSD server and several Linux, 
SunOS and FreeBSD clients.  The Bacula director and storage daemon run on a box 
with about 6 terabytes of RAID6 storage (SATA 300 drives, 1TB each, Adaptec 
RAID controller with 512MB cache).  The box has 16GB of RAM and is not really 
doing much else right now. We're using MySQL for our database back-end, and we 
have MD5 hashing of files turned off (Accurate = "mcs" and Verify = "mcs" are 
set in bacula-dir.conf).

However, we're getting pretty pitiful throughput numbers.  When I scp a file 
from my workstation to the Bacula server, I get something like 40MB/s 
(320Mb/s).  When Bacula runs, we're lucky to get 20MB/s (160Mb/s), and we often 
get numbers closer to 10MB/s (80Mb/s).

I Googled "tuning bacula" and came up with primarily stuff related to tuning 
Postgres as it relates to Bacula, but nothing about tuning the file daemon or 
the storage daemon.  Can anyone point me to some leads as far as what I can do 
to bump up the throughput?  We have a data set that is several terabytes large 
to back up, and it will never complete in a reasonable amount of time at 
10MB/s.  I need to achieve something closer to 40MB/s to make this a workable 
option.
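
One way to narrow down where the time is going (a sketch; device paths, hostnames and the port are placeholders, and FreeBSD's dd wants lowercase size suffixes) is to measure raw disk and raw network throughput separately and compare them with what Bacula reports:

  # Sequential write speed of the array that holds the volumes:
  dd if=/dev/zero of=/backup/ddtest bs=1m count=10000

  # Raw network throughput, with disks taken out of the picture:
  #   on the Bacula server:   nc -l 2000 > /dev/null
  #   on a client:            dd if=/dev/zero bs=1m count=10000 | nc bacula-server 2000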

Tim Gustafson
Baskin School of Engineering
UC Santa Cruz
t...@soe.ucsc.edu
831-459-5354




Re: [Bacula-users] Tuning Bacula

2010-10-04 Thread Jeremiah D. Jester
Tim,

Have you tried backing up other hosts on your network? What are the speeds with 
these hosts? I've noticed that different hosts respond with varying speeds 
despite being on the same network. I wonder if this has to do with the client OS 
doing some throttling based on work load.

JJ

-Original Message-
From: Tim Gustafson [mailto:t...@soe.ucsc.edu] 
Sent: Monday, October 04, 2010 10:38 AM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Tuning Bacula

We have recently installed Bacula onto a FreeBSD server and several Linux, 
SunOS and FreeBSD clients.  The Bacula director and storage daemon run on a box 
with about 6 terabytes of RAID6 storage (SATA 300 drives, 1TB each, Adaptec 
RAID controller with 512MB cache).  The box has 16GB of RAM and is not really 
doing much else right now. We're using mySQL for our database back-end, and we 
have MD5 hashing of files turned off (Accurate = "mcs" and Verify = "mcs" are 
set in bacula-dir.conf).

However, we're getting pretty pitiful throughput numbers.  When I scp a file 
from my workstation to the Bacula server, I get something like 40MB/s 
(320Mb/s).  When Bacula runs, we're lucky to get 20MB/s (160Mb/s), and we often 
get numbers closer to 10MB/s (80Mb/s).

I Googled "tuning bacula" and came up with primarily stuff related to tuning 
Postgres as it relates to Bacula, but nothing about tuning the file daemon or 
the storage daemon.  Can anyone point me to some leads as far as what I can do 
to bump up the throughput?  We have a data set that is several terabytes large 
to back up, and it will never complete in a reasonable amount of time at 
10MB/s.  I need to achieve something closer to 40MB/s to make this a workable 
option.

Tim Gustafson
Baskin School of Engineering
UC Santa Cruz
t...@soe.ucsc.edu
831-459-5354




Re: [Bacula-users] Tuning Bacula

2010-10-04 Thread John Drescher
On Mon, Oct 4, 2010 at 1:37 PM, Tim Gustafson t...@soe.ucsc.edu wrote:
 We have recently installed Bacula onto a FreeBSD server and several Linux, 
 SunOS and FreeBSD clients.  The Bacula director and storage daemon run on a 
 box with about 6 terabytes of RAID6 storage (SATA 300 drives, 1TB each, 
 Adaptec RAID controller with 512MB cache).  The box has 16GB of RAM and is 
 not really doing much else right now. We're using mySQL for our database 
 back-end, and we have MD5 hashing of files turned off (Accurate = "mcs" and 
 Verify = "mcs" are set in bacula-dir.conf).

 However, we're getting pretty pitiful throughput numbers.  When I scp a file 
 from my workstation to the Bacula server, I get something like 40MB/s 
 (320Mb/s).  When Bacula runs, we're lucky to get 20MB/s (160Mb/s), and we 
 often get numbers closer to 10MB/s (80Mb/s).

 I Googled "tuning bacula" and came up with primarily stuff related to tuning 
 Postgres as it relates to Bacula, but nothing about tuning the file daemon or 
 the storage daemon.  Can anyone point me to some leads as far as what I can 
 do to bump up the throughput?  We have a data set that is several terabytes 
 large to back up, and it will never complete in a reasonable amount of time 
 at 10MB/s.  I need to achieve something closer to 40MB/s to make this a 
 workable option.


I would start by turning off software compression and doing performance
tests with full backups.

A second thing to try is to enable attribute spooling so the database
does not slow down the backup. This can be useful if you have millions
of files.
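
For reference, the two changes would look something like this in the configuration (a hedged sketch; the Job and FileSet names are placeholders, not your actual resources):

  Job {
    Name = "BackupClient1"
    ...
    Spool Attributes = yes      # batch catalogue inserts so the database does not stall the backup
  }

  FileSet {
    Name = "Full Set"
    Include {
      Options {
        # compression = GZIP    # make sure software compression is not enabled while testing
      }
      File = /
    }
  }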

John



Re: [Bacula-users] Tuning Bacula

2010-10-04 Thread Mehma Sarja
On 10/4/10 10:37 AM, Tim Gustafson wrote:
 We have recently installed Bacula onto a FreeBSD server and several Linux, 
 SunOS and FreeBSD clients.  The Bacula director and storage daemon run on a 
 box with about 6 terabytes of RAID6 storage (SATA 300 drives, 1TB each, 
 Adaptec RAID controller with 512MB cache).  The box has 16GB of RAM and is 
 not really doing much else right now. We're using mySQL for our database 
 back-end, and we have MD5 hashing of files turned off (Accurate = "mcs" and 
 Verify = "mcs" are set in bacula-dir.conf).

 However, we're getting pretty pitiful throughput numbers.  When I scp a file 
 from my workstation to the Bacula server, I get something like 40MB/s 
 (320Mb/s).  When Bacula runs, we're lucky to get 20MB/s (160Mb/s), and we 
 often get numbers closer to 10MB/s (80Mb/s).

 I Googled "tuning bacula" and came up with primarily stuff related to tuning 
 Postgres as it relates to Bacula, but nothing about tuning the file daemon or 
 the storage daemon.  Can anyone point me to some leads as far as what I can 
 do to bump up the throughput?  We have a data set that is several terabytes 
 large to back up, and it will never complete in a reasonable amount of time 
 at 10MB/s.  I need to achieve something closer to 40MB/s to make this a 
 workable option.

 Tim Gustafson
 Baskin School of Engineering
 UC Santa Cruz
 t...@soe.ucsc.edu
 831-459-5354



Hi Tim,

Compare against a stock, non-tuned Bacula install. Are you going 
between buildings where you get the slow transfer speed? UCSC has 1 Gb 
links between buildings, from my recollection. The link to the outside 
world is not much more than that.

Bacula also has a batch mode which you can twiddle around with.

Mehma



Re: [Bacula-users] 5.0.3 psql indexes after upgrade

2010-10-04 Thread Ralf Gross
Rory Campbell-Lange schrieb:
 On 04/10/10, Ralf Gross (ralf-li...@ralfgross.de) wrote:
 
 All of the indexes are below; you seem to have the correct ones for the file 
 table.
 
 The Debian problems with 5.0.3 were/are related to the upgrade trying to 
 create
 an index that already exists. See  Bug#591293.
  ...
  public | job_media_firstindex  | index | bacula | jobmedia
  ...
  public | job_media_lastindex   | index | bacula | jobmedia

I don't have these two indexes, did you add them?

Ralf



Re: [Bacula-users] Tuning Bacula

2010-10-04 Thread Josh Fisher
  On 10/4/2010 1:37 PM, Tim Gustafson wrote:
 We have recently installed Bacula onto a FreeBSD server and several Linux, 
 SunOS and FreeBSD clients.  The Bacula director and storage daemon run on a 
 box with about 6 terabytes of RAID6 storage (SATA 300 drives, 1TB each, 
 Adaptec RAID controller with 512MB cache).  The box has 16GB of RAM and is 
 not really doing much else right now. We're using mySQL for our database 
 back-end, and we have MD5 hashing of files turned off (Accurate = "mcs" and 
 Verify = "mcs" are set in bacula-dir.conf).

 However, we're getting pretty pitiful throughput numbers.  When I scp a file 
 from my workstation to the Bacula server, I get something like 40MB/s 
 (320Mb/s).  When Bacula runs, we're lucky to get 20MB/s (160Mb/s), and we 
 often get numbers closer to 10MB/s (80Mb/s).

Is the MySQL database storage on the same RAID array you are writing 
backups to?

 I Googled "tuning bacula" and came up with primarily stuff related to tuning 
 Postgres as it relates to Bacula, but nothing about tuning the file daemon or 
 the storage daemon.  Can anyone point me to some leads as far as what I can 
 do to bump up the throughput?  We have a data set that is several terabytes 
 large to back up, and it will never complete in a reasonable amount of time 
 at 10MB/s.  I need to achieve something closer to 40MB/s to make this a 
 workable option.

 Tim Gustafson
 Baskin School of Engineering
 UC Santa Cruz
 t...@soe.ucsc.edu
 831-459-5354




Re: [Bacula-users] 5.0.3 psql indexes after upgrade

2010-10-04 Thread John Drescher
On Mon, Oct 4, 2010 at 3:01 PM, Ralf Gross ralf-li...@ralfgross.de wrote:
 Rory Campbell-Lange schrieb:
 On 04/10/10, Ralf Gross (ralf-li...@ralfgross.de) wrote:

 All of the indexes are below; you seem to have the correct ones for the file 
 table.

 The Debian problems with 5.0.3 were/are related to the upgrade trying to 
 create
 an index that already exists. See  Bug#591293.
  ...
  public | job_media_firstindex          | index | bacula | jobmedia
  ...
  public | job_media_lastindex           | index | bacula | jobmedia

 I don't have these two indexes, did you add them?


Here is what I have on Gentoo. And no, I have not added any indexes for many years.


bacula-# \di
                           List of relations
 Schema |             Name              | Type  |  Owner  |        Table
--------+-------------------------------+-------+---------+---------------------
 public | basefiles_jobid_idx   | index | hbroker | basefiles
 public | basefiles_pkey| index | hbroker | basefiles
 public | cdimages_pkey | index | hbroker | cdimages
 public | client_group_idx  | index | hbroker | client_group
 public | client_group_member_idx   | index | hbroker | client_group_member
 public | client_group_member_pkey  | index | hbroker | client_group_member
 public | client_group_pkey | index | hbroker | client_group
 public | counters_pkey | index | hbroker | counters
 public | device_pkey   | index | hbroker | device
 public | file_filenameid_idx   | index | hbroker | file
 public | file_jobid_idx| index | hbroker | file
 public | file_jpfid_idx| index | hbroker | file
 public | file_pathid_idx   | index | hbroker | file
 public | file_pkey | index | hbroker | file
 public | filename_name_idx | index | hbroker | filename
 public | filename_pkey | index | hbroker | filename
 public | fileset_name_idx  | index | hbroker | fileset
 public | fileset_pkey  | index | hbroker | fileset
 public | job_media_job_id_media_id_idx | index | hbroker | jobmedia
 public | job_name_idx  | index | hbroker | job
 public | job_pkey  | index | hbroker | job
 public | jobhisto_idx  | index | hbroker | jobhisto
 public | jobmedia_pkey | index | hbroker | jobmedia
 public | location_pkey | index | hbroker | location
 public | locationlog_pkey  | index | hbroker | locationlog
 public | log_name_idx  | index | hbroker | log
 public | log_pkey  | index | hbroker | log
 public | media_pkey| index | hbroker | media
 public | media_volumename_id   | index | hbroker | media
 public | mediatype_pkey| index | hbroker | mediatype
 public | path_name_idx | index | hbroker | path
 public | path_pkey | index | hbroker | path
 public | pathhierarchy_pkey| index | hbroker | pathhierarchy
 public | pathhierarchy_ppathid | index | hbroker | pathhierarchy
 public | pathvisibility_jobid  | index | hbroker | pathvisibility
 public | pathvisibility_pkey   | index | hbroker | pathvisibility
 public | pool_name_idx | index | hbroker | pool
 public | pool_pkey | index | hbroker | pool
 public | status_pkey   | index | hbroker | status
 public | storage_pkey  | index | hbroker | storage
 public | unsavedfiles_pkey | index | hbroker | unsavedfiles
(41 rows)



Re: [Bacula-users] Tuning Bacula

2010-10-04 Thread Rory Campbell-Lange
On 04/10/10, Tim Gustafson (t...@soe.ucsc.edu) wrote:
 ...we're getting pretty pitiful throughput numbers.  When I scp a file
 from my workstation to the Bacula server, I get something like 40MB/s
 (320Mb/s).  When Bacula runs, we're lucky to get 20MB/s (160Mb/s), and
 we often get numbers closer to 10MB/s (80Mb/s).

As others have mentioned, the key is to try and work out where the
contention is.

It may be useful to run iftop on the network interfaces of the Bacula
server to see what the network IO is like, and then compare that to
iotop to see what the disk IO is like.
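
Something like this is usually enough to see which side saturates first (a sketch; the interface name is only an example, and on the FreeBSD storage server `top -m io` stands in for iotop):

  # Watch network throughput on the interface the backups arrive on:
  iftop -i em0

  # Watch which processes are doing the disk IO (Linux):
  iotop -o
  # ...or, on FreeBSD:
  top -m io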

Bear in mind that if you are using spooling (although I assume you
aren't), the fd-client status throughput stats reported are half of the
actual native speed. This is because the throughput calculation is based
on the speed from client to destination, so the time taken is the sum of
the network transfer from the client to the spool, and then from the
spool to the tape. That, anyhow, might be a reason for the roughly 50%
factor you report.

If disk IO is the issue, it might be useful to verify that your database
(what sort?) is running on a separate disk array, that your RAID
controller has caching enabled (you need a BBU for this to be safe), and
that you have a good filesystem for your backup needs (the best one for
us is XFS).

Rory

-- 
Rory Campbell-Lange
r...@campbell-lange.net

Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street London W1T 2AF
Registered in England No. 04551928



Re: [Bacula-users] 5.0.3 psql indexes after upgrade

2010-10-04 Thread Ralf Gross
John Drescher schrieb:
 On Mon, Oct 4, 2010 at 3:01 PM, Ralf Gross ralf-li...@ralfgross.de wrote:
  Rory Campbell-Lange schrieb:
  On 04/10/10, Ralf Gross (ralf-li...@ralfgross.de) wrote:
 
  All of the indexes are below; you seem to have the correct ones for the 
  file table.
 
  The Debian problems with 5.0.3 were/are related to the upgrade trying to 
  create
  an index that already exists. See  Bug#591293.
   ...
   public | job_media_firstindex          | index | bacula | jobmedia
   ...
   public | job_media_lastindex           | index | bacula | jobmedia
 
  I don't have these two indexes, did you add them?
 
 
 Here is what I have on gentoo. And no I did not add any index for many years.
 
 
 bacula-# \di
List of relations
  Schema | Name  | Type  |  Owner  |Table
 +---+---+-+-
  public | basefiles_jobid_idx   | index | hbroker | basefiles
  public | basefiles_pkey| index | hbroker | basefiles
  public | cdimages_pkey | index | hbroker | cdimages
  public | client_group_idx  | index | hbroker | client_group
  public | client_group_member_idx   | index | hbroker | 
 client_group_member
  public | client_group_member_pkey  | index | hbroker | 
 client_group_member
  public | client_group_pkey | index | hbroker | client_group
  public | counters_pkey | index | hbroker | counters
  public | device_pkey   | index | hbroker | device
  public | file_filenameid_idx   | index | hbroker | file
  public | file_jobid_idx| index | hbroker | file
  public | file_jpfid_idx| index | hbroker | file
  public | file_pathid_idx   | index | hbroker | file
  public | file_pkey | index | hbroker | file
  public | filename_name_idx | index | hbroker | filename
  public | filename_pkey | index | hbroker | filename
  public | fileset_name_idx  | index | hbroker | fileset
  public | fileset_pkey  | index | hbroker | fileset
  public | job_media_job_id_media_id_idx | index | hbroker | jobmedia
  public | job_name_idx  | index | hbroker | job
  public | job_pkey  | index | hbroker | job
  public | jobhisto_idx  | index | hbroker | jobhisto
  public | jobmedia_pkey | index | hbroker | jobmedia
  public | location_pkey | index | hbroker | location
  public | locationlog_pkey  | index | hbroker | locationlog
  public | log_name_idx  | index | hbroker | log
  public | log_pkey  | index | hbroker | log
  public | media_pkey| index | hbroker | media
  public | media_volumename_id   | index | hbroker | media
  public | mediatype_pkey| index | hbroker | mediatype
  public | path_name_idx | index | hbroker | path
  public | path_pkey | index | hbroker | path
  public | pathhierarchy_pkey| index | hbroker | pathhierarchy
  public | pathhierarchy_ppathid | index | hbroker | pathhierarchy
  public | pathvisibility_jobid  | index | hbroker | pathvisibility
  public | pathvisibility_pkey   | index | hbroker | pathvisibility
  public | pool_name_idx | index | hbroker | pool
  public | pool_pkey | index | hbroker | pool
  public | status_pkey   | index | hbroker | status
  public | storage_pkey  | index | hbroker | storage
  public | unsavedfiles_pkey | index | hbroker | unsavedfiles
 (41 rows)

Hm, I'm missing some of the indexes:


file_pathid_idx
file_filenameid_idx
client_group_idx
client_group_member_idx
client_group_member_pkey
client_group_pkey
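
The two file-table ones can be added by hand if wanted (a sketch against the standard 5.0.x PostgreSQL schema; check the catalogue maintenance chapter linked earlier first, and note that index builds on a large file table take a while):

  psql -d bacula -c "CREATE INDEX file_pathid_idx     ON file (pathid);"
  psql -d bacula -c "CREATE INDEX file_filenameid_idx ON file (filenameid);"

The client_group_* entries look like they come from an add-on rather than the stock schema.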



                       List of relations
 Schema |         Name          | Type  |  Owner   |   Table
--------+-----------------------+-------+----------+------------
 public | basefiles_jobid_idx   | index | postgres | basefiles
 public | basefiles_pkey| index | postgres | basefiles
 public | cdimages_pkey | index | postgres | cdimages
 public | client_name_idx   | index | postgres | client
 public | client_pkey   | index | postgres | client
 public | counters_pkey | index | postgres | counters
 public | device_pkey   | index | postgres | device
 public | file_jobid_idx| index | postgres | file
 public | file_jpfid_idx| index | postgres | file
 public | file_pkey | index | postgres | file
 public | filename_name_idx | index | postgres | filename
 public | filename_pkey | index | postgres | filename
 

[Bacula-users] Remove several inactive clients

2010-10-04 Thread Joseph L. Casale
Does anyone know the proper way to do this with a MySQL db?
I really don't know much about SQL or Bacula's db structure, but a
`DELETE FROM Client WHERE name LIKE '%host-fd%';` left the
db in an unstable state where a restore was needed. Volumes that were
expected to be available could no longer be used, etc.

Thanks,
jlc
