[Bacula-users] Bacula 5.2.4 Major problems with Warnings jobs not added to catalog etc

2012-01-23 Thread Stephen G Carr

Dear All

Has anyone had similar problems with 5.2.4 and backup warnings? The 
MAJOR problem seems to be that the backup is NOT added to the catalog, and as I 
back up clients to disc and migrate full backups to tape, I suspect these 
jobs will not be migrated - the migration jobs run tonight.


I have reverted to 5.2.3 and reran the Full backups of those clients 
with warnings.


I accept there will be dodgy files on Windows clients but would prefer 
the backup to be treated as a valid backup with respect to adding it to the 
Catalog and as a job valid for migration.


When backing up Windows clients I got the following messages that resulted 
in a warning - I have copied the backup status from the Volume for 5.2.4 
and 5.2.3:


---
Bacula: Backup OK -- with warnings of XXX Full - VSS problem

Generate VSS snapshot of drive d:\ failed. VSS support is disabled on 
this drive.


With 5.2.4
238,776 | civengXXX | 2012-01-23 10:05:00 | B | F | 302 | 1,793,433,275 | W


With 5.2.3
238,776 | civengXXX | 2012-01-23 10:05:00 | B | F | 302 | 1,793,433,275 | W




Bacula: Backup OK -- with warnings of YYY Full - encrypted file

Cannot open D:/Data/Sattar/others/Programming/matlab/yyy.m: ERR=Access 
is denied.


With 5.2.4
238,790 | civengYYY | 2012-01-23 10:05:02 | B | F | 11,465 | 2,255,073,886 | W


With 5.2.3
238,799 | civengYYY | 2012-01-23 12:11:05 | B | F | 11,465 | 2,255,185,607 | T


---

Bacula: Backup Unknown term code of civengAAA Full - OSX folder

D:/Data/My Documents/WDSA 2012/Notices, 
Banner/full-colour-vert/__MACOSX/._UoA_col_vert.png: ERR=Access is denied.


With 5.2.4
238,771 | civengAAA | 2012-01-23 10:00:09 | B | F | 42,781 | 46,811,644,364 | W


With 5.2.3
238,797 | civengAAA | 2012-01-23 11:51:54 | B | F | 42,782 | 46,813,945,624 | T




Regards
Stephen Carr

--
Stephen Carr
Computing Officer
School of Civil and Environmental Engineering
The University of Adelaide
Tel +618-8303-4313
Fax +618-8303-4359
Email sgc...@civeng.adelaide.edu.au

CRICOS Provider Number 00123M
---
This email message is intended only for the addressee(s)and 
contains information that may be confidential and/or copyright.
If you are not the intended recipient please notify the sender 
by reply email and immediately delete this email. Use, disclosure
or reproduction of this email by anyone other than the intended recipient(s) is strictly prohibited. No representation is made 
that this email or any attachments are free of viruses. Virus 
scanning is recommended and is the responsibility of the recipient.


--
Try before you buy = See our experts in action!
The most comprehensive online learning library for Microsoft developers
is just $99.99! Visual Studio, SharePoint, SQL - plus HTML5, CSS3, MVC3,
Metro Style Apps, more. Free future releases when you subscribe now!
http://p.sf.net/sfu/learndevnow-dev2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Fedora 16 - RHEL 5/6 Bacula RPM repository

2012-01-23 Thread Simone Caronni
Hello,

Please have a look at the README file at:

http://repos.fedorapeople.org/repos/slaanesh/bacula/README.txt

There's this note:

** The included /usr/share/doc/bacula-common-%{version}/README.Fedora contains
quick installation instructions and notes **

You'll find your quick answer by reading it.

Regards,
--Simone




On 23 January 2012 08:24, tonyalbers bacula-fo...@backupcentral.com wrote:
 Hi all,

 Simone, are the RHEL 6 packages compiled with mysql support? Whenever I try 
 to start the director, i get this message in the log file:

 22-Jan 17:43 bacula-dir JobId 0: Fatal error: postgresql.c:241 Unable to 
 connect to PostgreSQL server. Database=bacula User=bacula
 Possible causes: SQL server not running; password incorrect; max_connections 
 exceeded.

 But I'm running mysql, and it is working as it should be. Should I specify 
 which database server I want to use somewhere?

 /tony

 +--
 |This was sent by tony.alb...@gmail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--






-- 
You cannot discover new oceans unless you have the courage to lose
sight of the shore (R. W. Emerson).



[Bacula-users] Bacula 5.2.4 problem with warnings

2012-01-23 Thread Stephen G Carr

Dear All

Has anyone had similar problems with 5.2.4 and backup warnings?
The MAJOR problem seems to be that the backup is NOT added to the catalog, and 
as I back up clients to disc volumes and then migrate Full backups to tape, 
I suspect these jobs will not be migrated - the migration jobs will run 
tonight.


I have just noticed that a job with warnings was migrated.
Why is it not in the catalog prior to migration?
What if the job is not migrated? This concern also applies to my 
Incremental backups, which remain on disc.


I have reverted to 5.2.3 and reran the Full backups of those clients 
with warnings.


I accept there will be dodgy files on Windows clients but would prefer 
the backup to be treated as a valid backup with respect to adding it to the 
Catalog and as a job valid for migration.


When backing up Windows clients I got the following messages that resulted 
in a warning - I have copied the backup status from the Volume for 5.2.4 
and 5.2.3:


---
Bacula: Backup OK -- with warnings of XXX Full - VSS problem

Generate VSS snapshot of drive d:\ failed. VSS support is disabled on 
this drive.


With 5.2.4
238,776 | civengXXX | 2012-01-23 10:05:00 | B | F | 302 | 1,793,433,275 | W


With 5.2.3
238,796 | civengXXX | 2012-01-23 11:50:32 | B | F | 302 | 1,793,514,892 | T


Using Query option 5 for civengXXX after rerun

Note JobID 238,776 is missing

| 238,337 | civengXXX | workstation | I | 2012-01-20 14:00:06 |  14 |   566,133,909 | Inc1232  |
| 238,796 | civengXXX | workstation | F | 2012-01-23 11:50:32 | 302 | 1,793,514,892 | Full1135 |





Bacula: Backup OK -- with warnings of YYY Full - encrypted file

Cannot open D:/Data/Sattar/others/Programming/matlab/yyy.m: ERR=Access 
is denied.


With 5.2.4
238,790 | civengYYY | 2012-01-23 10:05:02 | B | F | 11,465 | 2,255,073,886 | W


With 5.2.3
238,799 | civengYYY | 2012-01-23 12:11:05 | B | F | 11,465 | 2,255,185,607 | T


---

Bacula: Backup Unknown term code of civengAAA Full - OSX folder

D:/Data/My Documents/WDSA 2012/Notices, 
Banner/full-colour-vert/__MACOSX/._UoA_col_vert.png: ERR=Access is denied.


With 5.2.4
238,771 | civengAAA | 2012-01-23 10:00:09 | B | F | 42,781 | 46,811,644,364 | W


With 5.2.3
238,797 | civengAAA | 2012-01-23 11:51:54 | B | F | 42,782 | 46,813,945,624 | T


This is the client that has had both the backup with warnings and a 
normal (5.2.3) backup done


Using Query option 5 for civengAAA after migration

| 233,051 | civengAAA | workstation | I | 2011-12-21 10:02:09 |    136 |  3,457,909,805 | Inc0011 |
| 238,895 | civengAAA | workstation | F | 2012-01-23 10:00:09 | 42,781 | 46,821,266,974 | LTO-D42 |
| 238,890 | civengAAA | workstation | F | 2012-01-23 11:51:54 | 42,782 | 46,823,568,423 | LTO-D42 |


Regards
Stephen Carr

--
Stephen Carr
Computing Officer
School of Civil and Environmental Engineering
The University of Adelaide
Tel +618-8303-4313
Fax +618-8303-4359
Email sgc...@civeng.adelaide.edu.au

CRICOS Provider Number 00123M




Re: [Bacula-users] Bacula 5.2.4 Major problems with Warnings jobs not added to catalog etc

2012-01-23 Thread Eric Bollengier
Hello Stephen,

On 23/01/2012 09:08, Stephen G Carr wrote:
 Dear All

 Have anyone had similar problems with 5.2.4 with backup warnings - the
 MAJOR problem seems that the Backup is NOT added to the catalog and as I
 backup clients to disc and migrate full backups to tape I suspect these
 jobs will not be migrated - the jobs migration jobs run tonight.

It would be nice not to just suspect when you are reporting a potential 
problem such as this one. Did you try to migrate new jobs? Did you run into 
any problem related to Bacula?

 I have reverted to 5.2.3 and reran the Full backups of those clients
 with warnings.

 I accept there will be dodgy files on Windows clients but would prefer
 the backup to treated as a valid backup with respect to adding it to the
 Catalog and as a job valid for migration.

The Warning job status is treated like the OK status. Unless you are 
writing your own queries to migrate jobs, you shouldn't have problems. 
So, if you are using a custom query, you can just replace JobStatus = 
'T' with JobStatus IN ('T', 'W'), as we did in other parts of Bacula.
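A concrete sketch of that change, using a hypothetical custom job-selection query (the Job table and its columns are from the Bacula catalog schema; the query itself is illustrative, not one shipped with Bacula, and needs a live catalog to run against):

```sql
-- Before: only jobs that terminated OK ('T') are selected
SELECT Job.JobId
  FROM Job
 WHERE Job.Type = 'B'
   AND Job.JobStatus = 'T';

-- After: jobs that terminated OK with warnings ('W') are included too
SELECT Job.JobId
  FROM Job
 WHERE Job.Type = 'B'
   AND Job.JobStatus IN ('T', 'W');
```

Run via the mysql or psql client against the catalog database, the second form selects both statuses.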

Bye

-- 
Need professional help and support for Bacula ?
Visit http://www.baculasystems.com



[Bacula-users] Fedora 16 - RHEL 5/6 Bacula RPM repository

2012-01-23 Thread tonyalbers
please have a look at the readme file at:

http://repos.fedorapeople.org/repos/slaanesh/bacula/README.txt

there's this note:

** The included /usr/share/doc/bacula-common-%{version}/README.Fedora contains
quick installation instructions and notes **

You'll find your quick answer by reading it. 

Thanks Simone, sorry for not reading the readme file. I thought it was only for 
Fedora.

/tony

+--
|This was sent by tony.alb...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] Bacula 5.2.4 Major problems with Warnings jobs not added to catalog etc

2012-01-23 Thread Stephen G Carr
Dear Eric

Thanks for the clue regarding queries.

I have just looked at the default set of queries I use 
(examples/sample-query.sql) and noticed statements like:

# 5
:List all backups for a Client
*Enter Client Name:
SELECT DISTINCT Job.JobId as JobId,Client.Name as Client,
   FileSet.FileSet AS FileSet,Level,StartTime,
   JobFiles,JobBytes,VolumeName
 FROM Client,Job,JobMedia,Media,FileSet
 WHERE Client.Name='%1'
 AND Client.ClientId=Job.ClientId AND Job.Type='B'
 AND Job.JobStatus='T' AND Job.FileSetId=FileSet.FileSetId
 AND JobMedia.JobId=Job.JobId AND JobMedia.MediaId=Media.MediaId
 ORDER BY Job.StartTime;


Note the Job.JobStatus='T' - it needs to change to IN ('T', 'W') as you said.
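With that change applied, query 5 would read as follows (an untested sketch based on the query above; only the JobStatus test differs):

```sql
SELECT DISTINCT Job.JobId AS JobId, Client.Name AS Client,
       FileSet.FileSet AS FileSet, Level, StartTime,
       JobFiles, JobBytes, VolumeName
  FROM Client, Job, JobMedia, Media, FileSet
 WHERE Client.Name = '%1'
   AND Client.ClientId = Job.ClientId AND Job.Type = 'B'
   AND Job.JobStatus IN ('T', 'W') AND Job.FileSetId = FileSet.FileSetId
   AND JobMedia.JobId = Job.JobId AND JobMedia.MediaId = Media.MediaId
 ORDER BY Job.StartTime;
```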

I retract my claim that there is a problem with Bacula and warnings - it is 
the query!

Sorry
Stephen Carr

Eric Bollengier wrote:
 Hello,

 On 23/01/2012 11:37, Stephen G Carr wrote:
 Dear Eric

 I sent a later email and for got to add Update to the subject - the
 migration of jobs with warnings do migrate to tape.

 I am using the standard queries - will look at the syntax but why is a
 job shown in a Volume with warnings but not visible when querying the
 client?

 Sorry, I don't understand where jobs are not visible, we might have a 
 display problem somewhere like in Bat or Bweb, but the core code 
 should handle everything properly.

 Bye


 Thanks
 Stephen Carr

 Eric Bollengier wrote:
 Hello Stephen,

 On 23/01/2012 09:08, Stephen G Carr wrote:

 Dear All

 Have anyone had similar problems with 5.2.4 with backup warnings - the
 MAJOR problem seems that the Backup is NOT added to the catalog and 
 as I
 backup clients to disc and migrate full backups to tape I suspect 
 these
 jobs will not be migrated - the jobs migration jobs run tonight.


 I would be nice to not just suspect when you are reporting a 
 potential
 problem such as this one. Did you try to migrate new jobs? Did you get
 any problem related to Bacula?


 I have reverted to 5.2.3 and reran the Full backups of those clients
 with warnings.

 I accept there will be dodgy files on Windows clients but would 
 prefer
 the backup to treated as a valid backup with respect to adding it 
 to the
 Catalog and as a job valid for migration.


 The Warning job status is treated like the Ok status. Unless you are
 writing your own queries to migrate jobs, you shouldn't have problems.
 So, if you are using a custom query, you can just replace JobStatus =
 'T' by JobStatus IN ('T', 'W') as we did in other part of Bacula.

 Bye



 -- 
 Stephen Carr
 Computing Officer
 School of Civil and Environmental Engineering
 The University of Adelaide
 Tel +618-8303-4313
 Fax +618-8303-4359
 Email sgc...@civeng.adelaide.edu.au

 CRICOS Provider Number 00123M




-- 
Stephen Carr
Computing Officer
School of Civil and Environmental Engineering
The University of Adelaide
Tel +618-8303-4313
Fax +618-8303-4359
Email sgc...@civeng.adelaide.edu.au

CRICOS Provider Number 00123M




[Bacula-users] Have we reached bacula's limits?

2012-01-23 Thread Uwe Schuerkamp
Hi folks,

we're running four bacula installations, most of them on version 5.0.x
compiled from source on CentOS 5.x / 6.x 64bit servers. We're mostly
happy with the setup, backups are fast, reliable and generally do not
cause us a lot of headaches. 

Today, a colleague asked me to restore some data from the last backup
of a client on our largest bacula install, namely (according to bweb)

DB Size:
Total clients: 107          Total bytes stored: 34.41 TB
Total files:   47,495,362   Database size:      31.64 GB

MySQL isn't exactly huge, and restoring the data didn't look like too 
big a deal at first:

+-------+---+---------+----------------+---------------------+--------------+
| 9,582 | F | 527,265 | 55,999,595,573 | 2012-01-20 21:05:03 | OFFLINE14_02 |
| 9,652 | I |   1,150 |  1,534,499,185 | 2012-01-21 18:34:56 | OFFLINE15_01 |
+-------+---+---------+----------------+---------------------+--------------+

So we're talking a mere 500,000 files (he only needed a single dir out
of the bunch). 

4.5 hours later, Bacula is still sitting at the "Building Directory
Tree" message, without so much as a single "." or "+" showing up in the
terminal to indicate some kind of progress.

I've run mysqltuner on the db a couple of times as this isn't the
first time we've had problems during a restore, and it looks ok (to my
untrained, non-dba eyes anyway):

##
 General Statistics
 --
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.1.52-log
[OK] Operating on 64-bit architecture

 Storage Engine Statistics
 ---
[--] Status: -Archive -BDB -Federated -InnoDB -ISAM -NDBCluster 
[--] Data in MyISAM tables: 31G (Tables: 33)
[!!] Total fragmented tables: 2

 Performance Metrics
 -
[--] Up for: 35s (57 q [1.629 qps], 12 conn, TX: 44K, RX: 3K)
[--] Reads / Writes: 100% / 0%
[--] Total buffers: 12.0G global + 83.2M per thread (151 max threads)
[!!] Maximum possible memory usage: 24.3G (137% of installed RAM)
[OK] Slow queries: 0% (0/57)
[OK] Highest usage of available connections: 0% (1/151)
[OK] Key buffer size / total MyISAM indexes: 11.9G/15.3G
[OK] Key buffer hit rate: 100.0% (6K cached / 2 reads)
[!!] Query cache efficiency: 0.0% (0 cached / 23 selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 9 sorts)
[!!] Temporary tables created on disk: 34% (8 on disk / 23 total)
[OK] Thread cache hit rate: 91% (1 created / 12 connections)
[OK] Table cache hit rate: 85% (41 open / 48 opened)
[OK] Open file limit used: 1% (83/4K)
[OK] Table locks acquired immediately: 100% (38 immediate / 38 locks)
[!!] Connections aborted: 8%

 Recommendations
 -
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
MySQL started within last 24 hours - recommendations may be
inaccurate
Reduce your overall MySQL memory footprint for system stability
When making adjustments, make tmp_table_size/max_heap_table_size
equal
Reduce your SELECT DISTINCT queries without LIMIT clauses
Your applications are not closing MySQL connections properly
Variables to adjust:
  *** MySQL's maximum memory usage is dangerously high ***
  *** Add RAM before increasing MySQL buffer variables ***
query_cache_limit (> 16M, or use smaller result sets)
tmp_table_size (> 61M)
max_heap_table_size (> 16M)

##

For the restore run mentioned above, I'm seeing a 40MB mysql tmp table
in /tmp updated every once in a while, and there's lots of write
activity to the partition that holds /tmp. 

I'm now running a "repair table File" after cancelling the restore
job, but I guess there's something seriously wrong with the above
setup. The other bacula servers generally run on smaller machines, but
come up with a dir tree after five to ten minutes for a comparable job,
which is acceptable; 5 hours seems way off the mark. 

The bacula db was created using bacula's own mysql init script, so I
assume all the indices were created correctly (and, more importantly, no
extra ones that might slow bacula down). Insert performance is
great during backups; we usually achieve around 30-50MB/sec sustained
for 3 to 4 jobs running in parallel. 

Memory usage (despite mysqltuner's warning) is OK during the
restore run - no swapping, and 5GB of 18GB total is still used for buffer
cache. 


Thanks in advance for any help / hints or thoughts, 

Uwe 

PS: Please let me know if I should provide more info on the setup that
would help in analyzing this problem. 

-- 
NIONEX --- Ein Unternehmen der Bertelsmann AG




Re: [Bacula-users] Have we reached bacula's limits?

2012-01-23 Thread Phil Stracchino
On 01/23/2012 10:28 AM, Uwe Schuerkamp wrote:
 I've run mysqltuner on the db a couple of times as this isn't the
 first time we've had problems during a restore, and it looks ok (to my
 untrained, non-dba eyes anyway):
 
 ##
  General Statistics
  --
 [--] Skipped version check for MySQLTuner script
 [OK] Currently running supported MySQL version 5.1.52-log
 [OK] Operating on 64-bit architecture
 
  Storage Engine Statistics
  ---
 [--] Status: -Archive -BDB -Federated -InnoDB -ISAM -NDBCluster 
 [--] Data in MyISAM tables: 31G (Tables: 33)
 [!!] Total fragmented tables: 2
 
  Performance Metrics
  -
 [--] Up for: 35s (57 q [1.629 qps], 12 conn, TX: 44K, RX: 3K)
 [--] Reads / Writes: 100% / 0%
 [--] Total buffers: 12.0G global + 83.2M per thread (151 max threads)
 [!!] Maximum possible memory usage: 24.3G (137% of installed RAM)
 [OK] Slow queries: 0% (0/57)
 [OK] Highest usage of available connections: 0% (1/151)
 [OK] Key buffer size / total MyISAM indexes: 11.9G/15.3G
 [OK] Key buffer hit rate: 100.0% (6K cached / 2 reads)
 [!!] Query cache efficiency: 0.0% (0 cached / 23 selects)
 [OK] Query cache prunes per day: 0
 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 9 sorts)
 [!!] Temporary tables created on disk: 34% (8 on disk / 23 total)
 [OK] Thread cache hit rate: 91% (1 created / 12 connections)
 [OK] Table cache hit rate: 85% (41 open / 48 opened)
 [OK] Open file limit used: 1% (83/4K)
 [OK] Table locks acquired immediately: 100% (38 immediate / 38 locks)
 [!!] Connections aborted: 8%
 
  Recommendations
  -
 General recommendations:
 Run OPTIMIZE TABLE to defragment tables for better performance
 MySQL started within last 24 hours - recommendations may be
 inaccurate
 Reduce your overall MySQL memory footprint for system stability
 When making adjustments, make tmp_table_size/max_heap_table_size
 equal
 Reduce your SELECT DISTINCT queries without LIMIT clauses
 Your applications are not closing MySQL connections properly
 Variables to adjust:
   *** MySQL's maximum memory usage is dangerously high ***
   *** Add RAM before increasing MySQL buffer variables ***
 query_cache_limit ( 16M, or use smaller result sets)
 tmp_table_size ( 61M)
 max_heap_table_size ( 16M)
 
 ##
 
 For the restore run mentioned above, I'm seeing a 40MB mysql tmp table
 in /tmp updated every once in a while, and there's lots of write
 activity to the partition that holds /tmp. 


If max_heap_table_size is 16M, then in-memory temporary tables are
limited to 16M too.  The maximum in-memory temporary table size is the
smaller of tmp_table_size and max_heap_table_size.  You only ever have a
single DB connection; why are you allowing 151 connections?

Cut max_connections to 10, increase tmp_table_size and
max_heap_table_size to 64M or even 128M, increase table_cache to 64,
disable the query cache because you're going to have few if any
frequently-repeated queries, update to MySQL 5.5, and seriously,
seriously consider converting to InnoDB.  It is a MUCH higher
performance storage engine than MyISAM.  Remember that MyISAM was
designed to yield *acceptable* performance in shared installations on
machines with less than 32MB of RAM.
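Phil's suggestions translate to roughly this my.cnf fragment (values are illustrative, not tuned for any particular box; adjust to your available RAM):

```ini
# /etc/my.cnf fragment -- illustrative values only
[mysqld]
max_connections     = 10    # Bacula normally holds a single connection
tmp_table_size      = 64M   # in-memory temp table limit is the smaller of
max_heap_table_size = 64M   # these two values, so raise them together
table_cache         = 64
query_cache_type    = 0     # few repeated queries, so the cache buys nothing
query_cache_size    = 0
```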


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
  Renaissance Man, Unix ronin, Perl hacker, SQL wrangler, Free Stater
 It's not the years, it's the mileage.



Re: [Bacula-users] Have we reached bacula's limits?

2012-01-23 Thread Alan Brown

1: Make sure you have enough RAM in your MySQL box (i.e., several tens of GB).

2: Make sure you tune MySQL properly. Most of the supplied config 
examples are for sub-1GB memory configurations.

3: Make sure you have the _correct_ indexes built. This is in the Bacula 
knowledgebase.

4: For systems with tens of millions of files - seriously consider moving 
to PostgreSQL. MySQL is a memory hog.
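For point 3, the usual first check is whether the File table carries the composite index used when building the restore tree. A sketch (the exact index set recommended in the knowledgebase varies by Bacula version, so treat the index name and columns here as illustrative):

```sql
-- See which indexes currently exist on the large catalog tables
SHOW INDEX FROM File;
SHOW INDEX FROM Path;

-- Commonly recommended composite index for restore/tree-building queries
CREATE INDEX file_jpfid_idx ON File (JobId, PathId, FilenameId);
```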



On 23/01/12 15:28, Uwe Schuerkamp wrote:
 Hi folks,

 we're running four bacula installations, most of them on version 5.0.x
 compiled from source on CentOS 5.x / 6.x 64bit servers. We're mostly
 happy with the setup, backups are fast, reliable and generally do not
 cause us a lot of headaches.

 Today, a colleague asked me to restore some data from the last backup
 of a client on our largest bacula install, namely (according to bweb)

 DB Size:
 Total clients:107 Total bytes stored: 34.41 TB
 Total files:  47495362  Database size:31.64 GB

 MySQL isn't exactly huge, and restoring the data didn't look like too
 much of a big deal at first:

 +---+---+--++-+--+
 | 9,582 | F |  527,265 | 55,999,595,573 | 2012-01-20 21:05:03 |
 OFFLINE14_02 |
 | 9,652 | I |1,150 |  1,534,499,185 | 2012-01-21 18:34:56 |
 OFFLINE15_01 |
 +---+---+--++-+--+

 So we're talking a mere 500,000 files (he only needed a single dir out
 of the bunch).

 4,5 hours later, Bacula is still sitting at the Building Directory
 Tree message, without so much as a single . or + hopefully
 showing up in the terminal, indicating some kind of progress.

 I've run mysqltuner on the db a couple of times as this isn't the
 first time we've had problems during a restore, and it looks ok (to my
 untrained, non-dba eyes anyway):

 ##
  General Statistics
   --
 [--] Skipped version check for MySQLTuner script
 [OK] Currently running supported MySQL version 5.1.52-log
 [OK] Operating on 64-bit architecture

  Storage Engine Statistics
   ---
 [--] Status: -Archive -BDB -Federated -InnoDB -ISAM -NDBCluster
 [--] Data in MyISAM tables: 31G (Tables: 33)
 [!!] Total fragmented tables: 2

  Performance Metrics
   -
 [--] Up for: 35s (57 q [1.629 qps], 12 conn, TX: 44K, RX: 3K)
 [--] Reads / Writes: 100% / 0%
 [--] Total buffers: 12.0G global + 83.2M per thread (151 max threads)
 [!!] Maximum possible memory usage: 24.3G (137% of installed RAM)
 [OK] Slow queries: 0% (0/57)
 [OK] Highest usage of available connections: 0% (1/151)
 [OK] Key buffer size / total MyISAM indexes: 11.9G/15.3G
 [OK] Key buffer hit rate: 100.0% (6K cached / 2 reads)
 [!!] Query cache efficiency: 0.0% (0 cached / 23 selects)
 [OK] Query cache prunes per day: 0
 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 9 sorts)
 [!!] Temporary tables created on disk: 34% (8 on disk / 23 total)
 [OK] Thread cache hit rate: 91% (1 created / 12 connections)
 [OK] Table cache hit rate: 85% (41 open / 48 opened)
 [OK] Open file limit used: 1% (83/4K)
 [OK] Table locks acquired immediately: 100% (38 immediate / 38 locks)
 [!!] Connections aborted: 8%

  Recommendations
   -
 General recommendations:
  Run OPTIMIZE TABLE to defragment tables for better performance
  MySQL started within last 24 hours - recommendations may be
  inaccurate
  Reduce your overall MySQL memory footprint for system stability
  When making adjustments, make tmp_table_size/max_heap_table_size
  equal
  Reduce your SELECT DISTINCT queries without LIMIT clauses
  Your applications are not closing MySQL connections properly
 Variables to adjust:
*** MySQL's maximum memory usage is dangerously high ***
*** Add RAM before increasing MySQL buffer variables ***
  query_cache_limit (  16M, or use smaller result sets)
  tmp_table_size (  61M)
  max_heap_table_size (  16M)

 ##

 For the restore run mentioned above, I'm seeing a 40MB mysql tmp table
 in /tmp updated every once in a while, and there's lots of write
 activity to the partition that holds /tmp.

 I'm now running a repair table File after cancelling the restore
 job, but I guess there's something seriously wrong with the above
 setup. The other bacula servers generally run on smaller machines, but
 come up with a dir tree after five to ten minutes for a comparable job
 which is acceptable, but 5 hours seems way off the mark.

 the bacula db was created using bacula's own mysql init script, so I
 assume all the indices where created (and more importantly, no extra
 ones that might slow 

Re: [Bacula-users] Will Bacula be faster with Two Tape Devices?

2012-01-23 Thread John Drescher
 But will this speed up my backups?   Generally speaking, should Bacula be
 able to write to the two tape devices more quickly than it can write
 to one?  The unit in question is a Tandberg Data StorageLibrary T24 LTO
 with two serial attached SCSI LTO4 tape devices, attached to a server
 running CentOS 6.2.

It will not speed up a single job, since one job cannot use more than one
drive. However, if you have concurrent jobs with spooling, and you can
provide enough bandwidth to keep up with the drives, it can speed up
multiple jobs. Remember that at 2:1 compression these LTO4 drives
write at 120MB/s each, so you will need a raid array (raid 0
maybe) or a fast SSD for your spool location, and this raid probably
should not be on the same drives as your source data.
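For reference, concurrent spooled jobs hinge on a few directives like these (the fragments are a sketch; resource names and the spool path are illustrative, and real Job/Device resources need more directives than shown):

```conf
# bacula-dir.conf fragment (illustrative)
Job {
  Name = "client1-backup"
  Spool Data = yes        # stage to fast disk, despool to tape at full speed
  Spool Size = 100G
}

# bacula-sd.conf fragment (illustrative)
Device {
  Name = "LTO4-Drive-0"
  Spool Directory = /spool/bacula   # RAID0/SSD, not the source-data disks
  Maximum Concurrent Jobs = 4
}
```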

John



[Bacula-users] SCSI Errors

2012-01-23 Thread Nikola Lazic
I have Bacula 5.0.3 on FreeBSD 8.2 backing up to an IBM Ultrium ULT3580 tape
drive, using 400~800GB tapes, connected to an Adaptec 39320LPE Ultra320 SCSI
adapter. I'm using SQLite and have 4GB of RAM.

I used to have a Quantum DLTV4 tape drive, but I was getting SCSI errors
every few weeks, so I switched to the Ultrium.

I've used 3 other tapes in this drive with no issues. This tape fails:

# cat /var/log/messages
Jan 21 00:44:00 bsd kernel: (sa0:ahd0:0:6:0): WRITE FILEMARKS(6). CDB: 10 0
0 0 1 0 
Jan 21 00:44:00 bsd kernel: (sa0:ahd0:0:6:0): CAM status: SCSI Status Error
Jan 21 00:44:00 bsd kernel: (sa0:ahd0:0:6:0): SCSI status: Check Condition
Jan 21 00:44:00 bsd kernel: (sa0:ahd0:0:6:0): SCSI sense: MEDIUM ERROR
asc:31,0 (Medium format corrupted) field replaceable unit: 30
Jan 21 00:44:00 bsd kernel: (sa0:ahd0:0:6:0): WRITE FILEMARKS(6). CDB: 10 0
0 0 2 0 
Jan 21 00:44:00 bsd kernel: (sa0:ahd0:0:6:0): CAM status: SCSI Status Error
Jan 21 00:44:00 bsd kernel: (sa0:ahd0:0:6:0): SCSI status: Check Condition
Jan 21 00:44:00 bsd kernel: (sa0:ahd0:0:6:0): SCSI sense: MEDIUM ERROR
asc:31,0 (Medium format corrupted) field replaceable unit: 30

# cat /var/db/bacula/log
21-Jan 00:43 bacula-server-sd JobId 1070: Committing spooled data to Volume
BSD2a. Despooling 9,641,289,685 bytes ...
21-Jan 00:43 bacula-server-sd JobId 1070: End of Volume BSD2a at 21:7164
on device ULT3580 (/dev/nsa0). Write of 64512 bytes got 0.
21-Jan 00:44 bacula-server-sd JobId 1070: Error: Error writing final EOF to
tape. This Volume may not be readable.
dev.c:1745 ioctl MTWEOF error on ULT3580 (/dev/nsa0). ERR=Input/output
error.
21-Jan 00:44 bacula-server-sd JobId 1070: End of medium on Volume BSD2a
Bytes=13,968,912,384 Blocks=216,531 at 21-Jan-2012 00:44.
21-Jan 00:44 bacula-server-sd JobId 1070: Job nti-bsd.2012-01-20_23.05.00_26
is waiting. Cannot find any appendable volumes.
Please use the label command to create a new Volume for:
Storage:  ULT3580 (/dev/nsa0)
Pool: Ultium
Media type:   LTO4

I've tried migrating the jobs from the tape, but I get:
# cat /var/db/bacula/log
23-Jan 15:34 bacula-server-sd JobId 1079: Forward spacing Volume BSD2a to
file:block 20:0.
23-Jan 15:35 bacula-server-sd JobId 1079: Error: block.c:1002 Read error on
fd=5 at file:blk 21:7163 on device ULT3580 (/dev/nsa0). ERR=Operation not
permitted.
23-Jan 15:35 bacula-server-sd JobId 1079: Error: Unexpected Tape is Off-line
23-Jan 15:35 bacula-server-dir JobId 1079: Error: Bacula bacula-server-dir
5.0.3 (04Aug10): 23-Jan-2012 15:35:54

I was able to run a multi-tape btape fill to this tape with no errors. I
haven't completed the read test yet.

Here's a sample error from my old Quantum DLTV4 drive, since I feel like
these might be connected.

Dec 14 03:34:35 bsd kernel: (sa0:ahd0:0:5:0): SCSI status: Check Condition
Dec 14 03:34:35 bsd kernel: (sa0:ahd0:0:5:0): SCSI sense: UNIT ATTENTION
csi:e0,40,0,2e asc:29,3 (Bus device reset function occurred)
Dec 14 05:34:46 bsd kernel: ahd0: Recovery Initiated - Card was not paused
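
The sense data in these kernel messages decodes mechanically. Below is a sketch with a minimal additional-sense-code table; only the two codes appearing in the logs above are included (the full table lives in the SPC standard):

```python
# Minimal ASC/ASCQ lookup for the codes seen in the logs above (values per SPC).
ASC_ASCQ = {
    (0x31, 0x00): "Medium format corrupted",
    (0x29, 0x03): "Bus device reset function occurred",
}

def decode_sense(asc: int, ascq: int) -> str:
    """Return the additional-sense text for an (ASC, ASCQ) pair."""
    return ASC_ASCQ.get((asc, ascq), f"unknown asc:{asc:x},{ascq:x}")

# "asc:31,0" from the MEDIUM ERROR lines above:
print(decode_sense(0x31, 0x00))  # Medium format corrupted
```

A MEDIUM ERROR (as opposed to the old drive's UNIT ATTENTION) points at the tape or heads rather than the bus.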

Other things I've done:
* Replaced the SCSI adapter
* Replaced the SCSI cable and terminator

Here's the current SCSI adapter info
# dmesg
ahd0: Adaptec 39320LPE Ultra320 SCSI adapter port
0xe400-0xe4ff,0xe800-0xe8ff mem 0xfbd7e000-0xfbd7 irq 16 at device 4.0
on pci4
ahd0: [ITHREAD]
aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 101-133MHz, 512 SCBs

The only other thing I haven't done is move to MySQL, since my bacula.db is
about 3.6 GB, holding ~18M files across 332 jobs:

select sum(JobFiles) from job;
+---+
| sum(JobFiles) |
+---+
| 18685071  |
+---+
Enter SQL query: select count(*) from job;
+--+
| count(*) |
+--+
| 332  |
+--+
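
Those catalog numbers can also be pulled with a short script rather than the interactive query tool; a sketch using Python's sqlite3 (the table and column names, Job and JobFiles, match the queries above; the in-memory demo table is a stand-in, so for the real thing connect to a copy of bacula.db):

```python
import sqlite3

def catalog_stats(con: sqlite3.Connection) -> tuple:
    """Return (total files, job count) from a Bacula catalog's Job table."""
    total_files = con.execute("SELECT SUM(JobFiles) FROM Job").fetchone()[0] or 0
    job_count = con.execute("SELECT COUNT(*) FROM Job").fetchone()[0]
    return total_files, job_count

# Demo against a throwaway in-memory table with the same shape.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Job (JobId INTEGER PRIMARY KEY, JobFiles INTEGER)")
con.executemany("INSERT INTO Job (JobFiles) VALUES (?)", [(100,), (250,)])
files, jobs = catalog_stats(con)
print(files, jobs)  # 350 2
```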

I have no idea where else to look. Any help is appreciated!

Nikola Lazic





Re: [Bacula-users] SCSI Errors

2012-01-23 Thread John Drescher
On Mon, Jan 23, 2012 at 6:08 PM, Nikola Lazic n...@vpi.us wrote:
 I have Bacula 5.0.3 on FreeBSD 8.2 backing up to an IBM Ultrium ULT3580 tape
 drive, using 400~800GB tapes, connected to an Adaptec 39320LPE Ultra320 SCSI
 adapter. I'm using SQLite and have 4GB of RAM.

 [...]

 I have no idea where else to look. Any help is appreciated!

Bad tape? Does the drive need cleaning?

John



[Bacula-users] critical error -- tape labels get corrupted, previous backups unreadable

2012-01-23 Thread mark . bergman
I'm experiencing a critical problem where tape labels on volumes with data
get corrupted, leaving all data on the tape inaccessible to bacula.

I'm running bacula 5.2.2 built from source, under Linux (CentOS 5.7
x86_64).

This problem has happened with approximately 15 tapes over approximately 6
months, mostly new LTO-4 media, but some LTO-3 media that's being reused.
The problem is sporadic, appearing in approximately 1 out of 60 tapes
per week.

I do not think the issue is related to the physical media or the tape
drives. One tape was last written successfully while in drive 0, then appeared
corrupt when a later job tried to use it in drive 1. Another tape was last
written successfully while in drive 1, then appeared corrupt when a later job
tried to use it in drive 0.

I'm not sure what combination of circumstances triggers the problem,
but it seems to show up after:

* a tape is used for backups
* the tape is unloaded
* the tape is later reloaded for use in another job

Data from uncorrupted tapes can be read and restored without problems.

Purging the corrupt volume from the bacula database and relabeling media
allows me to reuse the tape successfully (at the loss of TB of backups!).

A tape will work successfully for many jobs, then later bacula will mount the
tape and be unable to read the label.

Here are the log records for a particular volume. It was labeled about
Dec 22, 2011. First used on Jan 4 2012. Used successfully for 10 jobs
(350.49GB), then the label was corrupted.

--
04-Jan 06:24 sbia-infr-vbacula JobId 42676: Using Volume 004090 from 
'Scratch' pool.
04-Jan 06:25 sbia-infr-vbacula JobId 42676: Wrote label to prelabeled Volume 
004090 on device ml6000-drv1 (/dev/tape1-ml6000)
04-Jan 06:25 sbia-infr-vbacula JobId 42676: New volume 004090 mounted on 
device ml6000-drv1 (/dev/tape1-ml6000) at 04-Jan-2012 06:25.
04-Jan 08:23 sbia-infr-vbacula JobId 42676: Committing spooled data to Volume 
004090. Despooling 37,003,975,390 bytes ...
05-Jan 06:47 sbia-infr-vbacula JobId 42724: Volume 004090 previously written, 
moving to end of data.
05-Jan 06:49 sbia-infr-vbacula JobId 42724: Ready to append to end of Volume 
004090 at file=69.
05-Jan 11:05 sbia-infr-vbacula JobId 42724: Committing spooled data to Volume 
004090. Despooling 495 bytes ...
06-Jan 06:51 sbia-infr-vbacula JobId 42746: Volume 004090 previously written, 
moving to end of data.
06-Jan 06:52 sbia-infr-vbacula JobId 42746: Ready to append to end of Volume 
004090 at file=70.
06-Jan 12:08 sbia-infr-vbacula JobId 42746: Committing spooled data to Volume 
004090. Despooling 495 bytes ...
07-Jan 06:48 sbia-infr-vbacula JobId 42768: Volume 004090 previously written, 
moving to end of data.
07-Jan 06:50 sbia-infr-vbacula JobId 42768: Ready to append to end of Volume 
004090 at file=71.
07-Jan 12:07 sbia-infr-vbacula JobId 42768: Committing spooled data to Volume 
004090. Despooling 495 bytes ...
08-Jan 06:48 sbia-infr-vbacula JobId 42790: Volume 004090 previously written, 
moving to end of data.
08-Jan 06:50 sbia-infr-vbacula JobId 42790: Ready to append to end of Volume 
004090 at file=72.
08-Jan 11:40 sbia-infr-vbacula JobId 42790: Committing spooled data to Volume 
004090. Despooling 495 bytes ...
09-Jan 11:12 sbia-infr-vbacula JobId 42812: Committing spooled data to Volume 
004090. Despooling 495 bytes ...
10-Jan 05:47 sbia-infr-vbacula JobId 42831: Volume 004090 previously written, 
moving to end of data.
10-Jan 05:49 sbia-infr-vbacula JobId 42831: Ready to append to end of Volume 
004090 at file=74.
10-Jan 23:52 sbia-infr-vbacula JobId 42831: Committing spooled data to Volume 
004090. Despooling 35,276,606,182 bytes ...
11-Jan 12:58 sbia-infr-vbacula JobId 42856: Committing spooled data to Volume 
004090. Despooling 495 bytes ...
12-Jan 06:45 sbia-infr-vbacula JobId 42879: Volume 004090 previously written, 
moving to end of data.
12-Jan 06:45 sbia-infr-vbacula JobId 42879: Ready to append to end of Volume 
004090 at file=83.
12-Jan 12:56 sbia-infr-vbacula JobId 42879: Committing spooled data to Volume 
004090. Despooling 5,914 bytes ...
13-Jan 06:45 sbia-infr-vbacula JobId 42901: Committing spooled data to Volume 
004090. Despooling 404 bytes ...
15-Jan 16:54 sbia-infr-vbacula JobId 42924: Please mount Volume 004090 or 
label a new one for:
--
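
One way to spot trouble like this early is to pull the append positions out of the SD log and look for jumps, such as file=74 going straight to file=83 in the excerpt above. A sketch (the regex is written against the "Ready to append" lines quoted here; a jump is not by itself proof of corruption, since one despool can write several file marks, but it flags volumes worth checking):

```python
import re

# Matches the append position in SD log lines like
# "Ready to append to end of Volume 004090 at file=74."
APPEND_RE = re.compile(r"Ready to append to end of Volume\s+(\S+)\s+at file=(\d+)")

def append_positions(log_text: str) -> list:
    """Extract the file= append positions, in order of appearance."""
    return [int(m.group(2)) for m in APPEND_RE.finditer(log_text)]

def gaps(positions: list) -> list:
    """Return (previous, next) pairs where the position jumped by more than one."""
    return [(a, b) for a, b in zip(positions, positions[1:]) if b - a > 1]

log = """Ready to append to end of Volume 004090 at file=72.
Ready to append to end of Volume 004090 at file=74.
Ready to append to end of Volume 004090 at file=83."""
print(gaps(append_positions(log)))  # [(72, 74), (74, 83)]
```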




At this point, the volume 004090 is unusable. Running 'btape' on that volume
reports:

[root@sbia-infr1 working]# ../bin/btape -v ml6000-drv0
Tape block granularity is 1024 bytes.
btape: butil.c:290 Using device: ml6000-drv0 for writing.
23-Jan 18:14 btape JobId 0: 3301 Issuing autochanger loaded? drive 0
command.
23-Jan 18:14 btape JobId 0: 3302 Autochanger loaded? drive 0, result is Slot
9.
btape: btape.c:477 open device ml6000-drv0 (/dev/tape0-ml6000): OK
*readlabel
btape: btape.c:526 Volume has no label.

Volume Label:
Id: **error**VerNo : 0
VolName