[Bacula-users] Large maildir backup

2008-11-27 Thread Proskurin Kirill
Hello all!

Soon I will deploy a large email server - it will use maildirs and will
be about 1TB of mail with really many small files.

Are there any hints for backing this up with Bacula?


-- 
Best regards,
Proskurin Kirill



Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Silver Salonen
On Thursday 27 November 2008 09:50:14 Proskurin Kirill wrote:
 Hello all!
 
 Soon I will deploy a large email server - it will use maildirs and will
 be about 1TB of mail with really many small files.

 Are there any hints for backing this up with Bacula?
 
 
 -- 
 Best regards,
 Proskurin Kirill

Hello.

I think Bacula is quite good for backing up maildirs as they consist of
separate files, one per e-mail message. I don't think small files are a problem.

--
Silver



Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Silver Salonen
On Thursday 27 November 2008 11:07:41 James Cort wrote:
 Silver Salonen wrote:
  On Thursday 27 November 2008 09:50:14 Proskurin Kirill wrote:
  Hello all!
 
  Soon I will deploy a large email server - it will use maildirs and will
  be about 1TB of mail with really many small files.

  Are there any hints for backing this up with Bacula?

  I think Bacula is quite good for backing up maildirs as they consist of
  separate files, one per e-mail message. I don't think small files are a
  problem.
 
 I don't think they're a problem either and I also backup a maildir-based
 mail server.
 
 However, one thing you may want to be aware of - unless you take
 specific steps to avoid it, the maildirs on tape won't necessarily be in
 a consistent state.  Obviously this won't affect your IMAP server - but
 it does mean that when you restore, metadata like whether or not emails
 have been read or replied to and recently received/sent email won't be a
 perfect snapshot of how the mailserver looked at any given point in time.

And when you restore from many incrementals in a row, you end up seeing many
duplicate messages that were deleted or moved between those incrementals.

It'll be better in Bacula 3.0, I guess :)

--
Silver



Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Jesper Krogh
Boris Kunstleben onOffice Software GmbH wrote:
 Hi,
 
 I have been doing exactly that since last Thursday.
 I have about 1.6TB in maildirs and a huge number of small files. I have to
 say it is awfully slow: backing up a directory with about 190GB of maildirs
 took "Elapsed time: 1 day 14 hours 49 mins 34 secs".
 On the other hand, a server with documents and images (about 700GB) took
 much less time.
 All the servers are virtual environments (Virtuozzo).

Can you give us the time for doing a tar to /dev/null of the fileset?

time tar cf /dev/null /path/to/maildir

Then we'd have a feeling for the actual read time of the files from the
filesystem.

-- 
Jesper




Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Arno Lehmann
Hi,

27.11.2008 11:31, Boris Kunstleben wrote:
 Hi,
 
 I have been doing exactly that since last Thursday.
 I have about 1.6TB in maildirs and a huge number of small files. I have to
 say it is awfully slow: backing up a directory with about 190GB of maildirs
 took "Elapsed time: 1 day 14 hours 49 mins 34 secs".
 On the other hand, a server with documents and images (about 700GB) took
 much less time.
 All the servers are virtual environments (Virtuozzo).

 Any ideas would be appreciated.

Looks like the catalog database is the bottleneck here. All the files
need to be added to it.

There are two rather simple solutions:
- Don't keep file information for this job in the catalog. This makes 
restoring single mails difficult.
- Tune your catalog database for faster inserts. That can mean moving 
it to a faster machine, assigning more memory for it, or dropping some 
indexes (during inserts). If you're not yet using batch inserts, try 
to recompile Bacula with batch-inserts enabled.
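
(A minimal sketch of the first option, using the Pool resource's
"Catalog Files" directive - the pool name and retention value here are just
examples, not from this thread:)

Pool {
  Name = MaildirPool          # example name
  Pool Type = Backup
  Catalog Files = no          # no per-file catalog records -> fast inserts,
                              # but restoring single mails becomes difficult
  Volume Retention = 1 month  # example value
}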

Arno

 Kind regards Boris Kunstleben
 
 
 Silver Salonen wrote:
 On Thursday 27 November 2008 11:07:41 James Cort wrote:
   
 Silver Salonen wrote:
 
 On Thursday 27 November 2008 09:50:14 Proskurin Kirill wrote:
   
 Hello all!

 Soon I will deploy a large email server - it will use maildirs and will
 be about 1TB of mail with really many small files.

 Are there any hints for backing this up with Bacula?

 I think Bacula is quite good for backing up maildirs as they consist of
 separate files, one per e-mail message. I don't think small files are a
 problem.
   
 I don't think they're a problem either and I also backup a maildir-based
 mail server.

 However, one thing you may want to be aware of - unless you take
 specific steps to avoid it, the maildirs on tape won't necessarily be in
 a consistent state.  Obviously this won't affect your IMAP server - but
 it does mean that when you restore, metadata like whether or not emails
 have been read or replied to and recently received/sent email won't be a
 perfect snapshot of how the mailserver looked at any given point in time.
 
 And when you restore from many incrementals in a row, you end up seeing many
 duplicate messages that were deleted or moved between those incrementals.

 It'll be better in Bacula 3.0, I guess :)

 --
 Silver

 
 

-- 
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück
www.its-lehmann.de



Re: [Bacula-users] How to set up large database backup

2008-11-27 Thread David Ballester
2008/11/27 David Jurke [EMAIL PROTECTED]

  Whoa!



 Okay, I need to go talk to the DBAs about this lot, lots of it is too far
 on the DBA side for me to comment intelligently on it. It does sound
 promising, though - if we back up daily only the current month's data (the
 rest will be in static partitions), only the weekly(?) full backup will
 have space/time issues.



 But, after some thought...



Basically, as I understand it, your first group of comments are about not
 backing up empty space, as per your example if there is only 10GB data in a
 100GB data file. However, our database is growing rapidly, and our DBAs tend
 to allocate smaller tablespace files more frequently (rather than huge files
 seldom),


Bad idea(tm)

Oracle writes checkpoints (something like a 'stamp of time') into each
datafile header and into other critical Oracle files and memory structures,
to be able to restore the whole scenario to a known state if a crash occurs.
These 'marks' are made often, and of course they have a cost in CPU/IO time.
The more datafiles you have, the more time will be spent coordinating and
executing the stamp (it's an exclusive operation).

Again, it could be very useful to know what release of Oracle RDBMS you are
running (9iR2, 10gR2...?) and what platform/architecture (Linux x86/x86_64,
Itanium 2...?), but I think that you will have no problems defining bigger
datafiles (bear in mind that Oracle defines 128GB as the maximum for a
'smallfile' type datafile). We are using tablespaces of 360GB each, built
from groups of 128GB datafiles. Another reason to use bigger datafiles:
less time spent by the DBA creating datafiles ;)

 so at any time there's probably not more than 20 GB of unused space in the
 database files, which is less than 10% of our database currently, and the %
 will only decrease. So yes there would be a benefit, but not huge.


Ok, rman comes to the rescue again ;) . If you're on 10gR2, you can define a
'tracking file' where Oracle will store a little info about which blocks have
changed. Then you make a first full backup, but after that you can use this
tracking file with rman and it will only back up the data blocks that
changed. The tracking file can be 'reset' at convenience (after each backup,
for example). I did not say it before, but rman is able to do
full/cumulative/differential backups.
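
(A minimal sketch of that, assuming 10gR2 - the tracking file path is just
an example:)

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  2    USING FILE '/u01/oradata/bct.f';

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;  -- first full (level 0) backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;  -- later runs read only changed blocks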



 Your RMAN example still backs up the entire database to disk and then later
 to tape, which leaves us with the problems of disk space and backup
 duration. As mentioned above, these won't be mitigated very much by only
 backing up data and not empty space.


rman has a lot of options; of course you can back up only a particular
tablespace or a group of them, only one datafile even if the tablespace has
more than one, only archive logs, etc...


http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/toc.htm


What I'd like to do is halve the backup time and remove the requirement for
 intermediate disk storage by backing up the tablespaces (RMAN or otherwise)
 straight to tape.


Oracle provides a software API for backup solution developers (Legato,
Tivoli, etc...), but it is offered for $$$ - I don't know the price or the
conditions.


 For which the only solution anyone's suggested which would actually work
 with Bacula is a variation of Kjetil's suggestion, running multiple backup
 tasks, one per tablespace. A little ugly in that there will be a LOT of
 backup jobs and hence a lot of emails in the morning, but it would work.


Do it with rman (datafile per datafile or tablespace per tablespace).
rman will NOT lock the datafile header, and no extra redo info will be
generated.




 The DBAs are already talking about partitioning and making the older
 tablespaces read-only and only backing them up weekly or fortnightly or
 whatever, which solves the problem for the daily backups but still leaves us
 with a weekly/fortnightly backup which won't fit in the backup staging disk
 and won't complete before the next backup is due to kick in. It may be that
 we have to just accept that and not do daily backups over the weekend, say,
 working around the disk space issue somehow.


Disk is cheap today - think about it. Do you need the latest and fastest
disks? IMHO, no.



 For various reasons our hot backup site isn't ready yet. The business have
 agreed that in the interim an outage of several days is acceptable, while we
 restore from tape and start it all up. At this stage (it's a work in
 progress), an outage of this application doesn't affect external customers,
 only internally, and the pain is not great. Long term we will have a hot
 database replica at another site ready to step up to production status in
 the event of problems, but as I said this isn't ready yet. I don't know
 whether it will be Oracle Data Guard, but it'll be equivalent. We already do
 this for other applications/databases, the DBAs are well on top of this
 stuff. I don't know the details.


The Data Guard option is the best, a 

[Bacula-users] Some questions about pools and volumes

2008-11-27 Thread Personal Técnico




Hi!

I have configured these 2 pools for backing up a server:

Pool {
 Name = Incremental
 Label Format = "Server-Incr"
 Pool Type = Backup
 Recycle = yes 
 AutoPrune = yes 
 Storage = BackupRAID5
 Volume Use Duration = 7 days
 Volume Retention = 7 days
 Maximum Volume Jobs = 7
 Maximum Volumes = 2
 Recycle Oldest Volume = yes
 Maximum Volume Bytes = 200GB
}

Pool {
 Name = Full
 Label Format = "Server-Full"
 Pool Type = Backup
 Recycle = yes 
 AutoPrune = yes 
 Storage = BackupRAID5
 Volume Use Duration = 1d
 Volume Retention = 2 months
 Maximum Volume Jobs = 1
 Maximum Volumes = 1
 Recycle Oldest Volume = yes
 Maximum Volume Bytes = 200GB
}

With this configuration, I have some questions with no answers:

  With this configuration, I get 14 days of Incremental backups (12
Incremental + 2 Full -no prior Incremental-) going to the Incremental pool
and just 1 Full backup going to the Full pool. Is that correct, or am I wrong?
  If I change "Label Format" in the Incremental pool to the new value
"${Pool}", what will Bacula do at the moment of generating the second
volume? If the pool name gets variable expansion, the first volume will be
labeled "Incremental" (without a jobid number), but what will be the name of
the second volume?
  In the Full pool, where I have defined only 1 maximum volume, when the
second backup is going to execute (1 month after the first one), Bacula
will prune/purge the volume because it was marked as "Used". Is there any
way to avoid a full prune/purge and only delete some records (not the whole
volume)? Maybe "Volume Retention" is the solution?


Thanks for all!!





Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Arno Lehmann
Hi,

27.11.2008 12:15, James Cort wrote:
 Arno Lehmann wrote:
 - Tune your catalog database for faster inserts. That can mean moving 
 it to a faster machine, assigning more memory for it, or dropping some 
 indexes (during inserts). If you're not yet using batch inserts, try 
 to recompile Bacula with batch-inserts enabled.
 
 Is that where it inserts everything to a temporary table then copies the
 entire table into the live one?

Yes, that's batch inserting - which can produce its own problems, by the
way, because it might need lots of temp space.

Arno

 
 James.

-- 
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück
www.its-lehmann.de



Re: [Bacula-users] How to set up large database backup

2008-11-27 Thread David Ballester
take a look at

http://www.oracle.com/technology/deploy/availability/pdf/oracle-openworld-2007/S291487_1_Chien.pdf

Backup and Recovery Best Practices for Very Large Databases (VLDBs)


Regards

D.


Re: [Bacula-users] How to set up large database backup

2008-11-27 Thread Mike Holden
David Jurke wrote:
 The DBAs are already talking about partitioning and making the older
 tablespaces read-only and only backing them up weekly or fortnightly or
 whatever, which solves the problem for the daily backups but still leaves
 us with a weekly/fortnightly backup which won't fit in the backup staging
 disk and won't complete before the next backup is due to kick in. It may
 be that we have to just accept that and not do daily backups over the
 weekend, say, working around the disk space issue somehow.

We use a similar partitioning scheme and make stuff read-only once it is
older. For backups, what we do is split the read-only partitions into 8
groups, and once a week we back up one of these 8 groups on a rota. This
means that each week we only back up one eighth of the RO data, and each RO
file gets backed up once every 8 weeks. The tape retention is set so that
we always have a couple of spare copies of each RO file in the archive
before it is overwritten. This works pretty well for us.
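
(A sketch of how one might pick the week's group, assuming bash and an
8-group rota keyed to the ISO week number - names are examples:)

#!/bin/bash
# Pick this week's read-only group (0-7) from the ISO week number.
week=$(date +%V)            # 01..53
group=$(( 10#$week % 8 ))   # 10# avoids octal parsing of "08", "09"
echo "This week's read-only backup group: $group"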

As an additional note, rather than creating multiple smallish files on a
regular basis, I would suggest resizing files - you can even automate this
by use of the AUTOEXTEND clause on datafiles. This keeps the number of
files lower, which can help during checkpointing (fewer files = faster
checkpointing). Although there's no realistic limit on file sizes these
days most of the time, I would suggest keeping files to, say, 50GB or
smaller, just because of recovery times when only a single file needs to
be recovered.
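
(For example - a sketch, with a hypothetical datafile path:)

ALTER DATABASE DATAFILE '/u02/oradata/prod/data01.dbf'
  AUTOEXTEND ON NEXT 1G MAXSIZE 50G;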
-- 
Mike Holden

http://www.by-ang.com - the place to shop for all manner of hand crafted
items, including Jewellery, Greetings Cards and Gifts






Re: [Bacula-users] Some questions about pools and volumes

2008-11-27 Thread Kevin Keane
Personal Técnico wrote:
 Hi!

 I have configured these 2 pools for backing up a server:

 Pool {
 Name = Incremental
 Label Format = Server-Incr
 Pool Type = Backup
 Recycle = yes  
 AutoPrune = yes  
 Storage = BackupRAID5
 Volume Use Duration = 7 days
 Volume Retention = 7 days
 Maximum Volume Jobs = 7
 Maximum Volumes = 2
 Recycle Oldest Volume = yes
 Maximum Volume Bytes = 200GB
 }

 Pool {
 Name = Full
 Label Format = Server-Full
 Pool Type = Backup
 Recycle = yes  
 AutoPrune = yes
 Storage = BackupRAID5
 Volume Use Duration = 1d
 Volume Retention = 2 months
 Maximum Volume Jobs = 1
 Maximum Volumes = 1
 Recycle Oldest Volume = yes
 Maximum Volume Bytes = 200GB
 }

 With this configuration, I have some questions with no answers:

1. With this configuration, I get 14 days of Incremental backups (12
   Incremental + 2 Full -no prior Incremental-) going to the
   Incremental pool and just 1 Full backup going to the Full pool. Is
   that correct, or am I wrong?
2. If I change Label Format in the Incremental pool to the new value
   ${Pool}, what will Bacula do at the moment of generating the
   second volume? If the pool name gets variable expansion, the first
   volume will be labeled Incremental (without a jobid number), but
   what will be the name of the second volume?
3. In the Full pool, where I have defined only 1 maximum volume, when
   the second backup is going to execute (1 month after the first
   one), Bacula will prune/purge the volume because it was marked as
   Used. Is there any way to avoid a full prune/purge and only delete
   some records (not the whole volume)? Maybe Volume Retention is
   the solution?


 Thanks for all!!
You probably don't want to use Maximum Volumes or Maximum Volume Bytes 
in your pool definition. Otherwise, you may get an error once the pool 
is full.

Instead, just rely on the retention period. Also, since you are backing 
up to a hard disk, I would go with Maximum Volume Jobs = 1 and create 
more than one file for the incremental backup - but that's just my 
preference; you don't have to do it that way. There also was an issue 
with the volume use duration; somebody else may remember the details, 
but I believe you would have to add one day to it.
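
(A sketch of the Incremental pool along those lines - retention kept from
your config, the limits dropped; adjust to taste:)

Pool {
  Name = Incremental
  Label Format = "Server-Incr"
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Storage = BackupRAID5
  Volume Use Duration = 7 days
  Volume Retention = 7 days
  Maximum Volume Jobs = 1   # one job per file volume
  # no Maximum Volumes / Maximum Volume Bytes: let retention drive recycling
}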

There is an interaction with the schedule and the job definitions. If 
you don't explicitly specify otherwise (in either the job or the 
schedule), it is quite possible that the first backup will be upgraded 
to a full backup and still go to the incremental pool. That is one way 
how the incremental pool might end up with more jobs than you'd expect.

If you change the label format to use variable expansion, Bacula will 
not append the media ID number any more. You need to find another way to 
count volumes. I would NOT suggest relying on the jobid or anything 
job-related, because bacula will later recycle the volume (file), and at 
that point the file name would no longer have any relationship to the 
content.

You are right, in your full pool, bacula is going to overwrite the first 
file. This is because of the Maximum Volumes setting, and the main 
reason I recommended removing it.

You CANNOT remove individual jobs from a volume; you can only delete
volumes as a whole. This is because bacula is, philosophically, a tape
backup application, and it treats files as tapes. And in any case, it
would be difficult to prune data from the middle of a file.

-- 
Kevin Keane
Owner
The NetTech
Turn your NetWORRY into a NetWORK!

Office: 866-642-7116
http://www.4nettech.com





Re: [Bacula-users] Problem with tape capacity

2008-11-27 Thread Alan Brown
On Wed, 26 Nov 2008, Willians Vivanco wrote:

 Thanks for the information, but... is there some specific action I can take
 to make Bacula recognize more than 64KB on my tapes? I'm really confused and
 my work is completely stopped because of this.

Bacula just talks to the OS generic tape interface.

What are the permissions of /dev/nst* and what userid are the programs
running as?

If these are correct:

What happens if you try to write to the tape using tar and related
programs?
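
(A quick sketch of those checks, assuming Linux and a scratch tape loaded
in drive 0:)

ls -l /dev/nst*                 # device permissions
ps -o user= -C bacula-sd        # which user the storage daemon runs as
tar -cvf /dev/nst0 /etc/hosts   # simple write test (scratch tape only!)
mt -f /dev/nst0 status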




 Regards

 Alan Brown wrote:
  On Tue, 25 Nov 2008, Willians Vivanco wrote:
 
 
   Hi, I'm trying to make Bacula work on Debian/Lenny with an HP
   StorageWorks tape library MSL6000 and HP LTO3 Ultrium tapes of 800GB
   capacity each...

  Nit: LTO3 is 400GB native capacity.

  Any more than that is opportunistic, based on compressibility of data.
 
 
 


 ---
Red Telematica de Salud - Cuba
 CNICM - Infomed


-- 
Sending anything by unencrypted email is the internet equivalent of
writing on the back of a post card that anyone might be able to see.




[Bacula-users] FW: Re: Large maildir backup

2008-11-27 Thread Boris Kunstleben onOffice Software GmbH
Hi,

Now I have all the necessary information (bacula-director version 1.38.11-8):

@Jesper (the timed tar)
server:~# time tar cf /dev/null /home/mailer4/
tar: Removing leading `/' from member names

real    96m25.390s
user    0m18.644s
sys     0m57.260s

@Arno
I'm not that good at MySQL, but I already tuned MySQL and set some new
indexes; see below:
mysql> show index from File;
+-------+------------+----------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name       | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+----------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| File  |          0 | PRIMARY        |            1 | FileId      | A         |    15544061 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId          |            1 | JobId       | A         |          74 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | PathId         |            1 | PathId      | A         |     1195697 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | FilenameId     |            1 | FilenameId  | A         |     5181353 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | FilenameId     |            2 | PathId      | A         |     7772030 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId_2        |            1 | JobId       | A         |          74 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId_2        |            2 | PathId      | A         |     1943007 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | JobId_2        |            3 | FilenameId  | A         |    15544061 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | file_jobid_idx |            1 | JobId       | A         |          74 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | file_jpf_idx   |            1 | JobId       | A         |          74 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | file_jpf_idx   |            2 | FilenameId  | A         |     7772030 |     NULL | NULL   |      | BTREE      |         |
| File  |          1 | file_jpf_idx   |            3 | PathId      | A         |    15544061 |     NULL | NULL   |      | BTREE      |         |
+-------+------------+----------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
12 rows in set (0.00 sec)


mysql> show index from Path;
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Path  |          0 | PRIMARY  |            1 | PathId      | A         |     1262216 |     NULL | NULL   |      | BTREE      |         |
| Path  |          1 | Path     |            1 | Path        | A         |        NULL |      255 | NULL   |      | BTREE      |         |
+-------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
2 rows in set (0.00 sec)


mysql> show index from Filename;
+----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table    | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Filename |          0 | PRIMARY  |            1 | FilenameId  | A         |     4823581 |     NULL | NULL   |      | BTREE      |         |
| Filename |          1 | Name     |            1 | Name        | A         |        NULL |      255 | NULL   |      | BTREE      |         |
+----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
2 rows in set (0.00 sec)


mysql> show create table File;

Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Boris Kunstleben onOffice Software GmbH
Hi Alan,

any idea if there is a better filesystem? I'm using ext3 on the clients and
XFS on the director.

Kind Regards Boris Kunstleben



-- 
--
onOffice Software GmbH
Feldstr. 40
52070 Aachen
Tel. +49 (0)241 44686-0
Fax. +49 (0)241 44686-250
Email: [EMAIL PROTECTED]
Web: www.onOffice.com
--
Registergericht: Amtsgericht Aachen, HRB 12123
Geschäftsleitung: Stefan Mantl, Torsten Kämper, Stefan Becker
--

- Original Message -
From: Alan Brown [EMAIL PROTECTED]
Sent: Thursday, 27 November 2008 12:46:24
To: Boris Kunstleben onOffice Software GmbH [EMAIL PROTECTED]
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Large maildir backup

On Thu, 27 Nov 2008, Boris Kunstleben onOffice Software GmbH wrote:

 I have been doing exactly that since last Thursday.
 I have about 1.6TB in maildirs and a huge number of small files. I have to
 say it is awfully slow: backing up a directory with about 190GB of maildirs
 took "Elapsed time: 1 day 14 hours 49 mins 34 secs".
 On the other hand, a server with documents and images (about 700GB) took
 much less time.
 All the servers are virtual environments (Virtuozzo).

 Any ideas would be appreciated.

I have filesystems here of similar size with wildly varying file sizes.

The 1TB partition (80% full) with 8000 files in it backs up quickly.

The 1TB partition (50% full) with 7 million files in it takes 5 times
longer.

There is a fixed filesystem time cost for opening each file, so the
smaller the files, the lower the average throughput - having said that,
most filesystems get SLOW when there are thousands of files in one
directory.

AB





Re: [Bacula-users] Large maildir backup

2008-11-27 Thread James Cort
Arno Lehmann wrote:
 - Tune your catalog database for faster inserts. That can mean moving 
 it to a faster machine, assigning more memory for it, or dropping some 
 indexes (during inserts). If you're not yet using batch inserts, try 
 to recompile Bacula with batch-inserts enabled.

Is that where it inserts everything to a temporary table then copies the
entire table into the live one?


James.
-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Boris Kunstleben onOffice Software GmbH
Hi,

I have been doing exactly that since last Thursday.
I have about 1.6TB in maildirs and a huge number of small files. I have to
say it is awfully slow: backing up a directory with about 190GB of maildirs
took "Elapsed time: 1 day 14 hours 49 mins 34 secs".
On the other hand, a server with documents and images (about 700GB) took
much less time.
All the servers are virtual environments (Virtuozzo).

Any ideas would be appreciated.

Boris




- Original Message -
From: Silver Salonen [EMAIL PROTECTED]
Sent: Thursday, 27 November 2008 10:15:48
To: bacula-users bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Large maildir backup

On Thursday 27 November 2008 11:07:41 James Cort wrote:
 Silver Salonen wrote:
  On Thursday 27 November 2008 09:50:14 Proskurin Kirill wrote:
 Hello all!

 Soon I will deploy a large email server - it will use maildirs and will
 be about 1TB of mail with really many small files.

 Are there any hints for backing this up with Bacula?

 I think Bacula is quite good for backing up maildirs as they consist of
 separate files, one per e-mail message. I don't think small files are a
 problem.
 
 I don't think they're a problem either and I also backup a maildir-based
 mail server.
 
 However, one thing you may want to be aware of - unless you take
 specific steps to avoid it, the maildirs on tape won't necessarily be in
 a consistent state.  Obviously this won't affect your IMAP server - but
 it does mean that when you restore, metadata like whether or not emails
 have been read or replied to and recently received/sent email won't be a
 perfect snapshot of how the mailserver looked at any given point in time.

And when you restore from many incrementals in a row, you end up seeing many
duplicate messages that were deleted or moved between those incrementals.

It'll be better in Bacula 3.0, I guess :)

--
Silver






Re: [Bacula-users] FW: Re: Large maildir backup

2008-11-27 Thread Kevin Keane
You are using a very old version of bacula! Maybe you can find a version 
for your Linux distribution that is more current? I believe 2.4.3 is the 
current one.

Boris Kunstleben onOffice Software GmbH wrote:
 Hi,

 Now I have all the necessary information (bacula-director version 1.38.11-8):

-- 
Kevin Keane
Owner
The NetTech
Turn your NetWORRY into a NetWORK!

Office: 866-642-7116
http://www.4nettech.com





Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Mike Holden
Boris Kunstleben onOffice Software GmbH wrote:
 any idea if there is a better filesystem? I'm using ext3 on the clients
 and XFS on the director.

ext3 is possibly not a good fs for a maildir. I can't offer any personal
experience, but I was searching Google for something else regarding
filesystem performance for MythTV, which is at the opposite end of the
spectrum (a small number of large, sequentially written files, with
several files in use at the same time: record 3 programs, watch another,
all at the same time, then delete a large file after watching it). One
likely page I saw was http://ubuntuforums.org/showthread.php?t=398332,
which heads towards xfs/jfs for your arena.
-- 
Mike Holden

http://www.by-ang.com - the place to shop for all manner of hand crafted
items, including Jewellery, Greetings Cards and Gifts





Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Boris Kunstleben
Hi,

I have been doing exactly that since last Thursday.
I have about 1.6TB in maildirs and a huge number of small files. I have to
say it is awfully slow: backing up a directory with about 190GB of maildirs
took "Elapsed time: 1 day 14 hours 49 mins 34 secs".
On the other hand, a server with documents and images (about 700GB) took
much less time.
All the servers are virtual environments (Virtuozzo).

Any ideas would be appreciated.

Kind regards Boris Kunstleben


Silver Salonen wrote:
 On Thursday 27 November 2008 11:07:41 James Cort wrote:
   
 Silver Salonen wrote:
 
 On Thursday 27 November 2008 09:50:14 Proskurin Kirill wrote:
   
 Hello all!

 Soon I will deploy a large email server - it will use maildirs and will
 be about 1TB of mail with really many small files.

 Are there any hints for backing this up with Bacula?

 I think Bacula is quite good for backing up maildirs as they consist of
 separate files, one per e-mail message. I don't think small files are a
 problem.
   
 I don't think they're a problem either and I also backup a maildir-based
 mail server.

 However, one thing you may want to be aware of - unless you take
 specific steps to avoid it, the maildirs on tape won't necessarily be in
 a consistent state.  Obviously this won't affect your IMAP server - but
 it does mean that when you restore, metadata like whether or not emails
 have been read or replied to and recently received/sent email won't be a
 perfect snapshot of how the mailserver looked at any given point in time.
 

 And when you restore from many incrementals in a row, you end up seeing many
 duplicate messages that were deleted or moved between those incrementals.

 It'll be better in Bacula 3.0, I guess :)

 --
 Silver







Re: [Bacula-users] FW: Re: Large maildir backup

2008-11-27 Thread James Cort
Kevin Keane wrote:
 You are using a very old version of bacula! Maybe you can find a version 
 for your Linux distribution that is more current? I believe 2.4.3 is the 
 current one.
 
 Boris Kunstleben onOffice Software GmbH wrote:
 Hi,

 Now I have all the necessary information (bacula-director version 1.38.11-8):

That version number is, by an amazing coincidence, the exact patchlevel
assigned by Debian to their build in current Debian Stable.  2.4.3 is
available in Debian Backports and seems pretty stable to me.


-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] bacula hang waiting for storage

2008-11-27 Thread Pasi Kärkkäinen
On Thu, Nov 27, 2008 at 08:14:45AM +0100, Arno Lehmann wrote:
 Hi,
 
 26.11.2008 21:22, Bob Hetzel wrote:
  I've got bacula currently in a hung state with the following interesting 
  info.  When I run a status storage produces the following...
 
 Is your Bacula still stuck? If so, and you have gdb installed, and a 
 Bacula with debug symbols, now might be a good time to see what it's 
 doing...
 
  Automatically selected Storage: Dell-PV136T
  Connecting to Storage daemon Dell-PV136T at gyrus:9103
  
  gyrus-sd Version: 2.4.3 (10 October 2008) i686-pc-linux-gnu suse 10.2
  Daemon started 25-Nov-08 19:20, 59 Jobs run since started.
Heap: heap=3,756,032 smbytes=3,519,564 max_bytes=3,684,397 bufs=555 
  max_bufs=557
  Sizes: boffset_t=8 size_t=4 int32_t=4 int64_t=8
  
  Running Jobs:
  Writing: Incremental Backup job axh93-gx270 JobId=45634 Volume=LTO261L2
   pool=Default device=IBMLTO2-3 (/dev/nst2)
   spooling=0 despooling=0 despool_wait=1
   Files=78 Bytes=21,123,239 Bytes/sec=2,337
   FDReadSeqNo=970 in_msg=750 out_msg=9 fd=20
  Writing: Incremental Backup job bxn4-gx280 JobId=45641 Volume=LTO261L2
   pool=Default device=IBMLTO2-3 (/dev/nst2)
   spooling=0 despooling=0 despool_wait=1
   Files=155 Bytes=2,925,138,595 Bytes/sec=323,648
   FDReadSeqNo=45,916 in_msg=45480 out_msg=9 fd=35
  Writing: Incremental Backup job cdking JobId=45646 Volume=LTO261L2
   pool=Default device=IBMLTO2-3 (/dev/nst2)
   spooling=0 despooling=0 despool_wait=1
   Files=88 Bytes=11,846,912 Bytes/sec=1,310
   FDReadSeqNo=920 in_msg=672 out_msg=9 fd=23
  Writing: Incremental Backup job ceg3-d810 JobId=45648 Volume=LTO253L2
   pool=Default device=IBMLTO2-2 (/dev/nst1)
   spooling=0 despooling=1 despool_wait=0
   Files=35 Bytes=1,391,695,993 Bytes/sec=176,588
   FDReadSeqNo=21,542 in_msg=21439 out_msg=9 fd=36
  Writing: Incremental Backup job clifford3 JobId=45651 Volume=LTO261L2
   pool=Default device=IBMLTO2-3 (/dev/nst2)
   spooling=0 despooling=0 despool_wait=0
   Files=0 Bytes=0 Bytes/sec=0
   FDReadSeqNo=6 in_msg=6 out_msg=4 fd=32
  Writing: Incremental Backup job cxj57-gx270 JobId=45657 Volume=LTO261L2
   pool=Default device=IBMLTO2-3 (/dev/nst2)
   spooling=0 despooling=0 despool_wait=0
   Files=0 Bytes=0 Bytes/sec=0
   FDReadSeqNo=6 in_msg=6 out_msg=4 fd=33
  Writing: Incremental Backup job dxa2-d630 JobId=45665 Volume=LTO261L2
   pool=Default device=IBMLTO2-3 (/dev/nst2)
   spooling=0 despooling=0 despool_wait=0
   Files=0 Bytes=0 Bytes/sec=0
   FDReadSeqNo=6 in_msg=6 out_msg=4 fd=17
  Writing: Incremental Backup job educationdean JobId=45667 Volume=
   pool=Default device=IBMLTO2-1 (/dev/nst0)
   spooling=0 despooling=0 despool_wait=0
   Files=0 Bytes=0 Bytes/sec=0
   FDSocket closed
  
  
  Jobs waiting to reserve a drive:
  3605 JobId=45667 wants free drive but device IBMLTO2-1 (/dev/nst0) 
  is busy.
  
  [terminated jobs info snipped out]
  Device status:
  Autochanger Dell-PV136T with devices:
  IBMLTO2-1 (/dev/nst0)
  IBMLTO2-2 (/dev/nst1)
  IBMLTO2-3 (/dev/nst2)
  Device IBMLTO2-1 (/dev/nst0) is mounted with:
   Volume:  LTO342L2
   Pool:Default
   Media type:  LTO-2
   Slot 32 is loaded in drive 0.
   Total Bytes=11,991,168,000 Blocks=185,874 Bytes/block=64,512
   Positioned at File=14 Block=0
  Device IBMLTO2-2 (/dev/nst1) is mounted with:
   Volume:  LTO253L2
   Pool:Default
   Media type:  LTO-2
   Slot 48 is loaded in drive 1.
   Total Bytes=2,193,408 Blocks=33 Bytes/block=66,466
   Positioned at File=1 Block=0
  Device IBMLTO2-3 (/dev/nst2) is not open.
   Device is being initialized.
   Drive 2 status unknown.
  
  
  Used Volume status:
  [nothing further and the bconsole program hangs here]
 
 That alone would be a bug, I guess...
 
  Note that the last Writing line has no volume listed.  The odd thing is 
  that there actually is a tape in IBMLTO2-1.  There's no tape in drive 
  IBMLTO2-3.  The pool apparently needs another appendable volume and 
  there are several available in the Scratch pool but bacula is stuck.
  
  I tried to mount a volume into the empty drive and got back the following...
  *mount slot=61 drive=2
  Automatically selected Storage: Dell-PV136T
  3001 Device IBMLTO2-3 (/dev/nst2) is doing acquire.
  
  Does anybody have any idea what to do to further troubleshoot this?  I 
  have had some other instances of bacula getting hung up and so I have 
  already previously applied the 2.4.3-orphaned-jobs.patch
 
 Sounds like it's worth a bug report - especially if you can re-create 
 the problem. I cc'ed this to Eric, who - I believe - has been working 
 on this sort of problems recently.
 

I have also seen this lately, but that was with Bacula 2.5.18.

I could make that hang happen multiple times, but I'm not totally sure what
caused it.

-- Pasi


Re: [Bacula-users] Problem with tape capacity

2008-11-27 Thread Alan Brown
On Thu, 27 Nov 2008, Willians Vivanco wrote:

  What are the permissions of /dev/nst* and what userid are the programs
  running as?
 
 The normal permissions root:tape

Bacula-sd and btape usually run as bacula:tape - as a test set the
/dev/nst devices world writeable and see if the error persists.

  What happens if you try to write to the tape using tar and related
  programs?

 The same error happens...

This indicates a hardware problem.

What about "mt -f /dev/nst0 status"?

Is your SCSI chain terminated properly? What does the library interface
say?

I have an MSL6000 and it works perfectly under Linux.

AB




Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Alan Brown
On Thu, 27 Nov 2008, Boris Kunstleben onOffice Software GmbH wrote:

 any idea if there is a better filesystem? I'm using ext3 on the clients
 and XFS on the director.

I believe XFS copes fine with overstuffed directories.

Ext3 will perform a lot better if you use tune2fs and enable the following
features:

 dir_index
  Use  hashed  b-trees  to  speed  up lookups in large
  directories.

 filetype
  Store file type information in directory entries.

 sparse_super
  Limit the number of backup superblocks to save space
  on large filesystems.


man tune2fs - note that you MUST run e2fsck before remounting the
filesystems.
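
(A sketch, assuming an unmounted ext3 filesystem on /dev/sdb1 - the device
and mount point are examples:)

umount /var/mail
tune2fs -O dir_index,filetype,sparse_super /dev/sdb1
e2fsck -fD /dev/sdb1     # -f: force check, -D: optimize/rebuild directories
mount /var/mail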








Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Silver Salonen
On Thursday 27 November 2008 16:29:50 you wrote:
 Silver Salonen wrote:
  And when you have many incrementals in a row while restoring, you end up 
  seeing many duplicate messages, that have been deleted or moved during 
these 
  incrementals.

 
  For such cases use snapshots - freeze your FS, mount the snapshot, make
  the backup, unmount.
  LVM is great for that :-)

Well, no, I meant that these files reappear because Bacula doesn't keep
track of deleted files, i.e. if a file in the full backup gets deleted
before the next backup, a restore based on both of these backups will make
the deleted file reappear.
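
(For reference, the snapshot approach quoted above might look like this
with LVM - a sketch only, assuming volume group vg0 and logical volume
mail; as noted, it gives a consistent image but does not solve the
deleted-files issue:)

#!/bin/sh
# e.g. from a ClientRunBeforeJob script:
lvcreate --snapshot --size 5G --name mailsnap /dev/vg0/mail
mkdir -p /mnt/mailsnap
mount -o ro /dev/vg0/mailsnap /mnt/mailsnap
# ... let the job back up /mnt/mailsnap instead of the live maildirs ...
# and from a ClientRunAfterJob script:
umount /mnt/mailsnap
lvremove -f /dev/vg0/mailsnap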

--
Silver



Re: [Bacula-users] encrypting 5gb bacula backup files?

2008-11-27 Thread Kevin Keane
What kind of security are you looking for? Generally, if you only have
minimal security needs and speed is your overriding concern, the fastest
encryption is probably XOR. Otherwise, I would probably go with PGP.
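
(A sketch of the PGP route using GnuPG with symmetric AES-256 - the file
name is an example; Bacula's own File Daemon data encryption (PKI) would be
another option:)

gpg --symmetric --cipher-algo AES256 Volume-0001
# produces Volume-0001.gpg (prompts for a passphrase; for unattended use
# see --batch with --passphrase-fd)
# decrypt later with:
# gpg --output Volume-0001 --decrypt Volume-0001.gpg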

Lukasz Szybalski wrote:
 Hello,

 I was wondering if somebody could suggest a fast encryption algorithm
 that I could use to encrypt 5GB files.
 What I'm trying to do is create backup files on Amazon S3 servers,
 which allow you to upload up to 5GB per file. I can tell bacula to
 use 5GB files, which I would like to encrypt.

 I'm looking for an algorithm or a tool that would allow me to
 encrypt/decrypt the files at a reasonable speed and with good security.

 Is anybody doing that with bacula right now? Do you tell bacula to
 create 5GB files and then compress them? Or...


 Let me know,
 Thanks,
 Lucas


 --
 Turbogears2 Manual
 http://lucasmanual.com/mywiki/TurboGears2
 Bazaar and Launchpad
 http://lucasmanual.com/mywiki/Bazaar



   


-- 
Kevin Keane
Owner
The NetTech
Turn your NetWORRY into a NetWORK!

Office: 866-642-7116
http://www.4nettech.com





Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Boris Kunstleben onOffice Software GmbH
Hi Mike,

thanks for the advice. That was my thought too. I'm already designing a new
mail architecture; I'll change the filesystem then.

Kind regards
Boris Kunstleben




- Original Message -
From: Mike Holden [EMAIL PROTECTED]
Sent: Thursday, 27 November 2008 14:43:46
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Large maildir backup

Boris Kunstleben onOffice Software GmbH wrote:
 any idea if there is a better filesystem? I'm using ext3 on the clients
 and XFS on the director.

ext3 is possibly not a good fs for a maildir. I can't offer any personal
experience, but I was searching Google for something else regarding
filesystem performance for MythTV, which is at the opposite end of the
spectrum (a small number of large, sequentially written files, with
several files in use at the same time: record 3 programs, watch another,
all at the same time, then delete a large file after watching it). One
likely page I saw was http://ubuntuforums.org/showthread.php?t=398332,
which heads towards xfs/jfs for your arena.
-- 
Mike Holden

http://www.by-ang.com - the place to shop for all manner of hand crafted
items, including Jewellery, Greetings Cards and Gifts





Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Boris Kunstleben onOffice Software GmbH
Hi Alan,

I did that already (except the superblocks). I think it's ext3 itself in
combination with the virtual machines.

THX so far
Boris Kunstleben



- Ursprüngliche Nachricht -
Von: Alan Brown [EMAIL PROTECTED]
Gesendet: Donnerstag, 27. November 2008 15:28:49
An: Boris Kunstleben onOffice Software GmbH [EMAIL PROTECTED]
Cc: bacula-users@lists.sourceforge.net
Betreff: Re: [Bacula-users] Large maildir backup

On Thu, 27 Nov 2008, Boris Kunstleben onOffice Software GmbH wrote:

 any idea if there is a better filesystem? I'm using ext3 on the clients
 and xfs on the director

I believe XFS copes fine with overstuffed directories.

Ext3 will perform a lot better if you use tune2fs and enable the following
features:

 dir_index
  Use  hashed  b-trees  to  speed  up lookups in large
  directories.

 filetype
  Store file type information in directory entries.

 sparse_super
  Limit the number of backup superblocks to save space
  on large filesystems.


man tune2fs - note that you MUST run e2fsck before remounting the
filesystems.
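
A minimal sketch of the whole sequence (assuming the filesystem lives on
/dev/sdb1, a hypothetical device, and can be unmounted briefly):

  umount /dev/sdb1
  tune2fs -O dir_index,filetype,sparse_super /dev/sdb1
  e2fsck -fD /dev/sdb1   # -f forces the check, -D rebuilds/optimizes existing directories
  mount /dev/sdb1

Without the e2fsck -D pass, dir_index would only apply to directories
created afterwards.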










[Bacula-users] Backup of in one directory with 800.000 files

2008-11-27 Thread Tobias Bartel
Hello,

I am tasked to set up daily full backups of our entire fax communication,
and the faxes are all stored in one single directory ;). There are about
800,000 files in that directory, which makes accessing it extremely
slow. The target device is an LTO3 tape drive with an 8-slot changer.

With my current configuration Bacula needs ~48h to make a complete
backup, which kinda conflicts with the requirement of doing dailies.

Does anybody have any suggestions on how I could speed things up?

THX in advance - Tobi


PS: I already talked to my boss and our developers and they will change
the system so the faxes get stored in subdirectories. But changing that
doesn't have a very high priority and got scheduled for next summer.




Re: [Bacula-users] Backup of in one directory with 800.000 files

2008-11-27 Thread James Cort
Tobias Bartel wrote:
 Hello,
 
 I am tasked to set up daily full backups of our entire fax communication,
 and the faxes are all stored in one single directory ;). There are about
 800,000 files in that directory, which makes accessing it extremely
 slow. The target device is an LTO3 tape drive with an 8-slot changer.
 
 With my current configuration Bacula needs ~48h to make a complete
 backup, which kinda conflicts with the requirement of doing dailies.
 
 Does anybody have any suggestions on how I could speed things up?

Even with 800,000 files, that sounds very slow.  How much data is
involved, how is it stored and how fast is your database server?


-- 
James Cort

IT Manager
U4EA Technologies Ltd.

-- 
U4EA Technologies
http://www.u4eatech.com




Re: [Bacula-users] bacula hang waiting for storage

2008-11-27 Thread Arno Lehmann
Hi,

27.11.2008 15:10, Pasi Kärkkäinen wrote:
 On Thu, Nov 27, 2008 at 08:14:45AM +0100, Arno Lehmann wrote:
 Hi,

 26.11.2008 21:22, Bob Hetzel wrote:
 I've got bacula currently in a hung state with the following interesting 
 info.  When I run a status storage produces the following...
 Is your Bacula still stuck? If so, and you have gdb installed, and a 
 Bacula with debug symbols, now might be a good time to see what it's 
 doing...
...
 I have also seen this lately.. but that was with Bacula 2.5.18.
 
 I could make that hang happen multiple times, but I'm not totally sure what
 caused that..

Well, if you can recreate the issue it's worth the effort building 
Bacula with debug information so you get usable backtraces.

If the problem happens again, you can use gdb to create a backtrace, 
showing the developers more details about what happens and thus 
enabling them to fix the issue.

I would recommend that now.
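
For example (a sketch; the binary path is an assumption, adjust to your
installation):

  gdb /usr/sbin/bacula-sd $(pidof bacula-sd)
  (gdb) thread apply all bt

This attaches to the running storage daemon and prints a backtrace of
every thread, which is usually what the developers want to see.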

Arno

-- 
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück
www.its-lehmann.de



Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Kjetil Torgrim Homme
Jesper Krogh [EMAIL PROTECTED] writes:

 Can you give us the time for doing a tar to /dev/null of the fileset.

 time tar cf /dev/null /path/to/maildir

 Then we have a feeling about the actual read time for the files on
 the filesystem.

if you're using GNU tar, it will *not* read the files if you dump to
/dev/null.  it will simply stat the files as necessary (you can check
this with strace if you like.)

use /dev/zero to avoid this optimisation.
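
for example (a sketch, the path is a placeholder):

  time tar cf /dev/zero /path/to/maildir

this forces tar to actually read every file, so the elapsed time
reflects the real read performance of the filesystem.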

-- 
regards,  | Redpill  _
Kjetil T. Homme   | Linpro  (_)




Re: [Bacula-users] Backup of in one directory with 800.000 files

2008-11-27 Thread Arno Lehmann
Hi,

27.11.2008 17:10, Tobias Bartel wrote:
 Hello,
 
 I am tasked to set up daily full backups of our entire fax communication,
 and the faxes are all stored in one single directory ;). There are about
 800,000 files in that directory, which makes accessing it extremely
 slow. The target device is an LTO3 tape drive with an 8-slot changer.
 
 With my current configuration Bacula needs ~48h to make a complete
 backup, which kinda conflicts with the requirement of doing dailies.
 
 Does anybody have any suggestions on how I could speed things up?

A more or less identical question is discussed in the thread "Large 
maildir backup" right now.

Without further details: database tuning and using a filesystem that 
better handles many files in a directory are the main ideas right now.

Regarding file systems, xfs works here quite well and is said to be 
suitable for large directories.

Arno

 THX in advance - Tobi
 
 
 PS: I already talked to my boss and our developers and they will change
 the system so the faxes get stored in subdirectories. But changing that
 doesn't have a very high priority and got scheduled for next summer.

Then tell them that daily full backups of the fax system can be 
scheduled for next autumn ;-)

Arno

-- 
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück
www.its-lehmann.de



Re: [Bacula-users] FW: Re: Large maildir backup

2008-11-27 Thread Arno Lehmann
Hi,

27.11.2008 14:15, Boris Kunstleben onOffice Software GmbH wrote:
...
 @Arno
 I'm not that good with MySQL, but I already tuned MySQL and set some new 
 indexes, see below:
 mysql> show index from File;

Actually, my suggestion was to *remove* indexes; updating an index 
when adding new data needs additional time. If you update fewer 
indexes, you save that time.

So, remove some of those... but do this carefully, as many other 
operations will really suffer if the required indexes are not available.

 +-------+------------+------------+--------------+-------------+-------------+------------+
 | Table | Non_unique | Key_name   | Seq_in_index | Column_name | Cardinality | Index_type |
 +-------+------------+------------+--------------+-------------+-------------+------------+
 | File  |          0 | PRIMARY    |            1 | FileId      |    15544061 | BTREE      |
 | File  |          1 | JobId      |            1 | JobId       |          74 | BTREE      |
 | File  |          1 | PathId     |            1 | PathId      |     1195697 | BTREE      |
 | File  |          1 | FilenameId |            1 | FilenameId  |     5181353 | BTREE      |
 | File  |          1 | FilenameId |            2 | PathId      |     7772030 | BTREE      |
 | File  |          1 | JobId_2    |            1 | JobId       |          74 | BTREE      |
 | File  |          1 | JobId_2    |            2 | PathId      |     1943007 | BTREE      |
 | File  |          1 | JobId_2    |            3 | FilenameId  |    15544061 | BTREE      |
 +-------+------------+------------+--------------+-------------+-------------+------------+
 (Collation, Sub_part, Packed, Null and Comment columns omitted: all rows show A, NULL, NULL, empty, empty.)

The following two indexes seem to be duplicates of pre-existing ones.
...
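
For illustration, dropping (and, if needed, restoring) one of those in
MySQL would look like this (a sketch; JobId_2 is just the most likely
candidate from the listing above, check your typical queries with
EXPLAIN before dropping anything):

  ALTER TABLE File DROP INDEX JobId_2;
  ALTER TABLE File ADD INDEX JobId_2 (JobId, PathId, FilenameId);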

Arno

-- 
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück
www.its-lehmann.de



[Bacula-users] Improving catalog performance

2008-11-27 Thread Kelly, Brian
Arno,

I've been studying different options for improving my database 
performance after reading your recommendations on improving catalog 
performance, where you wrote:

There are two rather simple solutions:
- Don't keep file information for this job in the catalog. This makes 
restoring single mails difficult.
- Tune your catalog database for faster inserts. That can mean moving 
it to a faster machine, assigning more memory for it, or dropping some 
indexes (during inserts). If you're not yet using batch inserts, try 
to recompile Bacula with batch-inserts enabled.

Could you elaborate on how to:

1. Not keep file information for particular jobs in the catalog. 
2. How does one drop an index during inserts?

Thanks,

Brian Kelly





Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Kjetil Torgrim Homme
Alan Brown [EMAIL PROTECTED] writes:

 Ext3 will perform a lot better if you use tune2fs and enable the
 following features:

  dir_index
   Use  hashed  b-trees  to  speed  up lookups in large
   directories.

this may be good for Maildir, but with Cyrus IMAPD, which uses short
filenames like 1432., 1433., this *reduced* performance by a few
percent in one test we did.

BTW, the Cyrus filenames are much more friendly for Bacula, since they
keep the size of the Filename table down.  with Maildir *every* e-mail
will have to get its own row in Filename.

-- 
regards,  | Redpill  _
Kjetil T. Homme   | Linpro  (_)




Re: [Bacula-users] Backup of in one directory with 800.000 files

2008-11-27 Thread Jesper Krogh
Tobias Bartel wrote:
 Hello,
 
 I am tasked to set up daily full backups of our entire fax communication,
 and the faxes are all stored in one single directory ;). There are about
 800,000 files in that directory, which makes accessing it extremely
 slow. The target device is an LTO3 tape drive with an 8-slot changer.

What's the average file size? It may be that you're simply hitting the 
filesystem's performance limits. Try measuring the speed with tar:

time tar cf /dev/null /path/to/files

Alternatively:

tar cf - /path/to/files | pv > /dev/null

pv is a pipe viewer that can measure throughput.

I've tried with some of the filesystems I have... they can go all the 
way down to 2-5MB/s when there is a huge number of really small files.

Then you get a rough feeling for the time spent just reading the 
files off disk. (If you spool data to disk before tape, you should 
add in some time for that.)

 With my current configuration Bacula needs ~48h to make a complete
 backup, which kinda conflicts with the requirement of doing dailies.
 
 Does anybody have any suggestions on how I could speed things up?

Try to find out where the bottleneck is in your system. It may be the 
catalog that's too slow, it may be that you should disable spooling.

 PS: I already talked to my boss and our developers and they will change
 the system so the faxes get stored in subdirectories. But changing that
 doesn't have a very high priority and got scheduled for next summer.

Based on the above investigations it may be that they should do it 
sooner rather than later.

Also read the thread about the large Maildir; it's basically the same 
issue.

-- 
Jesper



Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Daniel Betz
Hi!

I have the same problem with a large number of files on one filesystem 
(Maildir).
Now I have 2 concurrent jobs running and the backups take half the time.
I haven't tested 4 concurrent jobs yet... :-)
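
For anyone wanting to try the same, concurrency has to be allowed in the
matching resources; a minimal sketch (the resource name and value are
placeholders, and the same directive also exists in bacula-sd.conf and
bacula-fd.conf):

  # bacula-dir.conf
  Director {
    Name = backup-dir
    # ...
    Maximum Concurrent Jobs = 4
  }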


Greetings,

Daniel Betz 
Platform Engineer
___ 

domainfactory GmbH 
Oskar-Messter-Str. 33 
85737 Ismaning 
Germany 

Telefon:  +49 (0)89 / 55266-364 
Telefax:  +49 (0)89 / 55266-222 

E-Mail:   [EMAIL PROTECTED] 
Internet: www.df.eu 



 -Ursprüngliche Nachricht-
 Von: Boris Kunstleben onOffice Software GmbH
 [mailto:[EMAIL PROTECTED]
 Gesendet: Donnerstag, 27. November 2008 15:42
 An: bacula-users@lists.sourceforge.net
 Betreff: Re: [Bacula-users] Large maildir backup
 
 Hi Alan,
 
 I did that already (except the superblocks). I think it's ext3 itself
 in combination with the virtual machines.
 
 THX so far
 Boris Kunstleben
 
 
 --
 ---
 ---
 onOffice Software GmbH
 Feldstr. 40
 52070 Aachen
 Tel. +49 (0)241 44686-0
 Fax. +49 (0)241 44686-250
 Email: [EMAIL PROTECTED]
 Web: www.onOffice.com
 ---
 ---
 Registergericht: Amtsgericht Aachen, HRB 12123
 Geschäftsleitung: Stefan Mantl, Torsten Kämper, Stefan Becker
 ---
 ---
 
 - Ursprüngliche Nachricht -
 Von: Alan Brown [EMAIL PROTECTED]
 Gesendet: Donnerstag, 27. November 2008 15:28:49
 An: Boris Kunstleben onOffice Software GmbH [EMAIL PROTECTED]
 Cc: bacula-users@lists.sourceforge.net
 Betreff: Re: [Bacula-users] Large maildir backup
 
 On Thu, 27 Nov 2008, Boris Kunstleben onOffice Software GmbH wrote:
 
  any idea if there is a better filesystem? I'm using ext3 on the
 clients
  and xfs on the director
 
 I believe XFS copes fine with overstuffed directories.
 
 Ext3 will perform a lot better if you use tune2fs and enable the
 following
 features:
 
  dir_index
   Use  hashed  b-trees  to  speed  up lookups in large
   directories.
 
  filetype
   Store file type information in directory entries.
 
  sparse_super
   Limit the number of backup superblocks to save space
   on large filesystems.
 
 
 man tune2fs - note that you MUST run e2fsck before remounting the
 filesystems.
 
 
 
 
 
 
 
 


[Bacula-users] Re-read of last block OK, but block numbers differ

2008-11-27 Thread Allan Black
Hi, all,

Can anyone help me analyse the problem here? On more than
one occasion, a DDS3 drive has produced this, when it reaches
the end of a tape:

23-Nov 22:03 gershwin-dir JobId 506: Using Device DDS3-0
23-Nov 22:06 gershwin-sd JobId 506: End of Volume MainCatalog-004 at 57:4963 
on device DDS3-0 (/dev/rmt/1cbn). Write of 64512 bytes got 0.
23-Nov 22:06 gershwin-sd JobId 506: Error: Re-read of last block OK, but block 
numbers differ. Last block=4963 Current block=4963.
23-Nov 22:06 gershwin-sd JobId 506: End of medium on Volume MainCatalog-004 
Bytes=28,148,843,520 Blocks=436,334 at 23-Nov-2008 22:06.

Having checked the SD source, I believe what is happening is
that the SD tried to write block 4963 to the tape, but got EOM
(End Of Medium) back from the drive. It then re-read the last
block successfully, but 

Because of the EOM, the SD expected that it had failed to write
block 4963. When it re-read the last block on the tape, it expected
the block number (which is in the block header) to be 4962, hence
the error message. However, I think that the drive successfully
wrote the block, but also returned a SCSI sense indicating that the
tape was now full.

I put that tape back in the drive and read it with bls -k, getting
this:

[...]
Block: 4961 size=64512
Block: 4962 size=64512
Block: 4963 size=64512
26-Nov 13:44 bls JobId 0: End of file 58 on device dds3 (/dev/rmt/1cbn), 
Volume MainCatalog-004

Which makes me think that block 4963 has indeed been written to the
tape.

However: I also think that that catalog backup has actually been
corrupted, because I think the SD would write the data from block
4963 at the beginning of the new tape, which means if I restored
that particular backup, I would find a block of data repeated.

[Since this was only a catalog backup, it was no problem to run it
again manually, so I do actually have a good catalog backup!]

I do not believe this is a Solaris or SCSI problem; DLT drives (and
autochangers) work perfectly on the same card (although on a different
segment of the bus). I suspect (yuck) I may have to experiment with
the configuration switches on the drive.

Has anyone come across a similar situation before and, if so, is
able to point me in the correct direction to debug it? In particular,
does anyone have any experience of how a SCSI drive is supposed to
behave at EOM?

Bacula 2.4.3
Solaris 10 x86
HP C1557A DDS-3 autoloader

Allan



Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Jesper Krogh
Kjetil Torgrim Homme wrote:
 Jesper Krogh [EMAIL PROTECTED] writes:
 
 Can you give us the time for doing a tar to /dev/null of the fileset.

 time tar cf /dev/null /path/to/maildir

 Then we have a feeling about the actual read time for the file of
 the filesystem.
 
 if you're using GNU tar, it will *not* read the files if you dump to
 /dev/null.  it will simply stat the files as necessary (you can check
 this with strace if you like.)

Thanks. The numbers actually surprised me when posted. I usually do a

tar cf - /path | pv > /dev/null

pv prints the speed of the stream, so I don't have to wait for it to 
complete to get a feeling.

 use /dev/zero to avoid this optimisation.

Thanks, nice tip.

-- 
Jesper



Re: [Bacula-users] Backup of in one directory with 800.000 files

2008-11-27 Thread Ryan Novosielski

Tobias Bartel wrote:
 Hello,
 
 Even with 800,000 files, that sounds very slow.  How much data is
 involved, how is it stored and how fast is your database server?
 
 It's about 70GB of data, stored on a Raid5 (3Ware controller).
 
 The database is a SQLite one, on the same machine but on a Software 
 Raid 1.
 
 The backup device is an LTO3 connected via SCSI
 
 OS is a Debian stable.
 
 
 I already thought about moving the Database to MySQL but there is
 already a MySQL Server on the same box, it is a slave for our MySQL
 master and used for hourly Backups of our database (Stop the
 replication, do the backups and start the replication again).
 I don't really like the idea of adding a DB to the Slave that isn't on
 the master, nor do i like the idea of hacking up some custom MySQL
 install that runs parallel caus that will cost me with every future
 update.
 
 To be honest, I didn't expect that SQLite could be the bottleneck; it
 just can't be that slow. What made me think it's the number of files
 is that when I do an ls in that directory it takes ~15min before I see
 any output.

My understanding is that you cannot expect decent performance out of
SQLite for Bacula for any production-level backup. I could be wrong
here, but I say forget about SQLite for anything other than a trial, and
definitely don't use it for a backup that is extra demanding.

You could use PostgreSQL if you wanted to avoid messing with the slave
server (though something tells me that's not a major worry, but I am not
sure about it), or just run MySQL on a different port, which I don't
think is all that hard (or, actually, use it in socket-only mode, which
is even easier and I think would suffice).

--
  _  _ _  _ ___  _  _  _
 |Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - Systems Programmer II
 |$| |__| |  | |__/ | \| _| |[EMAIL PROTECTED] - 973/972.0922 (2-0922)
 \__/ Univ. of Med. and Dent.|IST/AST - NJMS Medical Science Bldg - C630


Re: [Bacula-users] Improving catalog performance

2008-11-27 Thread Arno Lehmann
Hi,

27.11.2008 18:04, Kelly, Brian wrote:
 Arno,
 
 I've been studying different options for improving my database
 performance after reading your recommendations on improving catalog
 performance, where you wrote:
 
 There are two rather simple solutions:
 - Don't keep file information for this job in the catalog. This makes 
 restoring single mails difficult.
 - Tune your catalog database for faster inserts. That can mean moving 
 it to a faster machine, assigning more memory for it, or dropping some 
 indexes (during inserts). If you're not yet using batch inserts, try 
 to recompile Bacula with batch-inserts enabled.
 
 Could you elaborate on how to:
 
 1. Not keep file information for particular jobs in the catalog. 

In the Pool definition, use "Catalog Files = no". I guess it's a good 
idea to use a separate pool for this type of job :-)
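
Something like this in bacula-dir.conf (a sketch; the pool name is a
placeholder):

  Pool {
    Name = Mail-NoFileIndex   # hypothetical pool for jobs without file records
    Pool Type = Backup
    Catalog Files = no        # job and media records are kept, per-file records are not
  }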

 2. How does one drop an index during inserts?

A bit more difficult... the simple solution is to just drop the 
index from the database.

Alternatively, you could use a run-before-job script to drop the 
index, and a run-after-job script to re-create it (I never tried 
this). Often, creating an index from scratch is much faster than 
updating it row by row as table entries arrive...
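
A sketch of that idea for a MySQL catalog (untested, as I said; index
name and column list are assumptions based on a typical File table):

  RunBeforeJob = "/usr/bin/mysql bacula -e 'ALTER TABLE File DROP INDEX JobId_2'"
  RunAfterJob  = "/usr/bin/mysql bacula -e 'ALTER TABLE File ADD INDEX JobId_2 (JobId, PathId, FilenameId)'"

Both lines go into the Job resource; keep in mind that any other job
running at the same time will also see the index missing.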

Arno


 Thanks,
 
 Brian Kelly
 
 
 
 

-- 
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück
www.its-lehmann.de



Re: [Bacula-users] Backup of in one directory with 800.000 files

2008-11-27 Thread Jesper Krogh
Tobias Bartel wrote:
 Even with 800,000 files, that sounds very slow.  How much data is
 involved, how is it stored and how fast is your database server?
 
 It's about 70GB of data, stored on a Raid5 (3Ware controller).
 
 The database is a SQLite one, on the same machine but on a Software 
 Raid 1.
 
 The backup device is an LTO3 connected via SCSI
 
 OS is a Debian stable.
 
 
 I already thought about moving the Database to MySQL but there is
 already a MySQL Server on the same box, it is a slave for our MySQL
 master and used for hourly Backups of our database (Stop the
 replication, do the backups and start the replication again).
 I don't really like the idea of adding a DB to the Slave that isn't on
 the master, nor do i like the idea of hacking up some custom MySQL
 install that runs parallel caus that will cost me with every future
 update.

Perhaps a Postgres on the same host?

 To be honest, I didn't expect that SQLite could be the bottleneck; it
 just can't be that slow. What made me think it's the number of files
 is that when I do an ls in that directory it takes ~15min before I see
 any output.

That's more likely to be ls playing tricks on you: by default ls sorts 
its output, so it has to read the whole directory first. Try:

ls -f | head (or just ls -f)

-- 
Jesper



Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Jesper Krogh
Daniel Betz wrote:
 Hi!
 
 I have the same problem with a large number of files on one filesystem
 (Maildir).
 Now I have 2 concurrent jobs running and the backups take half the time.
 I haven't tested 4 concurrent jobs yet... :-)


That would be a really nice feature to have directly in the File daemon, 
if that is the case (multiple threads to speed up a single job).

Jesper
-- 
Jesper



Re: [Bacula-users] How to make the make_catalog_backup script work?

2008-11-27 Thread Ryan Novosielski

bacula wrote:
 Hi,
 
 I set up a new Bacula installation and the only thing that isn't working
 yet is the make_catalog_backup script (I'm using sqlite3). I use the
 original script that comes with the Ubuntu packages.
 
 here's the output of ls -al in /var/lib/bacula:
 
 drwx--  2 bacula bacula  4096 2008-11-27 15:03 .
 drwxr-xr-x 51 root   root4096 2008-11-27 14:15 ..
 -rw-r-  1 bacula bacula 49152 2008-11-27 14:57 bacula.db
 -rw-r-  1 bacula bacula   196 2008-11-27 14:55 bacula-dir.9101.state
 -rw---  1 bacula bacula 0 2008-11-27 12:28 bacula-dir.conmsg
 -rw-r-  1 root   root 196 2008-11-27 14:55 bacula-fd.9102.state
 -rw-r-  1 bacula tape 196 2008-11-27 14:00 bacula-sd.9103.state
 lrwxrwxrwx  1 root   root  20 2008-11-27 12:28 log -> 
 ../../log/bacula/log
 
 and here is the output for the directory after I ran the script:
 
 drwx--  2 bacula bacula  4096 2008-11-27 15:13 .
 drwxr-xr-x 51 root   root4096 2008-11-27 14:15 ..
 -rw-r-  1 bacula bacula 49152 2008-11-27 14:57 bacula.db
 -rw-r-  1 bacula bacula   196 2008-11-27 14:55 bacula-dir.9101.state
 -rw---  1 bacula bacula 0 2008-11-27 12:28 bacula-dir.conmsg
 -rw-r-  1 root   root 196 2008-11-27 14:55 bacula-fd.9102.state
 -rw-r-  1 bacula tape 196 2008-11-27 14:00 bacula-sd.9103.state
 -rw-r--r--  1 root   root   0 2008-11-27 15:13 .db
 lrwxrwxrwx  1 root   root  20 2008-11-27 12:28 log -> 
 ../../log/bacula/log
 -rw-r--r--  1 root   root  27 2008-11-27 15:13 .sql
 
 
 any help would be fine :)
 
 thanks in advance

Look in the catalog backup definition in the config files. You likely
don't have the syntax right (if you just call make_catalog_backup without
arguments, for example, it will NOT work).

If you're running it correctly, try running the lines from the backup
script independently and seeing what fails.
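
For comparison, the stock BackupCatalog job calls the script with the
database name and user as arguments, roughly like this (a sketch; the
path and credentials vary between installations):

  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup bacula bacula"

so the script knows which database to dump.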

--
  _  _ _  _ ___  _  _  _
 |Y#| |  | |\/| |  \ |\ |  | |Ryan Novosielski - Systems Programmer II
 |$| |__| |  | |__/ | \| _| |[EMAIL PROTECTED] - 973/972.0922 (2-0922)
 \__/ Univ. of Med. and Dent.|IST/AST - NJMS Medical Science Bldg - C630


Re: [Bacula-users] Re-read of last block OK, but block numbers differ

2008-11-27 Thread Martin Simmons
 On Thu, 27 Nov 2008 17:19:21 +, Allan Black said:
 
 Hi, all,
 
 Can anyone help me analyse the problem here? On more than
 one occasion, a DDS3 drive has produced this, when it reaches
 the end of a tape:
 
 23-Nov 22:03 gershwin-dir JobId 506: Using Device DDS3-0
 23-Nov 22:06 gershwin-sd JobId 506: End of Volume MainCatalog-004 at 
 57:4963 on device DDS3-0 (/dev/rmt/1cbn). Write of 64512 bytes got 0.
 23-Nov 22:06 gershwin-sd JobId 506: Error: Re-read of last block OK, but 
 block numbers differ. Last block=4963 Current block=4963.
 23-Nov 22:06 gershwin-sd JobId 506: End of medium on Volume MainCatalog-004 
 Bytes=28,148,843,520 Blocks=436,334 at 23-Nov-2008 22:06.
 
 Having checked the SD source, I believe what is happening is
 that the SD tried to write block 4963 to the tape, but got EOM
 (End Of Medium) back from the drive. It then re-read the last
 block successfully, but 
 
 Because of the EOM, the SD expected that it had failed to write
 block 4963. When it re-read the last block on the tape, it expected
 the block number (which is in the block header) to be 4962, hence
 the error message. However, I think that the drive successfully
 wrote the block, but also returned a SCSI sense indicating that the
 tape was now full.
 
 I put that tape back in the drive and read it with bls -k, getting
 this:
 
 [...]
 Block: 4961 size=64512
 Block: 4962 size=64512
 Block: 4963 size=64512
 26-Nov 13:44 bls JobId 0: End of file 58 on device dds3 (/dev/rmt/1cbn), 
 Volume MainCatalog-004
 
 Which makes me think that block 4963 has indeed been written to the
 tape.
 
 However: I also think that that catalog backup has actually been
 corrupted, because I think the SD would write the data from block
 4963 at the beginning of the new tape, which means if I restored
 that particular backup, I would find a block of data repeated.
 
 [Since this was only a catalog backup, it was no problem to run it
 again manually, so I do actually have a good catalog backup!]
 
 I do not believe this is a Solaris or SCSI problem; DLT drives (and
 autochangers) work perfectly on the same card (although on a different
 segment of the bus). I suspect (yuck) I may have to experiment with
 the configuration switches on the drive.
 
 Has anyone come across a similar situation before and, if so, is
 able to point me in the correct direction to debug it? In particular,
 does anyone have any experience of how a SCSI drive is supposed to
 behave at EOM?
 
 Bacula 2.4.3
 Solaris 10 x86
 HP C1557A DDS-3 autoloader

I would expect it to be OK, as long as the catalog says that block 4963 is on
the second tape.

Have you tried the btape fill test?  That should check Bacula's logic for this
case.
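
Something like this (a sketch; the config path is an assumption, the
device name is taken from your log):

  btape -c /etc/bacula/bacula-sd.conf /dev/rmt/1cbn
  *fill

It writes blocks until the tape is full and then checks that the
end-of-medium handling and the continuation onto the next volume work
as expected.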

__Martin



[Bacula-users] Access denied on access files to backup

2008-11-27 Thread s_mendes
Hello people,
I'm using Bacula-win v. 2.4.3 and everything was working fine; suddenly 
Bacula stopped backing up files from the clients. The message is: 
Cannot open "Path\File..." ERR: Access is denied

The backups are being stored in files (I'm not using tapes).

The curious thing is that nothing was changed and the backups were 
working perfectly. I don't know what to do... can someone please help 
me? Thanks!



Re: [Bacula-users] Access denied on access files to backup

2008-11-27 Thread James Harper
 Hello people,
 
 I'm using Bacula-win v. 2.4.3 and everything was working fine; suddenly
 Bacula stopped backing up files from the clients. The message is:
 Cannot open Path\File... ERR: Access is denied
 
 The backups are being stored in files (I'm not using tapes).
 
 The curious thing is that nothing was changed and the backups were
 working perfectly. I don't know what to do... can someone please help
 me?
 

I assume the "cannot open ..." message refers to a file on the Windows
system... as the Administrator user under Windows, can you access those
files? What are the security settings currently?

I have definitely seen NTFS get corrupted before in such a way that files
appear to exist but are inaccessible (although this was not using the
Backup API, which may or may not have had a problem accessing them).
Maybe run a chkdsk /f on the disk?

James
