Re: [Bacula-users] [Bacula-devel] Load slots timeout

2014-11-09 Thread Jesper Krogh

 On 09/11/2014, at 17.14, Dan Langille d...@langille.org wrote:
 
 Five minutes to load a tape is way too long for something which succeeds.  My 
 hypothesis: there is a problem with the process.

Yes and no.

If the tape was closed properly, then yes. If it wasn't, the first thing the
drive will do is pass over the entire tape to locate the end-of-data marker,
which can take well over 5 minutes and really isn't an error state.

And to my knowledge there is no way to detect this other than waiting.

Jesper
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Looking for recommendations on how to backup a Postgresql database using Bacula

2014-08-29 Thread Jesper Krogh
On 20/08/2014, at 23.22, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:

 FWIW, I usually pg_dump the schema to a text file and run a script that
 does '\copy ... to csv' for each table. Then commit them to git or rcs
 repository right there. Then rsync the repository to a couple of other
 servers. No bacula needed.

That is not going to give you a backup that is guaranteed to be consistent:
each table is copied in its own transaction, so the dumps may not reflect a
single point in time.
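For a consistent snapshot, a single pg_dump run is the simpler route, since it dumps all tables inside one transaction. A minimal sketch (the host, user, database name and target path are assumptions, not from the thread):

```shell
# Dump the whole database as one consistent snapshot; pg_dump opens a
# single transaction internally, so all tables reflect the same moment.
pg_dump -h db.example.org -U backup -Fc mydb > /backup/mydb.dump

# Restore into a scratch database with something like:
# pg_restore -h db.example.org -U backup -d mydb_restore /backup/mydb.dump
```

Bacula can then pick up the dump file like any other file, for example by running the dump from a ClientRunBeforeJob script.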

Jesper

--
Slashdot TV.  
Video for Nerds.  Stuff that matters.
http://tv.slashdot.org/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] LTO5 speed

2013-01-02 Thread Jesper Krogh
On 02/01/13 16:12, f.staed...@dafuer.de wrote:
   Is there anything wrong here? If I'm right, about 140MB/s for an
  LTO5 is quite ok, since the data cannot be compressed.

Well, if you are sending uncompressible data, then the picture above
looks like a fully saturated LTO5 drive operating at optimal speed,
since 140MB/s is the drive's native (uncompressed) rate.

-- 
Jesper

--
Master Java SE, Java EE, Eclipse, Spring, Hibernate, JavaScript, jQuery
and much more. Keep your Java skills current with LearnJavaNow -
200+ hours of step-by-step video tutorials by Java experts.
SALE $49.99 this month only -- learn more at:
http://p.sf.net/sfu/learnmore_122612 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] mtx and Overland Neo8000e

2012-04-25 Thread Jesper Krogh
Hi.

Although related, this is not a Bacula issue. Trying to get
Bacula working with an Overland Neo8000e changer, I have
got mtx status/load/unload to work, and mt reports
the tapes as expected. The changer is connected using SAS.

But the time it takes for status/load/unload is about 2 minutes per action,
where 1 minute and 45 seconds go by with nothing happening at all.

Has anyone seen/solved a similar issue?
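To pin down where the time goes, one can time the individual mtx calls against the changer's sg device (the device path and slot/drive numbers below are assumptions):

```shell
# Time a pure inventory read; on most libraries this returns quickly
# because it only reads the barcode inventory from the changer's memory.
time mtx -f /dev/sg4 status

# Time an actual move, which legitimately involves robot motion.
time mtx -f /dev/sg4 load 17 0
```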

Under Linux, do we have any alternatives to mtx for operating the
changer?

Sure, it'll work anyway; it'll just require a bit more patience than
usual to wait this long in update slots and mount.

-- 
Jesper

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] mtx and Overland Neo8000e

2012-04-25 Thread Jesper Krogh
On 25/04/12 20:59, Adrian Reyer wrote:
 2 Minutes sound reasonable, you can see stuff moving through the front
 window. What do you mean by 'nothing at all': just from the software
 point of view, or if you look inside the hardware (if that is possible
 with the 8000e)?
Well, the mtx status command shouldn't do anything but read the
barcodes and tape drives out of memory on the host; nothing
should move on that command. (That's how it works on the other
libraries I have worked with.) Scanning the complete set of labels is
a hugely different thing, and scanning more than 500 labels is expected
to take time.

-- 
Jesper

--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Restore jobid without treebuilding?

2011-06-09 Thread Jesper Krogh
Hi.

Can I instruct Bacula to restore a full jobid without tree building and
file selection?
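The bconsole restore command accepts a jobid together with the all and done keywords, which selects every file in the job and skips the interactive tree. A sketch (the jobid is a placeholder):

```shell
# Restore everything from job 1234 without building the directory tree;
# "all" marks all files, "done" ends file selection, "yes" auto-confirms.
echo "restore jobid=1234 all done yes" | bconsole
```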

Jesper



Re: [Bacula-users] Speed of backups

2011-05-04 Thread Jesper Krogh
On 2011-04-28 17:16, Alex Chekholko wrote:
 Try changing your Maximum Network Buffer size in your bacula-sd config.

 Something like
Maximum Network Buffer Size = 262144 #65536
Maximum block size = 262144

 Keep in mind that this will make your sd unable to read previous
 backups, IIRC.
Do you have more on this? I didn't see a warning about that in the
documentation, which I definitely would expect if that were the case.
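For reference, the directives quoted above live in the Device resource of bacula-sd.conf; a sketch under assumptions (the device name, path and media type are examples, not from the thread):

```
Device {
  Name = LTO4-Drive-0
  Archive Device = /dev/nst0
  Media Type = LTO-4
  # Larger blocks can improve throughput, but volumes written with a
  # different Maximum Block Size may not be readable afterwards, so
  # change this with care.
  Maximum Block Size = 262144
  Maximum Network Buffer Size = 262144
}
```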

-- 
Jesper

--
WhatsUp Gold - Download Free Network Management Software
The most intuitive, comprehensive, and cost-effective network 
management toolset available today.  Delivers lowest initial 
acquisition cost and overall TCO of any competing solution.
http://p.sf.net/sfu/whatsupgold-sd
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Shutting down SD / blocks kernel

2011-01-24 Thread Jesper Krogh
Jan 20 15:40:34 kernel: INFO: task bacula-sd:28017 blocked for more
 than 120 seconds.
 Jan 20 15:40:34 kernel: echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables this message.
 Jan 20 15:40:34 kernel: bacula-sd D 000d6fd6dc61fc69 0
 28017  1   30457 (NOTLB)
 Jan 20 15:40:34 kernel:  8800a2ccbcd8  0282
 8800f8de2000  8807b185
 Jan 20 15:40:34 kernel:  000a  8800d7051860
 880009f64080  00039dbb
 Jan 20 15:40:34 kernel:  8800d7051a48  80336433
 Jan 20 15:40:34 kernel: Call Trace:
 Jan 20 15:40:34 kernel:  [8807b185] :scsi_mod:scsi_request_fn
 +0x347/0x39c
 Jan 20 15:40:34 kernel:  [80336433] blk_execute_rq_nowait
 +0x89/0xa0
 Jan 20 15:40:34 kernel:  [8022d660] wake_up_bit+0x11/0x22
 Jan 20 15:40:34 kernel:  [80262fb3] wait_for_completion+0x7d/
 0xaa
 Jan 20 15:40:34 kernel:  [80288e5e] default_wake_function
 +0x0/0xe
 Jan 20 15:40:34 kernel:  [881dbe23] :st:st_do_scsi+0x1f4/0x221
 Jan 20 15:40:34 kernel:  [881dc932] :st:st_int_ioctl
 +0x5f2/0x92b
 Jan 20 15:40:34 kernel:  [881dea58] :st:st_ioctl+0xaa5/0xe1f
 Jan 20 15:40:34 kernel:  [8023a4a7] may_delete+0x69/0x138
 Jan 20 15:40:34 kernel:  [80243fc9] do_ioctl+0x55/0x6b
 Jan 20 15:40:34 kernel:  [802316c1] vfs_ioctl+0x457/0x4b9
 Jan 20 15:40:34 kernel:  [802afcf3] audit_syscall_entry
 +0x180/0x1b3
 Jan 20 15:40:34 kernel:  [8024e52b] sys_ioctl+0x59/0x78
 Jan 20 15:40:34 kernel:  [802602f9] tracesys+0xab/0xb6


 it seems that the tape drive (LTO-3 TANBERG/DELL) needs more time than
 the sd-daemon expects !?

I've been told by a tape library technician that LTO incorporates
a small chip in each cartridge that holds an index of what is on the
tape and where. If the drive gets an unclean shutdown (power failure,
SD crash or similar), it knows the chip isn't current and has to scan
the entire tape to update the chip before any other action can take
place. To the OS this looks like the daemon hanging, as the operation is
in fact taking far longer than 120s.

That fits perfectly with the cases where I see the above messages on
my storage daemon.

-- 
Jesper

--
Special Offer-- Download ArcSight Logger for FREE (a $49 USD value)!
Finally, a world-class log management solution at an even better price-free!
Download using promo code Free_Logger_4_Dev2Dev. Offer expires 
February 28th, so secure your free ArcSight Logger TODAY! 
http://p.sf.net/sfu/arcsight-sfd2d
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] VirtualFull using tape drives

2011-01-08 Thread Jesper Krogh
On 2011-01-06 23:00, James Harper wrote:
 Hi

 Anyone doing VirtualFull backups using tapedrives only?
 Can they shortly describe their setup? pro/con/etc?

 I'm not, but can tell you it's probably not a good idea - you'd end up
 with extra wear and tear on your tape drives, as well as greatly reduced
 performance while the target tape drive waited for data to be read from
 your source tape drive(s).

 I do a weekly full to do disk, then 3 x daily incremental to disk, and
 then a virtual full to tape every night (for DR purposes). Disk space is
 cheap, and you could easily build a disk array that could keep your
 tapedrive fed with data for much less than the cost of another tape
 drive (and tapes).

A quick count shows that my current Incremental pool holds around 350TB
worth of data; in terms of cost, infrastructure and handling, tapes are
way better at that scale.

If I could cut the retention times significantly, then a disk solution
might be favorable.

-- 
Jesper


--
Gaining the trust of online customers is vital for the success of any company
that requires sensitive data to be transmitted over the Web.   Learn how to 
best implement a security strategy that keeps consumers' information secure 
and instills the confidence they need to proceed with transactions.
http://p.sf.net/sfu/oracle-sfdevnl 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] VirtualFull using tape drives

2011-01-06 Thread Jesper Krogh
Hi

Is anyone doing VirtualFull backups using tape drives only?
Could you briefly describe your setup? Pros/cons/etc.?

Thanks.
-- 
Jesper

--
Learn how Oracle Real Application Clusters (RAC) One Node allows customers
to consolidate database storage, standardize their database environment, and, 
should the need arise, upgrade to a full multi-node Oracle RAC database 
without downtime or disruption
http://p.sf.net/sfu/oracle-sfdevnl
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Why does bacula keep giving me these errors

2010-05-27 Thread Jesper Krogh
On 27/05/2010, at 01.34, randa...@bioinfo.wsu.edu wrote:

 I have been using bacula for some time with an overland neo 2000.

 However, Sometimes I receive these errors:

 You have messages.
 *messages
 26-May 16:07 mlbkp1-sd JobId 3152: 3304 Issuing autochanger load  
 slot 17, drive 0 command.
 26-May 16:08 mlbkp1-sd JobId 3152: 3305 Autochanger load slot 17,  
 drive 0, status is OK.
 26-May 16:08 mlbkp1-sd JobId 3152: Please mount Volume 68L3 or  
 label a new one for:
Job:  FS1_Data_Home_Dirs.2010-05-26_16.07.00_04
Storage:  neodrive-1 (/dev/st0)
Pool: Mainlab-six-month-pool-incremental
Media type:   Ultrium-3

 Clearly the tap changer has loaded the tape into drive 0 from slot  
 17, which I have verified actually does have volume 68L3.

 So why does it say please mount volume or label a new one?  Why does  
 it keep saying this despite the fact the tape is loaded in the drive?

 Anyone know why.  Typically, I just label a new tape, and that fixes  
 the problem, but why the hell does it not just use the tape that it  
 loaded into the drive in the first place instead of complaining  
 about it.

Try modifying the mtx-changer script to add a sleep 10 after the
mtx load command. Some libraries return from the mtx command before the
tape is fully ready.
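A sketch of what that change could look like (the argument order follows the stock mtx-changer script, i.e. ctl-device, command, slot, archive-device, drive-index, but check your own copy before editing):

```shell
#!/bin/sh
# Sketch of the relevant branch of mtx-changer with the extra pause.
MTX=/usr/sbin/mtx
ctl=$1; cmd=$2; slot=$3; device=$4; drive=$5

case $cmd in
  load)
    ${MTX} -f $ctl load $slot $drive
    rtn=$?
    sleep 10   # give the drive time to settle before bacula-sd probes it
    exit $rtn
    ;;
esac
```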

Jesper




Re: [Bacula-users] Concurrent spooling and despooling

2010-02-16 Thread Jesper Krogh
Daniel Kamm wrote:
 That's maybe a stupid question, but I really wonder...
 
 Using Disk Spooling prior to write data to tape, heads in a sequential 
 write order:
 a) write data from backup client to disk spool directory
 b) write data from spool directory to tape
 
 Why are those tasks done sequentially?
 
 Let's say if the spool is reaching a waterlevel mark, the storage daemon 
 will start to write the spool data to tape _and_ at the same time the 
 backup client still sends data to the spool directory. Isn't that possible?

It's even on the wishlist for future enhancements...

-- 
Jesper

--
SOLARIS 10 is the OS for Data Centers - provides features such as DTrace,
Predictive Self Healing and Award Winning ZFS. Get Solaris 10 NOW
http://p.sf.net/sfu/solaris-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Dynamically assign volumes to pools

2010-02-02 Thread Jesper Krogh
Uwe Schuerkamp wrote:
 I think the scratch pool should provide what you're looking for, but
 I believe once volumes have been reassigned to another pool they'll
 stay there for the rest of their lifetimes (I'm still on 2.x, so
 things may have changed in the meantime).

No, you can set RecyclePool for the individual pools and let them go
back to the scratch pool.. even on 2.x..
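A sketch of the relevant bacula-dir.conf pieces (the pool names and retention time are examples):

```
Pool {
  Name = Scratch
  Pool Type = Backup
}

Pool {
  Name = Incremental
  Pool Type = Backup
  Volume Retention = 30 days
  AutoPrune = yes
  Recycle = yes
  # When a volume in this pool is recycled, move it back to Scratch
  # instead of keeping it here:
  RecyclePool = Scratch
}
```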

-- 
Jesper

--
The Planet: dedicated and managed hosting, cloud storage, colocation
Stay online with enterprise data centers and the best network in the business
Choose flexible plans and management services without long-term contracts
Personal 24x7 support from experience hosting pros just a phone call away.
http://p.sf.net/sfu/theplanet-com
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Serious issues, should I start over?

2010-01-31 Thread Jesper Krogh
Eric Downing wrote:
 Ok in my bacula-dir.conf file there was an entry for 'diraddress=127.0.0.1' 
 however   commented that out earlier by myself upon a suggestion I read 
 somewhere. I uncommented it and changed it to the local IP. I restarted 
 bacula-director and tried to bconsole in with the same results.
 
 Additionally, netstat -a|grep 3306 checking for MySQL didn't show, then the 
 same 
 command but used the bacula ports, and no dice. in /etc/services the bacula 
 services are listed. Furthermore ps -Af shows nothing Bacula or MySQL 
 related. I 
 am unfamiliar with MySQL however I have used MSSQL a fair amount in the past 
 (GUI stuff though). My box running Ubuntu is CLI only.
 
 Not sure where to go from here. Thoughts?

MySQL has a similar configuration option in /etc/mysql/my.cnf
(bind-address); go change that one. You may also have to look at
skip-networking, depending on the version of MySQL you use.

Restart mysql afterwards.
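A sketch of the relevant /etc/mysql/my.cnf settings (the address is an example):

```
[mysqld]
# Listen on the LAN address instead of only on localhost:
bind-address = 192.168.1.10
# Make sure this line is absent or commented out, or TCP is disabled entirely:
# skip-networking
```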

-- 
Jesper

--
The Planet: dedicated and managed hosting, cloud storage, colocation
Stay online with enterprise data centers and the best network in the business
Choose flexible plans and management services without long-term contracts
Personal 24x7 support from experience hosting pros just a phone call away.
http://p.sf.net/sfu/theplanet-com
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Problem changing tapes due to inconsistency between 'list media' and 'query catalog'?

2010-01-30 Thread Jesper Krogh
Thomas Dhollander wrote:
 Dear all
 
 In our current setup, Bacula does not seem to automatically load the
 correct tape in the drive for its backups. We did specify the correct
 autochanger options in the storage daemon conf file and made sure the
 device uses the autochanger.
 
 When we execute the 'list media' command in bconsole, we get the
 following output:

There is an undocumented feature (bug) in Bacula (I don't know if it's
fixed in recent versions; I'm still at 2.something), where Bacula fails to
load the volume if you:

* start a job
* it requires a tape not in the changer
* you insert the tape and run update slots
* run mount (to push the job back into operation).

This is (or was) a guaranteed failure mode where it would say it couldn't
get the volume... over and over.

Doing it in the other order:

* insert tapes
* update slots
* start job

just works.
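In bconsole terms, the order that works looks like this (the storage and job names are placeholders):

```shell
# 1. Physically insert the tapes, then refresh Bacula's view of the changer:
echo "update slots storage=Autochanger" | bconsole

# 2. Only then start (or restart) the job:
echo "run job=BackupJob yes" | bconsole
```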

This is quite a PITA on restores, since it mostly requires starting the
job twice: once to find the list of wanted tapes, and again after you have
subsequently changed the tapes in the changer.

It could be similar to your issue.

-- 
Jesper

--
The Planet: dedicated and managed hosting, cloud storage, colocation
Stay online with enterprise data centers and the best network in the business
Choose flexible plans and management services without long-term contracts
Personal 24x7 support from experience hosting pros just a phone call away.
http://p.sf.net/sfu/theplanet-com
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Serious issues, should I start over?

2010-01-30 Thread Jesper Krogh
Eric Downing wrote:
 OK, giving up. I think I've muddied things up too much so I'm going to 
 restore 
 from a clean backup.
 
 Are there any good step by steps out there? I imagine the ones I'm using 
 aren't 
 working. I'm looking to first backup the local box (to a USB drive) then make 
 the leap to backing up my various boxes around the house here. OS is Ubuntu 
 9.04 
 (Deb5)

Ok... 29 minutes... that was not much response time you gave the list
there, even on a Sunday morning.

It looks like you have either:
* a firewall blocking your installation, or
* MySQL/Bacula not configured to listen on anything other than 127.0.0.1.

Double-check both things and you'll get further.

-- 
jesper

--
The Planet: dedicated and managed hosting, cloud storage, colocation
Stay online with enterprise data centers and the best network in the business
Choose flexible plans and management services without long-term contracts
Personal 24x7 support from experience hosting pros just a phone call away.
http://p.sf.net/sfu/theplanet-com
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Which tapes to put into the tape library?

2010-01-21 Thread Jesper Krogh
Dan Langille wrote:
 I have about 100 tapes.  My tape library holds only 10 tapes.
 
 How do you decide which tapes to load into the library?  What strategy do
 you use?

We've got a Perl script that runs as a post-admin script after each
cycle. It has knowledge of the pools, their retention times and the
physical layout of the library, so tapes to be removed can be
transferred to slots easily accessible when standing in front of the
library. There are 185 slots and 2 drives in the library, and 694 mixed
LTO3/LTO4 tapes in 4 pools (Archive, Full, Differential, Incremental).

The strategy is to keep tapes in the library for the first weeks after
they have been used for backup, to improve the chance of completing a
restore without manually having to put tapes in the library; then
suggest they be removed, and subsequently put back into the library
a couple of weeks before they would be automatically recycled.

Drop me an email if you want it; it needs to be hacked to fit your
library and so on.

-- 
Jesper

--
Throughout its 18-year history, RSA Conference consistently attracts the
world's best and brightest in the field, creating opportunities for Conference
attendees to learn about information security's most important issues through
interactions with peers, luminaries and emerging and established companies.
http://p.sf.net/sfu/rsaconf-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] LTO3 tape capacity lower than expected

2010-01-07 Thread Jesper Krogh
Thomas Mueller wrote:
 maybe this is the reason for the extra mb/s.
 Modifying the Maximum Block Size to more than 262144 didn't change much
 here. But changing the File Size did. Much.
 
 I found a post from Kern saying that Quantum told him, that about 262144 
 is the best blocksize - increasing it would increase error rate too. 
 
 Anyway, 40 MB/s seems a bit low, even with the defaults. Before tuning
 our setup I got ~75 MB/s. Are you spooling the data to disk or writing
 directly to tape?
 
 yes, i was surprised too that it is that slow. 
 
 I'm spooling to disk first (2x 1TB disk as RAID0, dedicated to bacula for 
 spooling). i will also start a sequential read test to check if the disks 
 are the bottleneck. The slow job was the only one running.
 
 watching iotop i saw the maximum file size problem: it stops writing 
 after 1 GB (default file size) and writes to the DB and then continues 
 writing. so for a LTO-4 it stops nearly 800 times until the tape is full. 

I've also increased the Maximum File Size. I'm spooling off a ram-disk to
LTO4 tapes and seeing rates like this:
04-Jan 23:09 bacula-sd JobId 33739: Despooling elapsed time = 00:01:22,
Transfer rate = 121.9 M bytes/second
04-Jan 23:15 bacula-sd JobId 33739: Despooling elapsed time = 00:01:24,
Transfer rate = 119.0 M bytes/second
04-Jan 23:21 bacula-sd JobId 33739: Despooling elapsed time = 00:01:24,
Transfer rate = 119.0 M bytes/second
04-Jan 23:26 bacula-sd JobId 33739: Despooling elapsed time = 00:01:21,
Transfer rate = 123.4 M bytes/second
04-Jan 23:31 bacula-sd JobId 33739: Despooling elapsed time = 00:01:25,
Transfer rate = 117.6 M bytes/second
04-Jan 23:40 bacula-sd JobId 33734: Despooling elapsed time = 00:01:06,
Transfer rate = 151.5 M bytes/second
04-Jan 23:46 bacula-sd JobId 33734: Despooling elapsed time = 00:00:56,
Transfer rate = 178.5 M bytes/second
04-Jan 23:53 bacula-sd JobId 33734: Despooling elapsed time = 00:01:00,
Transfer rate = 166.6 M bytes/second
04-Jan 23:57 bacula-sd JobId 33734: Despooling elapsed time = 00:00:57,
Transfer rate = 175.4 M bytes/second
05-Jan 00:02 bacula-sd JobId 33734: Despooling elapsed time = 00:01:02,
Transfer rate = 161.2 M bytes/second
05-Jan 00:08 bacula-sd JobId 33734: Despooling elapsed time = 00:01:00,
Transfer rate = 166.6 M bytes/second
05-Jan 00:12 bacula-sd JobId 33734: Despooling elapsed time = 00:00:52,
Transfer rate = 192.3 M bytes/second
05-Jan 00:17 bacula-sd JobId 33734: Despooling elapsed time = 00:00:59,
Transfer rate = 169.4 M bytes/second
05-Jan 00:22 bacula-sd JobId 33734: Despooling elapsed time = 00:00:58,
Transfer rate = 172.4 M bytes/second
05-Jan 00:29 bacula-sd JobId 33734: Despooling elapsed time = 00:00:58,
Transfer rate = 172.4 M bytes/second
05-Jan 00:37 bacula-sd JobId 33734: Despooling elapsed time = 00:00:58,
Transfer rate = 172.4 M bytes/second

On LTO3 tapes in the LTO4 drive it tops out at around 120MB/s.
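For reference, the knobs involved here sit in the Device resource of bacula-sd.conf; a sketch under assumptions (the names, paths and sizes are examples, not my exact configuration):

```
Device {
  Name = LTO4-Drive-0
  Archive Device = /dev/nst0
  Media Type = LTO-4
  Spool Directory = /ramdisk/spool
  Maximum Spool Size = 40g
  # Write larger files between filemarks so the drive stops to update
  # the catalog far less often than with the 1 GB default:
  Maximum File Size = 10g
}
```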

-- 
Jesper

--
This SF.Net email is sponsored by the Verizon Developer Community
Take advantage of Verizon's best-in-class app development support
A streamlined, 14 day to market process makes app distribution fast and easy
Join now and get one step closer to millions of Verizon customers
http://p.sf.net/sfu/verizon-dev2dev 
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Very slow interactive restore

2009-11-24 Thread Jesper Krogh
Christoph Litauer wrote:
 Christoph Litauer schrieb:
 Jesper Krogh schrieb:
 Christoph Litauer wrote:
 Thanks! One last question (hopefully): How big is /var/lib/mysql/ibdata1?
 282GB on ext3

 Dear Jesper,

 in the meantime I made a test setup - not successfull 'til now regarding
 the performance. What I forgot to ask: What mysql-DB version are you
 running?

Standard Ubuntu Hardy mysql version.

 And another demand, please:
 
 Could you - or someone else - please select any JobId and execute the
 following (my)sql-statement:
 
 mysql> EXPLAIN SELECT Path.Path, Filename.Name, File.FileIndex,
 File.JobId, File.LStat
 FROM (
   SELECT max(FileId) AS FileId, PathId, FilenameId
   FROM (
     SELECT FileId, PathId, FilenameId
     FROM File
     WHERE JobId IN (insert your JobId here)
   ) AS F GROUP BY PathId, FilenameId
 ) AS Temp JOIN Filename ON (Filename.FilenameId = Temp.FilenameId) JOIN
 Path ON (Path.PathId = Temp.PathId) JOIN File ON (File.FileId =
 Temp.FileId) WHERE File.FileIndex > 0 ORDER BY JobId, FileIndex ASC

mysql> EXPLAIN SELECT Path.Path, Filename.Name, File.FileIndex,
File.JobId, File.LStat FROM ( SELECT max(FileId) AS FileId, PathId,
FilenameId FROM ( SELECT FileId, PathId, FilenameId FROM File WHERE
JobId IN (32348) ) AS F GROUP BY PathId, FilenameId ) AS Temp JOIN
Filename ON (Filename.FilenameId = Temp.FilenameId) JOIN Path ON
(Path.PathId = Temp.PathId) JOIN File ON (File.FileId = Temp.FileId)
WHERE File.FileIndex > 0 ORDER BY JobId, FileIndex ASC;
+----+-------------+------------+--------+---------------------+---------+---------+-----------------+------+---------------------------------+
| id | select_type | table      | type   | possible_keys       | key     | key_len | ref             | rows | Extra                           |
+----+-------------+------------+--------+---------------------+---------+---------+-----------------+------+---------------------------------+
|  1 | PRIMARY     | <derived2> | ALL    | NULL                | NULL    | NULL    | NULL            |   37 | Using temporary; Using filesort |
|  1 | PRIMARY     | Path       | eq_ref | PRIMARY             | PRIMARY | 4       | Temp.PathId     |    1 |                                 |
|  1 | PRIMARY     | Filename   | eq_ref | PRIMARY             | PRIMARY | 4       | Temp.FilenameId |    1 |                                 |
|  1 | PRIMARY     | File       | eq_ref | PRIMARY             | PRIMARY | 4       | Temp.FileId     |    1 | Using where                     |
|  2 | DERIVED     | <derived3> | ALL    | NULL                | NULL    | NULL    | NULL            |   37 | Using temporary; Using filesort |
|  3 | DERIVED     | File       | ref    | JobId,dump_info_idx | JobId   | 4       |                 |   37 | Using index                     |
+----+-------------+------------+--------+---------------------+---------+---------+-----------------+------+---------------------------------+
6 rows in set (0.00 sec)



--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Comments about HP Ultrium Drive on Bacula

2009-11-18 Thread Jesper Krogh
Eduardo Sieber wrote:
 Hello people!
 
 We're thinking about buy a LTO Ultrium HP external drive.
 To me more specific, this one:
 http://h71016.www7.hp.com/dstore/MiddleFrame.asp?page=configProductLineId=450FamilyId=1249BaseId=23116oi=E9CEDBEID=19701SBLID=
 
 
 Is there any person on this list, who have experience with this drive on
 bacula? It works well?

I would be very surprised if it didn't work. I have two LTO4 drives
sitting on Fibre Channel in a tape library, and apart from the
changer device and a different connection interface, everything looks
exactly the same to Linux/Bacula.



-- 
Jesper

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Comments about HP Ultrium Drive on Bacula

2009-11-18 Thread Jesper Krogh
Joseph L. Casale wrote:
 We're thinking about buy a LTO Ultrium HP external drive.
 
 FWIW, I have had very bad luck w/ HP branded LTO's, for several years
 they had low MTBF and not last long for me...

What do you use instead? Our former Quantum changer was equipped with HP
tape drives; ditto our current StorageTec SL500. From my limited view,
HP is inside everything.

-- 
Jesper

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Maximum volume size

2009-11-17 Thread Jesper Krogh
Federico Alberto Sayd wrote:
 Hello, I'm new to Bacula and this is my first message. I have a question:
 
 When I backup to disk, what is the recommended maximum volume size? 
 Official doc says that with versions up to and including 1.39.28 the 
 recommended maximun volumen size is 5 GB. I am using Bacula 2.4.4 on 
 Debian Lenny, this info apply for my installation?
 
 Other non official info provides the next tip 
 (http://www.lucasmanual.com/mywiki/Bacula#VolumeManagement):
 
 Limit your volume size to about 10%-15% of the HD capacity. IE, 1TB 
 drive, volume size 100GB. And then set your max volume on the pool to 
 ((HD space/volume space) - 1) so you don't need to worry about a full HD.
 
 There is an official recommendation about volume size?
 
 What is your experience on this topic?

I only have experience with tape volumes, but you should keep in mind
that you cannot recycle (free the space of) a volume before the last job
on that volume has expired. That is probably the main argument for
keeping the size down. What that translates into in your setup is hard
to guess; I suggest you aim for something like one job per volume and
pick the maximum volume size from that. If your largest backup is 20GB,
then 5GB would be fair; if your largest is 8TB, then...
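For disk volumes those limits are set per pool in bacula-dir.conf; a sketch (the numbers follow the 1TB-drive example from the quoted tip, and the pool name is a placeholder):

```
Pool {
  Name = File
  Pool Type = Backup
  # 100GB volumes on a 1TB disk, capped at 9 volumes so the disk
  # can never quite fill up:
  Maximum Volume Bytes = 100G
  Maximum Volumes = 9
  Recycle = yes
  AutoPrune = yes
}
```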

-- 
Jesper

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Very slow interactive restore

2009-11-12 Thread Jesper Krogh
Christoph Litauer wrote:
 
 Thanks! One last question (hopefully): How big is /var/lib/mysql/ibdata1?


282GB on ext3

-- 
Jesper

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Very slow interactive restore

2009-11-11 Thread Jesper Krogh
Christoph Litauer wrote:
 Jesper Krogh schrieb:
 Christoph Litauer wrote:
 3.) What are your directory building times when using interactive 
 restores?
 For 8.5 million files ~ 2 minutes.

 That would be _very_ nice!
 Jesper, please give the following information, so I can compare our setups:
 1.) Output of 'show index from File'
 http://krogh.cc/~jesper/show-index.txt

 2.) Your /etc/my.cnf
 http://krogh.cc/~jesper/my.cnf

 3.) How much memory your system has.
 48GB where typically 40GB are uses as spool-area for 2 x LTO4 devices
 mounted as a ramfs

 
 Sorry Jesper, a few more questions:
 
 1.) Where is the 'dump_info_idx' index from? I don't think it is created
 by make_mysql_tables ...

I may have created it manually, years ago. The setup has actually been
running since May 2006, and the File table holds around 1.8 billion
(10^9) rows.

 2.) Is your machine/os 32bit or 64bit based?

64bit

 3.) Do you use InnoDB- or MyISAM-Tables?

Hmm, for some weird reason it is a mixture, but the large tables are
all InnoDB.

-- 
Jesper

--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Very slow interactive restore

2009-11-10 Thread Jesper Krogh
Christoph Litauer wrote:
 3.) What are your directory building times when using interactive restores?
 For 8.5 million files ~ 2 minutes.

 That would be _very_ nice!
 Jesper, please give the following information, so I can compare our setups:
 1.) Output of 'show index from File'

http://krogh.cc/~jesper/show-index.txt

 2.) Your /etc/my.cnf

http://krogh.cc/~jesper/my.cnf

 3.) How much memory your system has.

48GB where typically 40GB are used as spool-area for 2 x LTO4 devices
mounted as a ramfs

-- 
Jesper



Re: [Bacula-users] Very slow interactive restore

2009-11-09 Thread Jesper Krogh
Christoph Litauer wrote:
 Most often I need to restore whole directories (recursive) out of the
 last backup or using a backup some time ago. Because there is no bacula
 command for restores of directories, I use the interactive restore method.
 When selecting a job that contains about 10 Mio. files, building the
 directory tree lasts about 30 minutes -- which is very long.

Add more memory.. I'll bet your system is swapping out while building
the tree.

 So I have a few questions:
 1.) Any hints how to recover a directory recursive without using
 interactive mode?

It's something like this:
http://krogh.cc/~jesper/bacula-restore.txt

 2.) Any hints concering mysql optimizations for a bacula database as big
 as mine?

Add more memory.

 3.) What are your directory building times when using interactive restores?

For 8.5 million files ~ 2 minutes.

-- 
Jesper



Re: [Bacula-users] Need input on an Issue

2009-11-09 Thread Jesper Krogh
jeffrey Lang wrote:
 I noticed in the manual this weekend there is an issue with bacula about
 moving files/directoies into an already backed up location.  Could this
 be the cause of my problem?

I think that's the most likely cause. Say you un-tar a tarball: then
all directories and files will be dated somewhere in the past and won't be
picked up by an incremental run. In order to get these files you need to
run Full backups more often. Directories moved around show the same pattern.

Wonder if there is a feature request sitting in bacula for accurate
backups? Looking at the projects file it doesn't look like it.

-- 
Jesper



Re: [Bacula-users] Monthly Backup Schedule Suggestion

2009-11-09 Thread Jesper Krogh
j...@alpha.net.au wrote:
 Hi
 
 I am backing up a server with a lot of static content and very little 
 dynamic one.  I would like to have , on any given day, a full backup for 
 the past 30 days but am not sure how to choose a proper schedule. The 
 directories are backed up onto a disk partition which I suppose give us 
 a good degree of flexibility in selecting any number of volumes and/or 
 any combination of full, incremental and differential archives.

Just do monthly full, weekly differential and daily incremental, but any
particular choice would depend on:

* What's the level of granularity you want on your backups?
 - E.g. should a file written and deleted X hours later be backed up?
* How much disk space do you have available for your backup?
 - Each full backup will take up roughly the same amount of space on the
   backup end as the data does on the source.

The level of granularity will set the frequency of all your backup runs.
If you need to catch a file written and deleted 12 hours later, you need
to run backups twice a day (at least).

Differential runs can save disk-space on your storage system, but will
sacrifice your granularity.

Say your retention times (on the above schedule) are 2 months for full
backups, 1 month for differentials and 2 weeks for incrementals. Then you can:
* Within the last week restore to a granularity of 1 day
* Within the last month restore to a granularity of 1 week
* Within the last 2 months restore to a granularity of 1 month.

Play around with the numbers and figure out what you need.
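The cycle described above can be sketched as a Schedule resource in
bacula-dir.conf; the name and run times below are just placeholders:

```
Schedule {
  Name = "MonthlyCycle"
  Run = Full 1st sun at 2:05
  Run = Differential 2nd-5th sun at 2:05
  Run = Incremental mon-sat at 2:05
}
```

Adding a second Incremental line (e.g. "at 14:05") doubles the daily
granularity at the cost of more volume usage.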

Jesper
-- 
Jesper






Re: [Bacula-users] Using bacula for local and cloud backup

2009-11-09 Thread Jesper Krogh
wvoice wrote:
 Hi,
 
 I'm currently using bacula-3.0.3 (and some systems have 3.1.4) to backup a
 bunch of Linux and Windows machines. These backups are local and everything
 is working fantastically with that. 
 
 However, I'd like to be able to backup the backup data offsite. Right now,
 my storage pool is on a file volume located in /data/backup/. I'm trying to
 figure out the best way to do this. My usual mechanism is to use rsync for
 my offsite replication. But these files are quite large now. Just backing up
 the file will be very costly, unless I can mount it and get access to the
 contents. Then I can copy the diffs.

Isn't the quick'n'dirty solution just to make the individual
volume-files smaller? Say 100MB or similar. Then for
differential/incremental runs you would eventually end up only
transferring the diffs.
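A sketch of how that could look in the Pool resource (pool name, size
and label format are just examples):

```
Pool {
  Name = File-Pool
  Pool Type = Backup
  Maximum Volume Bytes = 100M   # start a new volume file every ~100MB
  Label Format = "Vol-"         # auto-label the disk volume files
}
```

With many small volume files, rsync only has to re-send the files whose
contents changed since the last run.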

Jesper
-- 
Jesper



Re: [Bacula-users] autochanger and barcode

2009-10-17 Thread Jesper Krogh
 But the use of pools on what for? 
 Just for completeness I must save this data:
 
 1) Oracle db (size unknown)
 2) 5-6 MySQL db (about 25 Gb)
 3) Some samba share, with office files (5Gb)
 4) A subversion repository (size unknown)
 5) Mail server (size unknown)
 6) The configuration of 8 servers
 7) Other things that now I do not remember :)
 
 I should make a pool for each points?

No, you should make the pools reflect your retention times, since you
will most likely wish to keep your Full backups for a longer time than
your incrementals. Recycling of volumes is based on the last-written
time of the volume.

I suggest:

Full-Pool = All full backups.
Incremental-Pool = All incremental backups.
Differential-Pool = All differential backups.

You would then prefer to keep full backups for the longest period,
Differentials for a shorter and incrementals for even shorter.

In our setup we keep Full for 1y, Differential for 1y and Incremental for
6m. This enables us to:

* restore any file with a day's granularity for 6 months.
* restore any file with a week's granularity for 1 year.
* - and then we actually have an Archive Pool that is written to every
6 months and kept forever.

If you then don't want to have an open tape for longer than a period,
set Max Use Duration to that. Then after that period of time
you can move the tape to a different location.

If your retention times differ across the above systems, then you most
likely need more pools. But I would definitely suggest just buying one
more pack of tapes instead.


Jesper
-- 
Jesper



Re: [Bacula-users] autochanger and barcode

2009-10-17 Thread Jesper Krogh
Nicola Quargentan wrote:
 Jesper Krogh ha scritto:
 Nicola Quargentan wrote:
 I'm very newbie and I want to use a DELL TL2000 autochanger.
 I made 7 pool in config file, one for each day of week:
 Monday, Tuesday, etc.
 Monday, Tuesday and so on is mostly used when you don't have an autochanger.
 
 Oh you are right: before autochanger I have only a single tape drive, 
 and I change the tape every day :O
 
 When you do have one I recommend setting up Pools so then reflect your
 retention policies. In my setup we:
 * Keep full backups for a year (run monthly)
 * Keep differential backups for a year (run weekly)
 * Keep incremental backups for 6 months (run daily).

 This translates into 3 Pools, Full, Differential and Incremental. On
 these pools the retentiontime is set and then automatically move back to
 Scratch-pool when retention-time has expired. So all fresh volumes
 just goes directly into Scratch pool.

 Bacula will try to fill tapes completely, e.g. fill monday's tape with
 tuesday's data, if not explicitly configured otherwise.
 
 Uhmm, I want to be independent from bacula. My purpose is to 
 occasionally carry out my tapes, and store them in a strongbox :)
 If my bacula backup server crashes I don't want to lose all my data.
 So I want to know where bacula stores my data.

Then I'd suggest that you run mysqldump on the catalog and burn it to a
CD or similar occasionally.
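A minimal crontab sketch for that (user, database name and target path
are assumptions; --single-transaction only guarantees a consistent dump
for InnoDB tables):

```
# dump the bacula catalog nightly, gzipped, so it can be burned to CD
30 4 * * *  mysqldump -u bacula --single-transaction bacula | gzip > /var/backups/bacula-catalog.sql.gz
```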

I have a small perl-script that can process the catalog and give a list
of files/timestamps/paths/clients stored on a volume (it can give
information for use with bextract if needed). Would that be of any interest?

-- 
Jesper



Re: [Bacula-users] Best backup strategy to use the minimum size

2009-10-17 Thread Jesper Krogh
Cedric wrote:
 Do a Differential once every 90 days and Incrementals the rest of the time.
 In theory, you only need to run one Full backup. Ever. In practice, you'll
 probably want to create a new Full backup every year or so to clear out the
 obsolete data and make your differentials smaller (and faster).

Also keep restores in mind: using the above configuration, running a
full yearly and a differential every 90 days could require you to find
1 (full) + 1 (differential) + 89 (incremental) = 91 tapes to do a
restore. Either you have a huge changer, or you are heading into an
administrative nightmare.
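The worst-case arithmetic behind that number can be sketched like this
(assuming one backup set per tape and daily incrementals):

```python
def restore_chain_length(diff_interval_days):
    """Worst-case number of backup sets needed for a point-in-time
    restore: one full, one differential, plus one daily incremental
    for each day since the last differential (worst case is the day
    before the next differential runs)."""
    return 1 + 1 + (diff_interval_days - 1)

print(restore_chain_length(90))  # differential every 90 days -> 91 sets
print(restore_chain_length(7))   # weekly differential -> 8 sets
```

Dropping the differential interval from 90 days to a week cuts the
worst-case restore from 91 tapes to 8.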

I suggest running differentials at least weekly, just to lower the
amount of tapes needed for restores.

Jesper
-- 
Jesper



Re: [Bacula-users] bextract -- restore single occurrence of file

2009-10-17 Thread Jesper Krogh
 I fired up bextract  (standard bacula restore is not appropriate in this
 case) which restored the first occurrence of the file as expected. However,
 instead of halting after restoring the file, bextract continued to the end
 of the tape and found a second occurrence with the same filename which it
 duly restored replacing the originally restored file.

I suggest you open a bug-report/feature request on this. It seems
like a really nasty situation to get into if stuff really burns.

Jesper



Re: [Bacula-users] Using multiple drives

2009-10-10 Thread Jesper Krogh
tpcshadow wrote:
 If I run one job (say job A to pool A), the tape from pool A stays in
 drive 0. After that has finished, if I start another job (say job B
 to pool B), the library will remove the pool A tape from drive 0 and
 replace it with the pool B tape in drive 0.

I think that is just how it works. I suppose it is because the number
of people who really have spare drives is very small. In my setup I have
2 x LTO3 and I see the same scenario: drive 0 basically ends up
taking all incremental work and some differential + full work every
day, while drive 1 is only used during high load (weekends with
full/differential runs).

Without having looked into the code, I would suspect it to be a fairly
trivial patch. I suggest you look into it, or at least send a
feature request for the Projects-file for the project.

-- 
Jesper



Re: [Bacula-users] Better way to garbage collect postgresql database

2009-03-20 Thread Jesper Krogh
Hemant Shah wrote:
 This is a database question, but I figured some of the bacula users may have 
  come across this problem so I am posting it here.
 
 Every monday I run the following commands to check and garbage collect bacula 
 database:
 
 dbcheck command
 vacuumdb -q -d bacula -z -f

There is absolutely no reason to vacuum full unless your data-size 
is actually shrinking over time (over longer periods). A normal vacuum will 
make the space available for the next run; you are most likely running 
autovacuum anyway.

 reindexdb

Might make sense, but weekly? There is AFAIK a small amount of 
index-bloat collecting up over time in PG. But in general just yearly 
or monthly should really be sufficient.

 Usually I purge one or two backup volumes and the above commands run in less 
 than 20 minutes. 
 
 Before my monthly Full backup I delete a large amount of data from the
 database, as I delete one month's worth of Full and Incremental backups.
 When I run the above commands after the Full backup, the vacuumdb command
 takes 12 hours to run. Is there a faster/better way of doing it?

No, not really. VACUUM FULL is telling PG to reorder the data on-disk and 
free up the space for the OS; that is a hard task. But it is also not 
needed, since you are going to use the space again within the next 
week/month anyway, so an ordinary VACUUM is sufficient.

 My database is about 9GB.
 
 If I backup the database using pg_dump and then restore it, will it do the
 same thing as the vacuumdb and reindexdb commands?

Basically yes.
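The weekly routine from the original post could then shrink to something
like this (database name "bacula" assumed; the reindex and FULL steps
kept only as occasional options):

```
vacuumdb -z -d bacula      # plain VACUUM + ANALYZE; no exclusive lock
# reindexdb -d bacula      # only occasionally (monthly or yearly)
# vacuumdb -f -d bacula    # VACUUM FULL: only if the data really shrank
```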



[Bacula-users] Broken LTO tapes?

2009-02-01 Thread Jesper Krogh
Hi list

This is not really a bacula question.. but somehow really related.

How common is it to have a broken LTO-tape? I've (in like 3-4 years 
working with LTO) never seen this, but it looks like a broken tape.

Last night when bacula should recycle a tape it didn't succeed; the tape 
never got loaded successfully, and the mt -f /dev/nst0 status command 
just returns input/output error.

The problem is that this raises some doubt about all our shelf-tapes, 
so have I just been really unlucky, or could there be other reasons for
this behaviour?

j...@bacula:~$ sudo mtx -f /dev/changer load 102 1
Loading media from Storage Element 102 into drive 1...done
j...@bacula:~$ sudo dd if=/dev/nst1 of=/dev/nul
dd: opening `/dev/nst1': Input/output error
j...@bacula:~$ sudo dd if=/dev/zero of=/dev/nst1
dd: opening `/dev/nst1': Input/output error
j...@bacula:~$ sudo mt -f /dev/nst1 status
/dev/nst1: Input/output error

(Same drive .. just a functional tape)
j...@bacula:~$ sudo mtx -f /dev/changer unload 102 1
Unloading drive 1 into Storage Element 102...done
j...@bacula:~$ sudo mt -f /dev/nst1 status
SCSI 2 tape drive:
File number=-1, block number=-1, partition=0.
Tape block size 0 bytes. Density code 0x0 (default).
Soft error count since last status=0
General status bits on (5)
j...@bacula:~$ sudo mt -f /dev/nst1 status
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x44 (LTO-3).
Soft error count since last status=0
General status bits on (4101):
  BOT ONLINE IM_REP_EN

Now, this is pure luck that I didn't have to use this tape for anything; 
it was found during recycling of the incremental pool, so the actual tape 
should just have been overwritten anyway. But it makes me think a bit.

-- 
Jesper



Re: [Bacula-users] Broken LTO tapes?

2009-02-01 Thread Jesper Krogh
James Harper wrote:
 Hi list

 This is not really a bacula question.. but somehow really related.

 How common is it to have a broken LTO-tape? I've (in like 3-4 years
 working with LTO) never seen this, but it looks like a broken tape.

 
 We do field service repairs for HP hardware and I have seen one or two
 cases where the tape has somehow become entangled in the internal drive
 mechanism and has broken.
 
 Reading your email a bit more maybe your problem is just that the tape
 can't be read or written to anymore. If it's a HP tape drive, use the HP
 Library & Tape Tools, which can be freely downloaded from HP and can run all
 sorts of tests on the media and the drive. If LTT says the drive is
 failed then it is probably right.

Well, the drive works fine on other tapes.. so it is just this specific 
tape that has the problem.

Jesper
-- 
Jesper



Re: [Bacula-users] Mixed Drives

2009-01-23 Thread Jesper Krogh
Alan Brown wrote:
 On Wed, 21 Jan 2009, Ryan Novosielski wrote:
 
 Media type is completely arbitrary. If you want the new drive to be able
 to share media with the old drive, you should use the same media type.
 If not, you should have them be different. It would be best if you could
 make a supported media type list, to allow for backward compatible
 drives, but that's not possible at present.
 
 This is something I raised at least 2-3 years ago. There's been no
 apparent interest in solving the issue and likely won't be until one of
 the developers encounters the problem.
 
 Supported media lists would have to cover r/w and ro support:
 
 LTO drives are r/w compatible with the previous generation, but read only
 with the one before that.
 
 ie: LTO4 can R/W LTO3 tapes, but only read LTO2 and can't read LTO1 at all.
 
 (As far as I can tell these are _minimum_ specs for LTO. It's perfectly
 possible to exceed them and produce a LTO4 drive which can write LTO2 and
 read LTO1, but as far as I know no manufacturer has done that yet.

Sure about that? The marketing material I've seen shows only read-only
capability one generation back, and write capability one generation back
only if the tape was un-used before (never written to before).


-- 
Jesper



Re: [Bacula-users] LTO3 performance

2009-01-03 Thread Jesper Krogh
Jason A. Kates wrote:
 For a speed test /dev/zero isn't the best item to use as the hardware
 compression will show how good it can be.   I would test on files that
 aren't in the OS cache and will have representative level of
 compression.

I agree, but if the test with /dev/zero ends with 10MB/s then there is 
definitely something wrong in the system. So the speed test is indeed 
valid as a first-shot test of the drive only.

Jesper
-- 
Jesper



Re: [Bacula-users] Time for change

2008-12-18 Thread Jesper Krogh
Thanks for the elaborate reply. Just a few more curious questions.

Alan Brown wrote:
 On Thu, 18 Dec 2008, Jesper Krogh wrote:
 
 100-200Gb ram and systems capable of addressing that amount of memory are
 still far more expensive than a stack of flash drives, else I'd use them.
 But do you need to spool a complete tape? In order to avoid doing evil
 stuff to your tape drive, much less is sufficient.
 
 I'm multiplexing anything up to 20 jobs at a time. To ensure that small
 incremental and diff jobs are dumped in one hit and to ensure that full
 backups are laid in as large chunks as possible, this is the kindof size
 which is required.

Do you have that many clients, or is it actually larger volumes that 
have been split up into smaller filesets?

 My concern isn't just backup run time.
 So you'd like to spool a complete Job? What's your average job-size? (mine is
 less than 8GB)
 
 Full backups run 500Gb to 1Tb apiece, the average nightly incremental is
 about 80-150Gb - multiplied out by 90 filesystems.
 
 You get the idea.

Absolutely.

 Concerned about job run time, it's my impression that spool space only speeds
 up incremental/differential.
 
 It does at the moment. HOWEVER if spooling isn't used then jobs are
 interleaved on the tape at record time, resulting in massive shoeshining
 on restores and restore throughput rates measured in kb/Sec instead of
 25-40Mb/sec

Yes, I use spooling even for our +2TB volumes.

 Whats the time consuming part in this? Seeking on tapes? Neither SSD's or
 memory will change that. AFAIK the spooling area is only used when going TO
 tape, not FROM tape.
 
 Using large spool areas increases the size of the chunks dropped to tape
 and thus allows the tapes to stream for longer periods. ONE of my backups
 might span 5 LTO2s

I can have the same for LTO3, but that's due to a few large jobs running 
full backups concurrently.

 Can you give some numbers, so we have a feeling about the sizes you talk
 about?
 
 250Tb startup, 1Pb within 18 months, and growing past that. Filesizes
 measured in tens of Mb apiece.

Still over Gigabit network, or do you have better infrastructure?

-- 
Jesper



Re: [Bacula-users] Time for change

2008-12-17 Thread Jesper Krogh
Alan Brown wrote:
 On Tue, 9 Dec 2008, Brian Debelius wrote:
 
 John Drescher wrote:
 In linux, I find this to be completely wrong. I have 15TB of software
 raid 6 and the most load that it puts on the cpu is around 7% and
 these are raid arrays that net over 200MB/s writes on single core
 systems that are 3 or so years old.
 
 So what do you think a reasonable cpu for bacula would be?
 
 I'm running spooling on a 4 drive software raid0 quite happily on a 4Gb
 3GHz P4D machine. The limiting factors are disk head seek time(*) when
 running concurrent backups to 2 LTO2 drives and available SATA ports.
 Because of that I'm considering dropping in solid state disks.

I have still got to see a reasonably priced SSD disk that can deliver 
around 100MB/s both ways at the same time.

http://www.slashgear.com/samsung-64gb-ssd-performance-benchmarks-278717/

I have beefed up my director with a sufficient amount of memory and 
mounted part of it as a ramdisk for spooling. That doesn't impose any 
limitations on the 2 LTO3 drives attached.

And spooling doesn't need any form of persistence, so it's fine that it's 
gone after reboot.
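A sketch of wiring that up (paths and sizes are assumptions; tmpfs is
shown because, unlike ramfs, it accepts a size cap):

```
# /etc/fstab: RAM-backed spool area
tmpfs  /var/spool/bacula  tmpfs  size=32g,mode=0700  0  0

# bacula-sd.conf, inside the Device resource:
Spool Directory    = /var/spool/bacula
Maximum Spool Size = 30g
```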

-- 
Jesper



Re: [Bacula-users] Time for change

2008-12-17 Thread Jesper Krogh
Alan Brown wrote:
 Jesper Krogh wrote:
 I'm running spooling on a 4 drive software raid0 quite happily on a 4Gb
 3GHz P4D machine. The limiting factors are disk head seek time(*) when
 running concurrent backups to 2 LTO2 drives and available SATA ports.
 Because of that I'm considering dropping in solid state disks.
 
 I have still got to see a reasonably priced SSD disk that can deliver 
 around 100MB/s both ways at the same time.
   
 There aren't any mechanical disks which can do it either.
 
 Which is why I'm not trying to do that - replacing 4 RAID0 mechanical 
 disks with 4 SSDs will provide similar sustained throughput to the 
 mechanical RAID0, but provide _much_ better performance for anything 
 where the mechanical disks had head seeking involved - such as multiple 
 simultaneous input/output streams to LTO drives.
 
 http://www.slashgear.com/samsung-64gb-ssd-performance-benchmarks-278717/

   
 
 Make sure you compare apples with something remotely looking like 
 apples. The ONLY SSDs which are suitable for this kind of use are SLCs, 
 not MLCs.

Ok. I haven't spent enough time in that area.

 I have beefed up my director with sufficient amount of memory and 
 mounted it as a ramdisk for spooling. That doesn't impose any 
 limitations on the 2 LTO3 drives attached.
   
 How much do you regard as sufficient?
 
 100-200Gb ram and systems capable of addressing that amount of memory 
 are still far more expensive than a stack of flash drives, else I'd use 
 them.

But do you need to spool a complete tape? In order to avoid doing evil 
stuff to your tape drive, much less is sufficient.

 My concern isn't just backup run time.

So you'd like to spool a complete Job? What's your average job-size? (mine 
is less than 8GB.) If it's larger, we just need a period of despooling 
(I'd love to have concurrent spooling/despooling in bacula). Currently I 
use at most 32GB for spooling area, with a Job Concurrency of 4 and 2 
tape drives. When doing large backups (full+archive) I mostly have one
(or two) drives in action at the same time, while spooling to disk with 
2 (or 3) threads at the same time. The LTO3 drives far outperform our 
network speed (1gbit). Transfer to tape is in the range from 60MB/s to 
100MB/s (and I unfortunately have no idea why they spread that much).

In total numbers we're around 25TB to disk/month with monthly full and
daily incrementals.

Concerned about job run time, it's my impression that spool space only 
speeds up incremental/differential.

 Restore times are also important and having a tape read back 1Gb, then 
 seek, then pull back another 1Gb (or even 10Gb) is a significant 
 penalty over reading larger blocks when worst-case 75Tb+ restores are 
 considered (25-60 days on 2-drive LTO2, depending on the directory 
 structures being restored).

What's the time-consuming part in this? Seeking on tapes? Neither SSDs 
nor memory will change that. AFAIK the spooling area is only used when 
going TO tape, not FROM tape.

 And spooling doesn't need any form of persistence, so it's fine that it's 
 gone after reboot.
 Indeed.
 
 If it was practical I'd use ramdisks. Right now it's not. Apart from the 
 cost factor there is very little hardware which can address more than 
 128Gb of Dram. There are RAM arrays which are setup to operate as F/O 
 scsi devices, but these are currently silly money as they're marketed 
 at the world of high end, high cost databases.

Again, I assume we're talking about spooling space; then try to think 
about whether you need that much.

 In 12 months time that may change, Ram is always falling in price - but 
 Flash drive pricing is falling faster,  performance/durability is rising 
 at the same time and there isn't the same issue with massive address 
 ranges as it just looks like more disk, vs having to change out entire 
 servers at $20k a time if RAM limits are reached.
 
 I'm not just looking at the issue of my current setup. Projects are 
 already pencilled onsite which will increase storage demands by a factor 
 of 20 from the current size within 12 months and I have to try and be 
 ready to back that data up.

Can you give some numbers, so we have a feeling about the sizes you talk 
about?


-- 
Jesper



Re: [Bacula-users] [Bacula-devel] Restore problem (bug ?)

2008-12-11 Thread Jesper Krogh
Marco Schaerfke wrote:
 I have the following setup:
 bacula 2.4.3: dir and sd runs on RHEL 4 (x86_64), client is CentOS5.
 my storage device is an LTO3 tape drive with 11 slots
 
 I started the restore job, but unfortunately I forgot to change the 
 tapes in the library.
 After a while I noticed that a status sd in bconsole gives BLOCKED ... 
 so fare so good.
 I tried to unmount the tape, but it was already unmounted.
 I changed the tapes and run in the bconsole update slots and a mount 
 command
 
 While the right tapes are inserted bacula seams to ignore the new 
 position of the tapes, because the system load a tape from the wrong 
 position. I checked the mysql entries after the slot update procedure 
 and everything seams to be OK, so I aspect that the slot information for 
 a running job is not updated.
 
 Any comments ?

Like this one?
http://bugs.bacula.org/view.php?id=1194

Jesper

-- 
Jesper



Re: [Bacula-users] Please mount volume

2008-12-08 Thread Jesper Krogh
Kshatriya wrote:
 Hello group,
 
 Lately I have a rather annoying problem. I'm using bacula already for 
 quite a long time with my AIT4 Changer without problems. I don't know 
 exactly since when (maybe during upgrades), but lately backups are often 
 failing and I don't know why. Bacula asks to mount a volume, for example :
 
 06-Dec 03:21 bacula-sd JobId 16918: Please mount Volume A082Y2 or label 
 a new
 one for:
   Job:  OblTape.2008-12-05_21.00.29
   Storage:  AIT4-1 (/dev/nst0)
   Pool: FridayPool-Tape
   Media type:   AIT4
 
 However, update slots has been run before, the volume is in the changer, 
 but it seems like Bacula doesn't run the mtx-changer script for one 
 reason or so.
 
 The problem is that this behaviour is not consequent. Sometimes the backup 
 just works. When it doesn't work (and asks for a tape), then I try to type 
 'mount', but then I just get the message 'OK Mount' but nothing happens. 
 The same with umount.
 
 I've tried running btape with the 'test' and the 'fill' command (complete 
 test with 2 tapes) again, which works without problems and happily changes 
 tapes.
 
 Currently I'm using Bacula 2.4.3 with all the latest patches (incl. 
 slots-patch).
 
 This is my relevant bacula-sd.conf information for the autochanger:
 
 Autochanger {
 Name = Autochanger
 Device = AIT4-1
 Changer Command = /usr/local/bacula/etc/mtx-changer %c %o %S %a %d
 Changer Device = /dev/sg0
 }
 
 Device {
 Name = AIT4-1
 Drive Index = 0
 Media Type = AIT4
 Archive Device = /dev/nst0
 AutomaticMount = yes;   # when device opened, read it
 AlwaysOpen = no;
 RemovableMedia = yes;
 RandomAccess = no;
 AutoChanger = yes;
 Alert Command = sh -c 'tapeinfo -f /dev/sg0 |grep TapeAlert|cat'
 Spool Directory = /mnt/storage/spool
 }
 
 I don't see any errors in the working log.
 
 If anybody has an idea how I can try to find out further where the problem 
 lies, or what I can do to debug things furher, please let me know.

It seems an awful lot like this bug I've reported; I just happened to 
notice the problem during a restore:

http://bugs.bacula.org/view.php?id=1194

I haven't had time to dig further into it, but it seems the storage 
daemon actually caches the slot number from the catalog, making 
'update slots' ineffective even though it ran correctly and updated the slots.

-- 
Jesper



Re: [Bacula-users] Backing up a Scalix mail server

2008-11-30 Thread Jesper Krogh
Kevin Keane wrote:
 Is there another way to accomplish this?

snapshots.

Make sure your installation resides on an LVM volume that has sufficient
space for a snapshot. Then create the snapshot in a RunBefore script and 
remove it in a RunAfter script.

This should give you no downtime at all. The state of a restore of 
your installation would be exactly as if you had booted your system 
after a hard power-down.

Of course, make sure to actually test that it works for you :-)
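A minimal sketch of how that can be wired into the Director's Job resource. The volume group, LV name, snapshot size and mount point here are all assumptions for illustration, and the exact quoting of the RunBeforeJob commands may need adjusting for your setup:

```
# bacula-dir.conf Job fragment -- VG/LV names, size and mount point are assumptions
Job {
  Name = "scalix-snapshot-backup"
  # ... usual Client, Schedule, Storage, Pool directives; the FileSet
  # should point at the snapshot mount point (/mnt/scalix-snap here) ...
  RunBeforeJob = "sh -c 'lvcreate -s -L 5G -n scalix-snap /dev/vg0/scalix && mount -o ro /dev/vg0/scalix-snap /mnt/scalix-snap'"
  RunAfterJob  = "sh -c 'umount /mnt/scalix-snap && lvremove -f /dev/vg0/scalix-snap'"
}
```

The snapshot only needs enough space to absorb the blocks that change while the backup runs, not a full copy of the volume.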

Jesper
-- 
Jesper




Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Jesper Krogh
Boris Kunstleben onOffice Software GmbH wrote:
 Hi,
 
 i am doing exactly that since last Thursday. 
 I have about 1.6TB in Maildirs and an huge number of small files. I have to 
 say it is awfull slow. Backing up a directory with about 190GB of Maildirs 
 took Elapsed time: 1 day 14 hours 49 mins 34 secs.
 On the other hand i have a server with Documents and images (about 700GB) 
 took much less time.
 All the Servers are virtuall Enviroments (Virtuozzo).

Can you give us the timing of a tar to /dev/null of the fileset?

time tar cf /dev/null /path/to/maildir

Then we have a feeling for the actual read time of the files from the 
filesystem.

-- 
Jesper




Re: [Bacula-users] Backup of in one directory with 800.000 files

2008-11-27 Thread Jesper Krogh
Tobias Bartel wrote:
 Hello,
 
 i am tasked to set up daily full backups of our entire fax communication
 and they are all stored in one single director ;). There are about
 800.000 files in that directory what makes accessing that directory
 extremely slow. The target device is a LTO3 tape drive with an 8 slots
 changer.

What's the average file size? It may be that you're simply hitting the 
filesystem's performance limits. Try measuring the read speed with tar:

time tar cf /dev/null /path/to/files

Alternatively:

tar cf - /path/to/file | pv > /dev/null

pv is a pipe viewer that measures the throughput of the stream passing 
through it.

I've tried this with some of the filesystems I have; they can go all the 
way down to 2-5MB/s when there is a huge number of really small files.

That gives you a rough feeling for the time spent just reading the 
files off disk. (If you spool data to disk before tape, then you should 
add in some time for that as well.)

 With my current configuration Bacula needs ~48h to make a complete
 backup which kinda conflicts with the requirement of doing dailys.
 
 Does anybody have any suggestions on how i could speed things up? 

Try to find out where the bottleneck is in your system. It may be the 
catalog that's too slow, or it may be that you should disable spooling.

 PS: I already talked to my boss and our developers and they will change
 the system so the faxes get stored in subdirectories. But changing that
 doesn't have a very high priority and got scheduled for next summer.

Based on the above investigations, it may be that they should do it sooner 
rather than later.

Also read the thread about the large Maildir backup; it's basically the 
same issue.

-- 
Jesper



Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Jesper Krogh
Kjetil Torgrim Homme wrote:
 Jesper Krogh [EMAIL PROTECTED] writes:
 
 Can you give us the time for doing a tar to /dev/null of the fileset.

 time tar cf /dev/null /path/to/maildir

 Then we have a feeling about the actual read time for the file of
 the filesystem.
 
 if you're using GNU tar, it will *not* read the files if you dump to
 /dev/null.  it will simply stat the files as necessary (you can check
 this with strace if you like.)

Thanks. The numbers actually surprised me when I posted them. I usually do:

tar cf - /path | pv > /dev/null

pv prints the speed of the stream, so I don't have to wait for it to 
complete to get a feeling for the throughput.

 use /dev/zero to avoid this optimisation.

Thanks, nice tip.
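Both effects are easy to see with a small scratch directory (the paths here are throwaway demo locations, not anything from the thread):

```shell
# Scratch demo: GNU tar special-cases an archive named /dev/null and only
# stat()s the files, while any other sink forces it to actually read them.
mkdir -p /tmp/tartest
dd if=/dev/zero of=/tmp/tartest/sample bs=1M count=8 2>/dev/null

time tar cf /dev/null /tmp/tartest   # near-instant: files are not read
time tar cf /dev/zero /tmp/tartest   # reads every byte, so this reflects disk speed
```

The second timing is the one that approximates what a backup run will see.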

-- 
Jesper



Re: [Bacula-users] Backup of in one directory with 800.000 files

2008-11-27 Thread Jesper Krogh
Tobias Bartel wrote:
 Even with 800,000 files, that sounds very slow.  How much data is
 involved, how is it stored and how fast is your database server?
 
 It's about 70GB of data, stored on a Raid5 (3Ware controller).
 
 The database is a SQLite one, on the same machine but on a Software 
 Raid 1.
 
 The backup device is an LTO3 connected via SCSI
 
 OS is a Debian stable.
 
 
 I already thought about moving the Database to MySQL but there is
 already a MySQL Server on the same box, it is a slave for our MySQL
 master and used for hourly Backups of our database (Stop the
 replication, do the backups and start the replication again).
 I don't really like the idea of adding a DB to the Slave that isn't on
 the master, nor do i like the idea of hacking up some custom MySQL
 install that runs parallel caus that will cost me with every future
 update.

Perhaps a Postgres on the same host?

 To be honest, I didn't expect SQLite to be the bottleneck; it
 just can't be that slow. What made me think it's the number of files
 is that when I do an ls in that directory it takes ~15 min before I see
 any output.

That's more likely to be ls playing tricks on you: plain ls reads and 
sorts the entire directory before printing anything, while ls -f skips 
the sorting and streams entries as it reads them. Try:

ls -f | head (or just ls -f)

-- 
Jesper



Re: [Bacula-users] Large maildir backup

2008-11-27 Thread Jesper Krogh
Daniel Betz wrote:
 Hi!
 
 I have the same problem with large amount of files on one filesystem ( 
 Maildir ).
 Now i have 2 concurrent jobs running and the time for the backups need half 
 the time.
 I havent tested 4 concurrent jobs jet .. :-)


That would be a really nice feature to have directly in the file daemon, 
if that is the case (multiple threads to speed up a single job).

Jesper
-- 
Jesper



Re: [Bacula-users] How to set up large database backup

2008-11-25 Thread Jesper Krogh
Dan Langille wrote:
 There is no way to dump interactively?  I'm a PostgreSQL fan and  
 creating a backup doesn't add overhead.

And using PITR, PostgreSQL even allows you to fetch the raw database 
files out from underneath a running (and active, with inserts/updates) 
PostgreSQL instance. This is just plain awfully cool.

We're backing up PostgreSQL databases of 300GB+ with this
... but it is quite far from the original question :-)
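For context, a minimal sketch of the WAL-archiving setup that PITR builds on. The archive path is illustrative, and parameter names have shifted somewhat between PostgreSQL major versions:

```
# postgresql.conf -- WAL archiving sketch (the archive path is illustrative)
archive_mode    = on
archive_command = 'cp %p /backup/wal/%f'
```

With the WAL stream archived, a file-level copy of the data directory taken between pg_start_backup() and pg_stop_backup() can be restored and rolled forward, which is what makes it safe to grab the raw files from under a live instance.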

-- 
Jesper



Re: [Bacula-users] [Bacula-devel] Bacula BETA version 2.5.16 released

2008-10-24 Thread Jesper Krogh
Kern Sibbald wrote:
 Hello,
 
 We have released Bacula BETA version 2.5.16 source tar file and Win32 
 binaries 
 to the bacula-beta and win32-beta sections of the Bacula Source Forge 
 download area.
 
 This is our first beta version of the new development code, which we hope to 
 release around the end of the year.  For the moment, the documentation is 
 still being worked on so in some areas it is a bit sketch and not all the new 
 features are yet documented.  You can find the documentation at:
 
 http://www.bacula.org/manuals/en/concepts/concepts/New_Features.html

Quick question on the Job Control feature described above.

http://www.bacula.org/manuals/en/concepts/concepts/New_Features.html#SECTION0054

The phrase: "A duplicate job in the sense we use it here means a second 
or subsequent job with the same name starts."

I'd expect it to be the same name and level?


-- 
Jesper



Re: [Bacula-users] [Bacula-devel] Bacula BETA version 2.5.16 released

2008-10-24 Thread Jesper Krogh
Kern Sibbald wrote:
 On Friday 24 October 2008 09:05:27 Jesper Krogh wrote:
 Kern Sibbald wrote:
 Hello,

 We have released Bacula BETA version 2.5.16 source tar file and Win32
 binaries to the bacula-beta and win32-beta sections of the Bacula Source
 Forge download area.

 This is our first beta version of the new development code, which we hope
 to release around the end of the year.  For the moment, the documentation
 is still being worked on so in some areas it is a bit sketch and not all
 the new features are yet documented.  You can find the documentation at:

 http://www.bacula.org/manuals/en/concepts/concepts/New_Features.html
 Quick question on the Job Control feature described above.

 http://www.bacula.org/manuals/en/concepts/concepts/New_Features.html#SECTIO
 N0054

 The phrase: A duplicate job in the sense we use it here means a second
 or subsequent job with the same name starts.

 I'd expect it to be the same name and level?
 
 No, it is simply the same name.  There are directives that deal with the 
 level 
 but the basic implementation is to be able to control *any* duplicate jobs of 
 the same Job name.

OK. That means the default behaviour of Bacula will change, so we
risk getting a (later-scheduled) Full job cancelled in favour of a
currently running Incremental. I suggest changing the defaults so Bacula's
behaviour is preserved across upgrades.

Jesper
-- 
Jesper



Re: [Bacula-users] [Bacula-devel] Feature request: List inChanger flag when doing restore.

2008-10-18 Thread Jesper Krogh
Kern Sibbald wrote:
 Hello,
 
 Well, in the beginning of a development cycle, smaller changes are sometimes 
 implemented rapidly by the developers, but at the end of a development cycle, 
 as is currently the case, a request could get lost if it is not officially 
 submitted. 

I know. I just needed to ship it, otherwise I would have forgotten it.

-- 
Jesper



[Bacula-users] Feature request: List inChanger flag when doing restore.

2008-10-17 Thread Jesper Krogh
I don't know if an official feature request is needed for small stuff 
like this.

Item n: List inChanger flag when doing restore.
   Origin: Jesper Krogh[EMAIL PROTECTED]
   Date:   17 oct. 2008
   Status:

   What:
When doing a restore, the restore selection dialog ends by printing 
something like this:
The job will require the following
   Volume(s)    Storage(s)    SD Device(s)
   ==========================================
   000741L3     LTO-4         LTO3
   000866L3     LTO-4         LTO3
   000765L3     LTO-4         LTO3
   000764L3     LTO-4         LTO3
   000756L3     LTO-4         LTO3
   001759L3     LTO-4         LTO3
   001763L3     LTO-4         LTO3
   001762L3     LTO-4         LTO3
   001767L3     LTO-4         LTO3

When having an autochanger, it would be really nice to have an inChanger 
column, so the operator knows whether this restore job will stop and wait 
for operator intervention. This could be done simply by selecting the 
inChanger flag from the catalog and printing it in a separate column.

   Why: This would help get large restores through by minimizing the 
time spent waiting for an operator to drop by and change tapes in the 
library.



NB: The above is a real restore.



Re: [Bacula-users] Media is full very early

2008-02-16 Thread Jesper Krogh
Juan Antonio Vera (Internet) wrote:
 Thanks for you response.
 
 I saw in my /var/log/messages a scsi error at the same time
 that bacula marked my tape like full (I think in cable problem
 or scsi driver bug).

Just to let you know: I have had a similar problem recently. It was due 
to having 2 LTO-3 drives on the same SCSI channel.

The problem vanished when moving one of the drives to a second channel.

-- 
Jesper



Re: [Bacula-users] Archive, bscan etc?

2008-02-15 Thread Jesper Krogh

 On Fri, 15 Feb 2008, Jesper Krogh wrote:


 No need to delete the volumes from the catalog... just set their
 status to something unusable... or, preferrably, disable them.

 Am I sure that bacula wont use a disabled Full-run as basis for a
 subsequent Incremental run?

 It will, but just do another full backup after disabling them.

That will work, but I have some volumes of around 2TB, and scheduling 2
subsequent full runs of those will definitely postpone the regular
incremental runs quite a lot.

Jesper
-- 
Jesper Krogh




Re: [Bacula-users] Archive, bscan etc?

2008-02-15 Thread Jesper Krogh
Jesper Krogh wrote:
 On Fri, 15 Feb 2008, Jesper Krogh wrote:


 No need to delete the volumes from the catalog... just set their
 status to something unusable... or, preferrably, disable them.
 Am I sure that bacula wont use a disabled Full-run as basis for a
 subsequent Incremental run?
 It will, but just do another full backup after disabling them.
 
 That will work.. but I have some volumes of like 2TB and scheduling 2
 subsequent full runs of those will definately prospone the regular
 Incremental-run quite a lot.

Actually, this will guarantee that I lose some files. Changes that 
appeared in the interval between the start of the archive run and the 
subsequent full run will only exist on the archive tapes.

So unless the filesystem has no changes, or I could snapshot it, I 
would be sure to have some files only on the archived tapes.

-- 
Jesper



[Bacula-users] Archive, bscan etc?

2008-02-14 Thread Jesper Krogh

Hi.

We'd like to send some tapes to an external archive every now and then,
and make sure that on a subsequent restore Bacula would consider these
tapes gone, since the amount of work involved in getting them back makes
that worthwhile only in emergency situations.

So I can just delete the volumes from Bacula and move on.

But if by some chance I would like to get a file back, I'd like to be
able to tell which of the sent volumes I should bscan to get it
working.

Is there a script somewhere that takes a volume name and gives me a tar
tvf-like listing of the complete volume contents?

Or is there a different solution to this problem that I have overlooked?

Jesper

-- 
Jesper Krogh




Re: [Bacula-users] Archive, bscan etc?

2008-02-14 Thread Jesper Krogh
 Is there a script somewhere that takes a volumename and give me a
 tar
 tvf-like output of the complete volume content?

 'bls' and 'bextract' will let you work directly with volumes.

Sorry, I should have mentioned: it was my intention just to extract it
from the DB before removing the volume from there. That wouldn't require
me to manually load the tape into the drive again.


Jesper

-- 
Jesper Krogh




Re: [Bacula-users] Archive, bscan etc?

2008-02-14 Thread Jesper Krogh
Arno Lehmann wrote:
 14.02.2008 10:53, Jesper Krogh wrote:
 We'd like to send some tapes every now and then to an external archive.
 And make sure that doing a subsequent restore, bacula would think these
 tapes are gone, since the amount of work getting them back is generally
 only for emergency situations.

 So I can just delete the Volumes from bacula and go on.
 
 No need to delete the volumes from the catalog... just set their 
 status to something unusable... or, preferrably, disable them.

Can I be sure that Bacula won't use a disabled full run as the basis for 
a subsequent incremental run?

 But if I by some chanche would like to get a file back, I'd like to be
 able to tell which of the send volumes that I should bscan to get it
 working.

 Is there a script somewhere that takes a volumename and give me a tar
 tvf-like output of the complete volume content?
 
 Some of the queries will help you there. Try the query command in 
 bconsole. You can redirect the output to a file, too.

Nice, I wasn't aware.

 Og is there a different solution for this problem that I have overlooked?
 
 I like to print the volume's contents (jobs listing, not each file in 
 my case) and store this printout with the tape in question. Of course, 
 having a CD with the source code of your current Bacula version there, 
 too, is also nice :-)

I'll have to remember that.

-- 
Jesper



Re: [Bacula-users] Block checksum mismatch

2008-02-13 Thread Jesper Krogh
Hi.

This is really old; I just wanted to let you know that the problem was 
solved by moving the second LTO-3 tape drive onto its own SCSI channel. 
The error happened whenever both devices were being written to at the 
same time.

Jesper


 Hi


  We've extended our Quantum PX502 with a PX506, so we have 2 drives in one
  LTO-3 (misnamed as LTO-4) changer.


  It recycles the volume but then fails to write to it? How can that happen?


 05-Apr 10:22 bacula-sd: Recycled volume 000727L3 on device LTO-4-0
 (/dev/nst0), all previous data lost.
 05-Apr 10:22 bacula-sd: Spooling data ...
 mascot-fd:  Disallowed filesystem. Will not descend from / into
 /var/lock
 mascot-fd:  Disallowed filesystem. Will not descend from / into
 /var/run
 mascot-fd:  Disallowed filesystem. Will not descend from / into /dev
 mascot-fd:  Disallowed filesystem. Will not descend from / into /sys
 05-Apr 10:22 bacula-sd: Job write elapsed time = 00:00:13, Transfer rate
 = 95.99 M bytes/second
 05-Apr 10:22 bacula-sd: Committing spooled data to Volume 000727L3.
 Despooling 1,250,286,800 bytes ...
 05-Apr 10:22 bacula-sd: Mascot_Daily.2007-04-05_10.13.01 Error:
 block.c:569 Write error at 0:9272 on device LTO-4-0 (/dev/nst0).
 ERR=Input/output error.
 05-Apr 10:22 bacula-sd: Mascot_Daily.2007-04-05_10.13.01 Error:
 block.c:317 Volume data error at 0:4294967295!
 Block checksum mismatch in block=9270 len=64512: calc=ebac9a9f
 blk=cd6990a4 05-Apr 10:22 bacula-sd: Mascot_Daily.2007-04-05_10.13.01
 Error: Re-read
 last block at EOT failed. ERR=block.c:317 Volume data error at
 0:4294967295!
 Block checksum mismatch in block=9270 len=64512: calc=ebac9a9f
 blk=cd6990a4 05-Apr 10:22 bacula-sd: End of medium on Volume 000727L3
 Bytes=598,155,264 Blocks=9,271 at 05-Apr-2007 10:22.


 --
 Jesper Krogh, [EMAIL PROTECTED]



 -
  Take Surveys. Earn Cash. Influence the Future of IT
 Join SourceForge.net's Techsay panel and you'll get the chance to share
 your opinions on IT  business topics through brief surveys-and earn cash
 http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
  ___
 Bacula-users mailing list
 Bacula-users@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bacula-users




-- 
Jesper Krogh




[Bacula-users] Spooling / Misconfiguration.

2008-02-12 Thread Jesper Krogh
Hi.

I'm running 2 concurrent jobs, and that works without any problems (after
getting the 2 LTO3 drives onto their own controllers).

Is it by design or a misconfiguration that the two concurrent jobs hit
the spool-data size limit at the same time? It would be nice if the spool
limit were per job or something. Now, even though I've got spooling, I'll
actually end up writing the 2 jobs to the same tape at the same time (in
a situation where the jobs go to the same pool).

12-Feb 09:49 bacula-sd JobId 13174: User specified spool size reached.
12-Feb 09:49 bacula-sd JobId 13181: User specified spool size reached.
12-Feb 09:49 bacula-sd JobId 13181: Writing spooled data to Volume.
Despooling 18,198,928,610 bytes ...
12-Feb 09:49 bacula-sd JobId 13174: Writing spooled data to Volume.
Despooling 281,801,145,301 bytes ...

Thanks.

-- 
Jesper Krogh




Re: [Bacula-users] Exabyte 224 - Additional magazine not properly recognized

2008-02-12 Thread Jesper Krogh

 Do you see all 24 slots in the archive console? I believe there is an
 option somewhere to enable the other magazine?

 Also have you rebooted since adding the magazine?

Hi.

Our Quantum system (PX502+PX506) requires that you call mtx -f
/dev/changer status several times to get the correct result. Although
this seems like a bug in the changer, you may have a similar problem?

Jesper

-- 
Jesper Krogh




Re: [Bacula-users] Spooling / Misconfiguration.

2008-02-12 Thread Jesper Krogh
John Drescher wrote:
 Can I make the storagedaemon reload it's configuration without
 restarting like the Director?

 Yes. As long as it is not in use you can restart the sd with the
 director running. There is also a reload command in the console that
 reloads the config, but I am not sure it does that for the bacula-sd.

I knew that, but currently it is catching up on some TB-sized volumes, so 
I'd prefer not to disturb it :-)

So I'd like to get these config changes in without interrupting the 
currently running jobs.

Jesper
-- 
Jesper



Re: [Bacula-users] Spooling / Misconfiguration.

2008-02-12 Thread Jesper Krogh
Arno Lehmann wrote:
 It would be nice if the Spool-limit
 was pr. job or something.
 
 It's in the manual :-)
 
 At 
 http://www.bacula.org/manuals/en/install/install/Storage_Daemon_Configuratio.html#DeviceResource
  
 there is Maximum Job Spool Size = bytes.

Excellent. I had somehow got the idea that it should be configured in 
the Job resource in the Director, since the limit is something 
connected to the job. This is absolutely what I need.
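A sketch of what that looks like in the storage daemon's Device resource. The device name echoes configs posted earlier in these threads, but the sizes and spool path are illustrative:

```
# bacula-sd.conf Device fragment -- sizes and paths are illustrative
Device {
  Name = LTO-4-0
  # ... Archive Device, Media Type, autochanger directives as before ...
  Spool Directory = /mnt/storage/spool
  Maximum Spool Size = 300 GB        # ceiling shared by all jobs on the device
  Maximum Job Spool Size = 150 GB    # per-job cap, so concurrent jobs despool at different times
}
```

With the per-job cap below half the shared ceiling, two concurrent jobs are less likely to hit their limits simultaneously and despool to the same tape at once.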

Can I make the storage daemon reload its configuration without 
restarting, like the Director can?

Jesper
-- 
Jesper



[Bacula-users] Knocking out cancelled job from the director?

2008-01-25 Thread Jesper Krogh

Hi.

I have a job sitting in the Director as cancelled; it is occupying one
of the two concurrent jobs the Director is allowed to run, but it is not
present in either the storage daemon or the corresponding file daemon.

It has been hanging for about 1.5 hours now.

... history ...
The job was hanging in the file daemon due to a stale NFS handle; I
cancelled the job from the Director and rebooted the server.



Jesper
-- 
Jesper Krogh


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Knocking out cancelled job from the director?

2008-01-25 Thread Jesper Krogh
 I have a job sitting in the director as cancelled, it is occupying one
 of the two concurrent jobs the director is allowed to run but is not
 present in either the storage daemon or the corresponding file-daemon.

 It has been hanging for about 1.5 hours now.

And then it came through by itself:
25-Jan 07:57 bacula-sd JobId 12892: Job Genome_Daily.2008-01-25_01.06.21
marked to be canceled.
25-Jan 08:10 bacula-sd JobId 12892: Job Genome_Daily.2008-01-25_01.06.21
marked to be canceled.
25-Jan 08:10 bacula-sd JobId 12892: Fatal error: append.c:159 Error
reading data header from FD. ERR=Connection reset by peer
25-Jan 08:10 bacula-sd JobId 12892: Job write elapsed time = 00:21:35,
Transfer rate = 1.044 M bytes/second
25-Jan 09:49 bacula-dir JobId 12892: Fatal error: Network error with FD
during Backup: ERR=Connection reset by peer
25-Jan 09:49 bacula-dir JobId 12892: Fatal error: No Job status returned
from FD

Jesper
-- 
Jesper Krogh


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Autochangers, Pools, and Carousels Oh My!

2008-01-24 Thread Jesper Krogh
Justin Maness wrote:
 To Who It May Concern,
 
   I've recently hit a problem that I thought shouldn't be happening. I 
 have two autochangers with media existing in two common pools. At HQ 
 our director is trying to issue load commands to our library to load 
 tapes, tapes which only exist offsite in the other, identical, library. 
 The output error is very weird when our library is asked to load a tape 
 that isn't there. Now, I don't _see_ a facility by which it knows where 
 the tapes are beyond the fact that they are 'InChanger'. I hope I am 
 missing something rather simple, because I've contemplated creating 
 independent pools at each location to work around this problem, but I 
 do not particularly want to micromanage twice the number of pools per 
 autochanger. If anyone has set up anything similar or might know 
 something about this behaviour I would be grateful to hear it.

I've had similar concerns about how to tell Bacula that a tape is beyond 
reach without deleting it from the database. For me it is the archived 
tapes: if I should ever need them, it would be nice not to be forced to 
bscan them first, but in all other circumstances they should be regarded 
as gone.

-- 
Jesper


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Suggestion for more output in report.

2008-01-22 Thread Jesper Krogh
It would be nice to have the transfer rate for the individual transfers
as well. Currently I get a message like the one below for the transfer
from client disk to spool disk, but it is an average over 51 hours which
also includes despooling to tape interleaved with the spooled blocks.

atlas-fd JobId 12787:  Disallowed filesystem. Will not descend from /
into /net
22-Jan 07:54 bacula-sd JobId 12787: Job write elapsed time = 51:35:32,
Transfer rate = 12.89 M bytes/second

Jesper
-- 
Jesper Krogh


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Configuring SpoolData?

2008-01-20 Thread Jesper Krogh
Arno Lehmann wrote:
 Well, but your setup doesn't consider upgraded jobs. Imagine you add a 
 client today. The Run = Level=Incremental Pool=Daily SpoolData=Yes 
 mon-tue thu-sat at 2:10 line will match and initiate a backup with 
 spooling. This backup will then be upgraded to a Full one (and might 
 even go to the Daily pool).

I'm not sure that I explained myself clearly enough in the first 
message, but this (the upgraded jobs) was my actual problem. Yes, we do 
use an autochanger/tape, so different storage devices won't help here.
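For reference, a Schedule along the lines of the quoted Run directive might look like this (pool names and times illustrative); note that a job upgraded from Incremental to Full still runs under the Run line that matched, so it keeps that line's SpoolData setting:

```
# bacula-dir.conf -- Schedule resource (sketch, names illustrative)
Schedule {
  Name = WeeklyCycle
  Run = Level=Full        Pool=Monthly SpoolData=No  1st sun at 2:10
  Run = Level=Incremental Pool=Daily   SpoolData=Yes mon-tue thu-sat at 2:10
}
```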

Jesper

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Configuring SpoolData?

2008-01-19 Thread Jesper Krogh
Hi.

I'd like to configure this simple policy:

Bacula should use SpoolData = No when it is doing a Full backup (even 
one automatically upgraded from Incremental); otherwise it should use 
SpoolData = Yes.

Is that possible?

Jesper

-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Unblock device?

2008-01-18 Thread Jesper Krogh

Hi,

my jobs have stopped with a blocked device; how do I make them start again?

status 2

Device status:
Autochanger LTO-4 with devices:
   LTO-4-0 (/dev/nst1)
Device LTO-4-0 (/dev/nst1) is not open.
Device is BLOCKED. User unmounted during wait for media/mount.
Drive 1 status unknown.


*mount drive=1
Automatically selected Catalog: MyCatalog
Using Catalog MyCatalog
Automatically selected Storage: LTO-4
3301 Issuing autochanger loaded? drive 1 command.
3302 Autochanger loaded? drive 1, result: nothing loaded.
3901 open device failed: ERR=dev.c:433 Unable to open device LTO-4-0
(/dev/nst1): ERR=No medium found

*
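For anyone hitting the same state: since the autochanger reports nothing loaded, the drive first needs a volume before mount can succeed. One plausible console sequence (the slot number is purely illustrative):

```
* update slots storage=LTO-4
* mount storage=LTO-4 drive=1 slot=3
```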


-- 
Jesper Krogh


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Block checksum mismatch

2007-04-06 Thread Jesper Krogh
Hi

We've extended our Quantum PX502 with a PX506, so we have 2 drives in
one LTO-3 (misnamed as LTO-4) changer.

It recycles the volume but then fails to write to it? How can that happen?

05-Apr 10:22 bacula-sd: Recycled volume 000727L3 on device LTO-4-0
(/dev/nst0), all previous data lost.
05-Apr 10:22 bacula-sd: Spooling data ...
mascot-fd:  Disallowed filesystem. Will not descend from / into
/var/lock
mascot-fd:  Disallowed filesystem. Will not descend from / into /var/run
mascot-fd:  Disallowed filesystem. Will not descend from / into /dev
mascot-fd:  Disallowed filesystem. Will not descend from / into /sys
05-Apr 10:22 bacula-sd: Job write elapsed time = 00:00:13, Transfer rate
= 95.99 M bytes/second
05-Apr 10:22 bacula-sd: Committing spooled data to Volume 000727L3.
Despooling 1,250,286,800 bytes ...
05-Apr 10:22 bacula-sd: Mascot_Daily.2007-04-05_10.13.01 Error:
block.c:569 Write error at 0:9272 on device LTO-4-0 (/dev/nst0).
ERR=Input/output error.
05-Apr 10:22 bacula-sd: Mascot_Daily.2007-04-05_10.13.01 Error:
block.c:317 Volume data error at 0:4294967295!
Block checksum mismatch in block=9270 len=64512: calc=ebac9a9f blk=cd6990a4
05-Apr 10:22 bacula-sd: Mascot_Daily.2007-04-05_10.13.01 Error: Re-read
last block at EOT failed. ERR=block.c:317 Volume data error at 0:4294967295!
Block checksum mismatch in block=9270 len=64512: calc=ebac9a9f blk=cd6990a4
05-Apr 10:22 bacula-sd: End of medium on Volume 000727L3
Bytes=598,155,264 Blocks=9,271 at 05-Apr-2007 10:22.

-- 
Jesper Krogh, [EMAIL PROTECTED]


-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Speed of backup

2007-02-22 Thread Jesper Krogh
Alan Brown wrote:
 On Wed, 14 Feb 2007, Jesper Krogh wrote:
 
 The attached Tape is an LTO-3(Quantum PX506) with has a reported rate
 at 80
 MB/s (I havent tested this). The network is a gigabit network, which I
 can
 put around 600 mbit/s through using nc in both ends on some junk-files.
 
 Is that native or compressable speed?
 
 LTO3 are 400Gb native or 800Gb compressed - where marketing assumes
 2:1 compression ratios.
 
 Have you tried testing using btape's 'fill' command?
 
  If you're spooling, you should expect throughputs to be considerably
 slower than non-spooling for streaming data as there's double handling
 of the files.

I'm spooling, and that actually gives me this information:
22-Feb 17:41 bacula-sd: Despooling elapsed time = 03:09:40, Transfer
rate = 52.72 M bytes/second


So it isn't the speed of the tape drive that sets the limit.

And:
22-Feb 06:24 bacula-sd: Spooling data again ...
22-Feb 14:30 bacula-sd: User specified spool size reached.

So spooling 600 GB takes around 8 hours, i.e. roughly 166 Mbit/s.

It is the difference up to the roughly 600 Mbit/s the network can do
(for network/file transfer) that I'm missing.
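A quick sanity check of that arithmetic (assuming decimal units, i.e. 600 GB = 600*10^9 bytes):

```python
# Back-of-envelope check of the spooling rate quoted above.
spooled_bytes = 600e9          # roughly 600 GB spooled (decimal units assumed)
elapsed_s = 8 * 3600           # roughly 8 hours (06:24 -> 14:30)

mb_per_s = spooled_bytes / elapsed_s / 1e6
mbit_per_s = mb_per_s * 8
print(f"{mb_per_s:.1f} MB/s = {mbit_per_s:.0f} Mbit/s")
```

which lands at the ~166 Mbit/s quoted above, against the ~600 Mbit/s the raw network can do.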

Jesper
-- 
Jesper Krogh, [EMAIL PROTECTED]


-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Speed of backup

2007-02-14 Thread Jesper Krogh
Hi.

We've just upgraded our Bacula installation from 1.36 to 2.0. That went
excellently; I'm very impressed with the smooth transition.

In the old installation we had transfer rates of around 30 MB/s
(measured using iptraf while bacula-fd was processing some big files;
never more, often less), and with Bacula 2.0 this seems to be the limit
again.

The attached tape drive is an LTO-3 (Quantum PX506) which has a reported
rate of 80 MB/s (I haven't tested this). The network is a gigabit
network, through which I can push around 600 Mbit/s using nc at both
ends on some junk files.

The SCSI interface is supposed to do 320 MB/s.

Likewise for the disk: I can do a cat /big-file > /dev/null and get
double the block-in rate in vmstat compared to running a backup, so that
doesn't seem to be the limit either.

Can anyone tell me whether this is typical, or where the bottleneck in
this system is?

Jesper
-- 
Jesper Krogh


-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT  business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Losing scheduled jobs when restarting director.

2006-09-05 Thread Jesper Krogh
Hi.

Sometimes I need to cycle the machine running Bacula. If Bacula is in
the middle of a backup and has some jobs scheduled, those jobs are lost:
they are not marked as failed and thus not rerun when it comes back up.

I'm probably missing some configuration option? Any suggestions?

=> Still running 1.36

Jesper
-- 
Jesper Krogh


-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid=120709bid=263057dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] [Bacula-devel] Feature request: Automatically detection of silent data corruption

2006-09-04 Thread Jesper Krogh
 On Saturday 02 September 2006 00:20, Jesper Krogh wrote:
 The cost of doing a Verify -- Catalog is really quite minimal, so in my
 opinion, I don't see the benefit in complicating Backup jobs any more
 than they are.

I don't know if they are complicated already. This feature is far too
small to justify complicating the codebase for it. But the cost of a
verify is a read-through of the disk, and there is an administrative
overhead in doing it via verify (since I basically want the check on all
files that have been unchanged, according to metadata, since the last
backup; this is pretty hard to describe as a FileSet).

  I wouldn't even know what syntax to use to turn it on and
 control it.

The assertion that:

Data may not be altered without the mtime having changed should
always be true. Thus making this check, and alerting when it fails,
does not seem to harm anyone. So just not making it configurable seems
fine to me.

If we don't have the data (a new file) then we can't assume anything,
but otherwise, if the above assertion fails, we have a disk that is
beginning to corrupt data.

As another suggested, this can be done via post-analysis of the catalog.
I'll try to write some scripts to do that; my first try:

select PathId, FilenameId, LStat, count(MD5) as md5s from File group by
PathId, FilenameId, LStat where md5s > 1; [1]

is invalid in MySQL :-/ and without the where-clause it has currently
been running for 49 hours.

I'll work a bit on optimizing this.

Jesper
[1] This is even wrong since it doesn't take the client-hostname into
consideration.
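Two things bite in that query: an aggregate condition has to go in HAVING, not WHERE, and count(MD5) counts rows (so any file with two full backups would match) where COUNT(DISTINCT MD5) is what we actually want. A toy sketch against an in-memory SQLite stand-in for the catalog (table and values are made up, and as footnote [1] says, a real catalog also needs the client in the grouping):

```python
import sqlite3

# Miniature stand-in for the File table of the catalog.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE File (PathId INT, FilenameId INT, LStat TEXT, MD5 TEXT)")
conn.executemany("INSERT INTO File VALUES (?, ?, ?, ?)", [
    (1, 1, "stat-a", "aaaa"),   # backed up twice, unchanged, same digest: fine
    (1, 1, "stat-a", "aaaa"),
    (2, 7, "stat-b", "bbbb"),   # metadata identical but digest differs: suspect
    (2, 7, "stat-b", "cccc"),
])

# Aggregate filters belong in HAVING; DISTINCT avoids flagging mere re-backups.
suspects = conn.execute(
    """SELECT PathId, FilenameId, LStat, COUNT(DISTINCT MD5) AS md5s
       FROM File
       GROUP BY PathId, FilenameId, LStat
       HAVING md5s > 1"""
).fetchall()
print(suspects)   # [(2, 7, 'stat-b', 2)]
```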


-- 
Jesper Krogh


-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid=120709bid=263057dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] [Bacula-devel] Feature request: Automatically detection of silent data corruption

2006-09-01 Thread Jesper Krogh

Item 1:   Automatically detection of silent data corruption
  Date:   1. september 2006
  Origin: Jesper Krogh [EMAIL PROTECTED]
  Status: Unimplemented.

  What:   Detect silent data corruption automatically.
          When a full backup is performed there are (when configured so)
          checksums put into the catalog. These checksums can be used to
          detect silent corruption of data when the subsequent full
          backup is performed, by comparing the stored checksum with the
          newly read file whenever the file's metadata hasn't changed
          since the last full backup. The performance cost would be next
          to zero since all data is read and checksummed anyway.
  Why:    A verification of disk data is a very time- and
          performance-consuming process, especially on large disks. This
          would be a shortcut to a nearly as good way of detecting
          silent data corruption.

  Notes:  The idea came up when our 1 TB disk went down with SCSI errors
          and time didn't allow us to run a check.
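A sketch in Python of the comparison described under What (the field names for the stored catalog row are made up, and Bacula would do this inside the catalog-insert path rather than as a separate tool):

```python
import hashlib
import os

def check_file(path, prev):
    """Compare a file against its record from the previous full backup.

    `prev` is a stand-in for the catalog row: mtime_ns, size and md5
    as recorded last time (field names are illustrative).
    """
    st = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    metadata_unchanged = (st.st_mtime_ns == prev["mtime_ns"]
                          and st.st_size == prev["size"])
    if metadata_unchanged and digest != prev["md5"]:
        return "silent-corruption"  # contents differ although mtime/size do not
    return "ok"
```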


-- 
Jesper Krogh, [EMAIL PROTECTED]

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid=120709bid=263057dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] [Bacula-devel] Feature request: Automatically detection of silent data corruption

2006-09-01 Thread Jesper Krogh
Kern Sibbald wrote:
 Unless I hear back from you otherwise, I am considering that there is no need 
 to add this to the list of projects.

I'm totally aware of the Verify feature, and yes, it accomplishes the
same task. But in order to detect silent data corruption with the
verify feature I need to do a full read-through of the entire disk just
for the verify process, and Bacula does a full read-through of the disk
every time I schedule a Full backup anyway.

When Bacula performs the second full backup it will automatically have
the ability to check for silent corruption in every file that hasn't
changed since the last full backup, at no cost[1]. Basically, just
check, when inserting the file into the catalog, that if the checksum
has changed then some of the metadata has changed too, and alert if it
hasn't.

[1] In terms of performance, backup-time and I/O(but not coding time :-)

Jesper

-- 
Jesper Krogh, [EMAIL PROTECTED]


-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid=120709bid=263057dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Feature request: Filesystemwatch triggered backup.

2006-08-31 Thread Jesper Krogh
First: Sorry for the trouble I made by posting it to the bugtracker.

Then.

Item 1:   Filesystemwatch triggered backup.
  Date:   31 August 2006
  Origin: Jesper Krogh [EMAIL PROTECTED]
  Status: Unimplemented, depends probably on client initiated backups

  What:   With inotify and similar filesystem-triggered notification
          systems it is possible to have the file daemon monitor
          filesystem changes and initiate backups.

  Why:    There are 2 situations where this is nice to have.
          1) It makes possible a much finer-grained backup than the
             fixed schedules used now. A file created and deleted a
             few hours later can automatically be caught.

          2) The introduced load on the system will probably be
             distributed more evenly.

  Notes:  This can be combined with configuration that specifies
          something like: at most every 15 minutes or when changes
          exceed XX MB.
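The batching rule from the Notes could be sketched like this (thresholds and names are illustrative; a real implementation would feed it inotify events rather than this plain counter):

```python
import time

class BackupTrigger:
    """Fire a backup at most every `interval_s` seconds, or sooner once
    the accumulated change volume exceeds `max_bytes` (the 'at most
    every 15 minutes or when changes exceed XX MB' rule above)."""

    def __init__(self, interval_s=15 * 60, max_bytes=100 * 1024 * 1024,
                 now=time.monotonic):
        self.interval_s = interval_s
        self.max_bytes = max_bytes
        self.now = now              # injectable clock, eases testing
        self.pending = 0
        self.last_fire = self.now()

    def record_change(self, nbytes):
        """Called per filesystem event; True means: initiate a backup."""
        self.pending += nbytes
        due = (self.now() - self.last_fire) >= self.interval_s
        if due or self.pending >= self.max_bytes:
            self.pending = 0
            self.last_fire = self.now()
            return True
        return False
```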




-- 
Jesper Krogh, [EMAIL PROTECTED]


-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid=120709bid=263057dat=121642
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula install in Ubuntu 5.10

2006-05-31 Thread Jesper Krogh
Hi.

1.38.something has been dropped into Debian/unstable. If you (like me)
would like a system with as little as possible installed from source,
you can easily backport it to Ubuntu 5.10 (or 6.06) by:

1) Adding the deb-src entry for Debian/unstable to your sources.list
2) apt-get source bacula
3) cd bacula-something
4) dpkg-buildpackage -rfakeroot

Then you'd get the packages built for your Ubuntu system
(once you've installed all the build dependencies).

Jesper (running bacula 1.36 on Ubuntu 5.10/6.06)

 Hi!


 I just did this today, but then on ubuntu 6.06.


 If you install the packages:
 build-essential, mysql-server and gnome-devel (last if you want the gnome
 console)

 and do the following configure: ./configure --enable-gnome
 --enable-tray-monitor --with-mysql
 --with-fd-password=[password] --with-sd-password=[password]
 --with-dir-password=[password] --enable-smartalloc


 then it should work... (worked for me)

 Greetings,
 Ger.



 Op dinsdag 30 mei 2006 14:17, schreef Danie:

 Hi all ,


  Busy with a new install on Ubuntu 5.10, I first tried the apt-get way,
  but the packages seem to be deprecated (1.36 I think), so I opted to
  compile Bacula from source; however, I'm having trouble with MySQL.

 When I run the ./configure script (I used the example in the manual)
 all goes well up to the mysql where it complains about not finding the
 working directory :

 configure: error: Invalid MySQL directory /usr/include/mysql/ - unable
 to find mysql.h under /usr/include/mysql/

 now I check and mysql.h is in fact in this directory , please help as I
  am a bit stuck.

 TIA


 Daniel







-- 
Jesper Krogh





[Bacula-users] Fatal: Device busy writing to another volume.

2005-12-05 Thread Jesper Krogh
How can this happen? 

Clone_System.2005.12-03_08.22.33 Fatal error: Device /dev/nst0 is busy
writing on another volume. 

I have a quite well working setup using 1.36-3 and 25 GB of spool disk
setup. Should Bacula not just wait until the device was done writing to
the volume before trying? 

Thanks Jesper
-- 
./Jesper Krogh, [EMAIL PROTECTED], Jabber ID: [EMAIL PROTECTED]








Re: [Bacula-users] Fatal: Device busy writing to another volume.

2005-12-04 Thread Jesper Krogh

Kern Sibbald wrote:

On Sunday 04 December 2005 18:40, Jesper Krogh wrote:


How can this happen?

Clone_System.2005.12-03_08.22.33 Fatal error: Device /dev/nst0 is busy
writing on another volume.

I have a quite well working setup using 1.36-3 and 25 GB of spool disk
setup. Should Bacula not just wait until the device was done writing to
the volume before trying?



There is a bug report on this error.  However, I have been unable to reproduce 
it.  If you are able to distill the problem down to something simple and 
reproducible, a second bug report would probably be worth while.


It is reproducible here so I'll try to dig up some debugging information.

I've searched the bugtracker, but I cannot find the original bug report.



There is another unrelated feature request.

The spool seems to work inefficiently. Due to network bandwidth etc. I
would like to run 2-3 concurrent jobs spooling to disk and then
interleaving them to tape. (Please correct me if this is not one of the
usual use cases for disk spools?)


In the situation where 2 concurrent jobs are started at the same speed
(both larger than $spool_size/2), both build up until the sum of the
spools reaches $spool_size, and then both start requesting to despool
to tape. (In the meantime they don't spool to disk, since the disk
spool is full, at least until one of the jobs has finished despooling
to tape.)


I have a disk spool of 25 GB; it would be considerably more efficient
to slice the spool into chunks:

first job spooling to chunk1, chunk3, chunk5, etc.
second job spooling to chunk2, chunk4, chunk6, etc.

The first job that fails to get a chunk would start despooling its
chunk-set to tape, deleting the chunks as it goes, thereby freeing up
spool space and letting the other job continue spooling to disk while
the first job despools to tape.

(A round-robin-style disk-spool)
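The chunk numbering I have in mind, as a trivial sketch (two jobs; purely illustrative):

```python
from itertools import count

def chunk_ids(job_index, num_jobs):
    """Yield spool-chunk numbers for one of num_jobs concurrent jobs,
    interleaved round-robin (job 0 -> 1, 3, 5...; job 1 -> 2, 4, 6...)."""
    for i in count():
        yield i * num_jobs + job_index + 1

# first job takes chunks 1, 3, 5; second job takes chunks 2, 4, 6
g0, g1 = chunk_ids(0, 2), chunk_ids(1, 2)
first = [next(g0) for _ in range(3)]
second = [next(g1) for _ in range(3)]
print(first, second)   # [1, 3, 5] [2, 4, 6]
```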

As it is now, both clients wait for the full time that the first
spooled batch takes to despool to tape.

There is probably just some feature in the documentation I've
overlooked?

Jesper
--
Jesper Krogh, [EMAIL PROTECTED], JabberID: [EMAIL PROTECTED]


