[Bacula-users] 5.0.1: Copy job setup as documented leads to fatal error on

2010-11-04 Thread roos
No one?

+--
|This was sent by r...@symentis.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--
The Next 800 Companies to Lead America's Growth: New Video Whitepaper
David G. Thomson, author of the best-selling book Blueprint to a 
Billion shares his insights and actions to help propel your 
business during the next growth cycle. Listen Now!
http://p.sf.net/sfu/SAP-dev2dev
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bad disk noises - replace the volume without restarting the backup?

2010-11-04 Thread Mikael Fridh
On Wed, Nov 3, 2010 at 6:34 PM, Mark Luntzel m...@luntzel.com wrote:
 The answer is probably no but...

 I can hear the disk currently being written to making bad noises, and
 the speed is extremely slow. Bad disk for sure, about to fail. This is
 at the end of a multi-Terabyte backup, just about 500 gig left and I
 would REALLY hate to think there is no way out but to start over
 completely.

 So is there a way for me to replace that disk without invalidating the
 entire backup? Running on Linux with an external SATA / FW enclosure,
 bacula version 3.0.2

If there is a way to temporarily freeze the job (possibly using
signals) to stop it writing anything, then using LVM to vgextend,
pvmove, and vgreduce the failing physical disk out of the volume group
could be a viable solution.

Of course you would have to:
1. Already be using LVM.
2. Have the same number of physical extents available somewhere else
in the system as are on the failing disk.

However, odds are some extents are unreadable and I'm not sure what
exactly happens when a pvmove fails to read an extent. Anyone
experienced with this particular case?
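As a sketch of that idea (entirely untested here; the device names,
volume group name, and the SIGSTOP approach are all hypothetical, and
pausing the daemon safely is exactly the open question above):

```
# Hypothetical layout: failing disk /dev/sdb1, spare /dev/sdc1, VG "backupvg".
kill -STOP "$(pidof bacula-sd)"     # freeze the storage daemon

pvcreate /dev/sdc1                  # bring a healthy disk into LVM
vgextend backupvg /dev/sdc1
pvmove /dev/sdb1 /dev/sdc1          # migrate extents off the failing disk
vgreduce backupvg /dev/sdb1         # drop it from the volume group
pvremove /dev/sdb1

kill -CONT "$(pidof bacula-sd)"     # let the job continue
```

If pvmove hits an unreadable extent the move will fail partway, which
is the unresolved case mentioned above.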



[Bacula-users] What are your suggestions for backups on a large RAID?

2010-11-04 Thread Oliver Hoffmann
Hi all,

I'll do backups to disk on a raid6 (28 TB) which is attached via
fibre channel.
There will be 50 clients with data ranging from a few MB to 100 GB or
more for a full backup, tiny files from mail servers as well as large
database ones.
Speed and reliability are both important (as always).

The question now is simply what is the best setup? 

Should I do one big volume pool or better a few smaller ones?
I think one big pool is easier to manage.

What is the best size for the volumes? 
100 GB seems to be reasonable. 

Which file system to have the best transfer rates? xfs? ext4?
xfs could be better here but I am not sure about it.

I like ubuntu. 10.04.1 LTS or the newer 10.10?
I tend to LTS.

What do you think?

Thanks a lot,

Oliver




Re: [Bacula-users] What are your suggestions for backups on a large RAID?

2010-11-04 Thread John Drescher
 I'll do backups to disk on a raid6 (28 TB) which is attached via
 fibre channel.

I recommend against using a single RAID for backups. If the RAID
controller silently corrupts your array or there is a file system
problem, you can easily lose all of your backups. I have seen both
happen in my 15 years in the industry. I was not in charge of the data
when either happened, and we were not using bacula.

 There will be 50 clients with data ranging from a few MB to 100 GB or
 more for a full backup, tiny files from mail servers as well as large
 database ones.
 Speed and reliability are both important (as always).

 The question now is simply what is the best setup?


I do not think there is one simple best setup.


 Should I do one big volume pool or better a few smaller ones?
 I think one big pool is easier to manage.

This is a user preference. I have 15 to 20 pools with about the same
amount of space and clients but most of these pools are for archival.
I only have 2 pools for backup.


 What is the best size for the volumes?

My opinion is 5 to 10 GB, but others use much larger volumes. Since
you have so much space, 100 GB would be fine. Remember that recycling
is all or nothing: an entire volume must be recycled before any of its
space can be reclaimed after a job expires. So if you make your
volumes too large, recycling may take longer than you think.

 100 GB seems to be reasonable.

 Which file system to have the best transfer rates? xfs? ext4?

Either xfs or ext4 is a good choice. On the subject of filesystems, I
do not believe ext3 is a good choice, however, since it can take
minutes to delete a file of this size, where xfs and ext4 take less
than a second.
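If you want to check this yourself, a quick scaled-down experiment
along these lines works on any mounted filesystem (64 MB here for
speed; the ext3 penalty only really shows up with volume-sized files
in the tens of GB):

```shell
# Create a throwaway file and time its removal on the current filesystem.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 status=none
time rm "$f"
```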

 xfs could be better here but I am not sure about it.

 I like ubuntu. 10.04.1 LTS or the newer 10.10?
 I tend to LTS.

I would go for the newer.

John



[Bacula-users] Exchange plugin: Unable to restore

2010-11-04 Thread Michael Heydenbluth
Hello,

I'm trying to backup/restore an Exchange database.

Following configuration:
W2K3 R2 Server (32) with Exchange 2003 SP2, bacula-fd 5.0.3 from the
download area,
bacula-sd and bacula-dir (5.0.3 - mysql) compiled from source to run on
SLES 9.

Backing up the Information Store works and I can browse the files in
bconsole by typing:

*restore, then 3 (specify by jobno), then number of backup-job

I select all files from the "Postfachspeicher (servername)" directory
and run the restore job. I always get the error message "Fatal error:
Invalid restore path specified, must start with '/@EXCHANGE/'", then
the job just seems to sit there and does nothing, not even ending.

*status director

Running Jobs:
Console connected at 04-Nov-10 13:49
 JobId Level   Name   Status
==
  1208 RestoreFiles.2010-11-04_14.52.32_06 is waiting on
Storage SuperLoader3

*status SuperLoader3

Device status:
Autochanger Autochanger with devices:
   LTO-4 (/dev/nst1)
Device LTO-4 (/dev/nst1) is mounted with:
Volume:  KYE718L4
Pool:WinServer
Media type:  LTO-4
Slot 6 is loaded in drive 0.
Total Bytes Read=151,151,616 Blocks Read=2,343 Bytes/block=64,512
Positioned at File=212 Block=2,343

These values don't change over time.

Am I missing something?

Greetings
Michael

just in case, it might be useful: Here's the relevant part of what I
get when typing show fileset:

O Mie
WD [A-Z]:/Dokumente und Einstellungen/*/Lokale
Einstellungen/History 
WD [A-Z]:/Dokumente und Einstellungen/*/Lokale
Einstellungen/Temp
WD [A-Z]:/Dokumente und Einstellungen/*/Lokale
Einstellungen/Temporary Internet Files
WD [A-Z]:/Dokumente und
Einstellungen/*/Lokale Einstellungen/Cookies
WD [A-Z]:/dokumente und
einstellungen/*/lokale einstellungen/verlauf
WD [A-Z]:/Winnt/system32/config
WD [A-Z]:/Windows/system32/config
WF [A-Z]:/pagefile.sys
WF [A-Z]:/hibernate.sys
N
I c:/
I d:/
N
P exchange:/@EXCHANGE/Microsoft Information Store
N
E d:/MDBDATA
E d:/EASY_DATA
E d:/MP3
N




Re: [Bacula-users] What are your suggestions for backups on a large RAID?

2010-11-04 Thread Martin Simmons
 On Thu, 4 Nov 2010 15:08:22 +0100, Oliver Hoffmann said:
 
 Should I do one big volume pool or better a few smaller ones?
 I think one big pool is easier to manage.

Consider using more than one pool if you want to keep some backups for longer
than others, because Bacula can only recycle complete volumes.  E.g. for
keeping some clients longer and/or for keeping Full longer than
Differential/Incremental.
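A rough sketch of that in Bacula config terms (pool names and
retention periods below are invented examples; the directives are
standard Pool resource settings):

```
Pool {
  Name = Full-Pool
  Pool Type = Backup
  Volume Retention = 6 months
  Recycle = yes
  AutoPrune = yes
}

Pool {
  Name = Inc-Pool
  Pool Type = Backup
  Volume Retention = 2 weeks
  Recycle = yes
  AutoPrune = yes
}
```

A Job resource can then route levels with "Full Backup Pool =
Full-Pool" and "Incremental Backup Pool = Inc-Pool".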

__Martin



Re: [Bacula-users] Exchange plugin: Unable to restore

2010-11-04 Thread Graham Keeling
On Thu, Nov 04, 2010 at 03:34:35PM +0100, Michael Heydenbluth wrote:
 Hello,
 
 I'm trying to backup/restore an Exchange database.
 
 Following configuration:
 W2K3 R2 Server (32) with Exchange 2003 SP2, bacula-fd 5.0.3 from the
 download area,
 bacula-sd and bacula-dir (5.0.3 - mysql) compiled from source to run on
 SLES 9.
 
 Backing up the Information Store works and I can browse the files in
 bconsole by typing:
 
 *restore, then 3 (specify by jobno), then number of backup-job
 
 I select all files from the Postfachspeicher (servername)-directory
 and run the restore job. I always get the error message
 Fatal error: Invalid restore path specified, must start with
 '/@EXCHANGE/', then the job seems to just sit there and does nothing,
 even not ending.

Can you copy and paste the exact commands that you enter, and the output of
them?
By the way, I have found out that it is very easy to crash the file daemon
when running the Exchange plugin, and it is not always obvious.
Whenever anything goes wrong, it is a good idea to check that it is still
running. Restarting the director helps too.

Also, I have found that you can only restore reliably from a full or a full
and a single incremental. If you have a chain of incrementals, it doesn't work.
Which rather defeats the point of the plugin.
There has been an unacknowledged bug entry for this for a month now
(bugid 1647).

 *status director
 
 Running Jobs:
 Console connected at 04-Nov-10 13:49
  JobId Level   Name   Status
 ==
   1208 RestoreFiles.2010-11-04_14.52.32_06 is waiting on
 Storage SuperLoader3
 
 *status SuperLoader3
 
 Device status:
 Autochanger Autochanger with devices:
LTO-4 (/dev/nst1)
 Device LTO-4 (/dev/nst1) is mounted with:
 Volume:  KYE718L4
 Pool:WinServer
 Media type:  LTO-4
 Slot 6 is loaded in drive 0.
 Total Bytes Read=151,151,616 Blocks Read=2,343 Bytes/block=64,512
 Positioned at File=212 Block=2,343
 
 These values don't change over the time.
 
 Am I missing something?
 
 Greetings
 Michael
 
 just in case, it might be useful: Here's the relevant part of what I
 get when typing show fileset:
 
 O Mie
 WD [A-Z]:/Dokumente und Einstellungen/*/Lokale
 Einstellungen/History 
 WD [A-Z]:/Dokumente und Einstellungen/*/Lokale
 Einstellungen/Temp
 WD [A-Z]:/Dokumente und Einstellungen/*/Lokale
 Einstellungen/Temporary Internet Files
 WD [A-Z]:/Dokumente und
 Einstellungen/*/Lokale Einstellungen/Cookies
 WD [A-Z]:/dokumente und
 einstellungen/*/lokale einstellungen/verlauf
 WD [A-Z]:/Winnt/system32/config
 WD [A-Z]:/Windows/system32/config
 WF [A-Z]:/pagefile.sys
 WF [A-Z]:/hibernate.sys
 N
 I c:/
 I d:/
 N
 P exchange:/@EXCHANGE/Microsoft Information Store
 N
 E d:/MDBDATA
 E d:/EASY_DATA
 E d:/MP3
 N
 
 




[Bacula-users] Restore after weof

2010-11-04 Thread rainbow7of9
Hello,

yesterday we deleted our Ultrium 2 media
with:

 [u...@host]$  mt -f /dev/st0 rewind
 [u...@host]$  mt -f /dev/st0 weof


Is it possible to restore after this command, like giving the same label name 
again?

Thanks!

+--
|This was sent by 5schus...@gmx.net via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] What are your suggestions for backups on a large RAID?

2010-11-04 Thread Oliver Hoffmann
  I'll do backups to disk on a raid6 (28 TB) which is attached via
  fibre channel.
 
 I recommend against using a single raid for backups. If the raid
 controller silently corrupts your raid or there is a file system
 problem you can easily loose all of your backups. I have seen both
 happen in my 15 years in the industry. I was not in-charge of the data
 when either happened and we were not using bacula.

Well, then I'd need two RAIDs or the like. So it's the old question of
how much effort (i.e. money) to put into redundancy. Besides, I'll
attach a tape drive later on for full backups every two or three weeks.

  There will be 50 clients with data ranging from a few MB to 100 GB
  or more for a full backup, tiny files from mail servers as well as
  large database ones.
  Speed and reliability are both important (as always).
 
  The question now is simply what is the best setup?
 
 
 I do not think there is one simple best setup.

Sure, but one that fits my needs ;-)

 
  Should I do one big volume pool or better a few smaller ones?
  I think one big pool is easier to manage.
 
 This is a user preference. I have 15 to 20 pools with about the same
 amount of space and clients but most of these pools are for archival.
 I only have 2 pools for backup.
 
 
  What is the best size for the volumes?
 
 My opinion is 5 to 10 GB. But others use much larger volumes. Since
 you have so much space 100GB would be fine. Remember that recycling is
 all or none. I mean an entire volume needs to be recycled to reclaim
 any space from after a job expires. So if you make your volumes too
 large the recycling may take longer than you think.

I had volumes with 20 GB on slow USB-drives, which was ok concerning
recycling. 
As long as the RAID is quite fast I do not expect problems with 100 GB
volumes.

  100 GB seems to be reasonable.
 
  Which file system to have the best transfer rates? xfs? ext4?
 
 Either xfs or ext4 are good choices. On the subject of filesystems I
 do not believe ext3 is a good choice however since it will take
 minutes to delete a large file like this with xfs and ext4 taking less
 than 1 second.

Yep, ext3 is out of the question. Advantages so far for xfs: mkfs
takes a fraction of a second, only 4.9 MB is used after formatting,
and it has no problem with the full 28 TB. ext4 is limited to 16 TiB
due to e2fsprogs.

  xfs could be better here but I am not sure about it.
 
  I like ubuntu. 10.04.1 LTS or the newer 10.10?
  I tend to LTS.
 
 I would go for the newer.

Because later is greater? And after a while I simply do a dist-upgrade
with a (small) risk of messing it all up?

 John
 
thx,

Oliver




Re: [Bacula-users] What are your suggestions for backups on a large RAID?

2010-11-04 Thread Oliver Hoffmann
  On Thu, 4 Nov 2010 15:08:22 +0100, Oliver Hoffmann said:
  
  Should I do one big volume pool or better a few smaller ones?
  I think one big pool is easier to manage.
 
 Consider using more than one pool if you want to keep some backups
 for longer than others, because Bacula can only recycle complete
 volumes.  E.g. for keeping some clients longer and/or for keeping
 Full longer than Differential/Incremental.
 
 __Martin
 
 

That's right. Thanks for pointing this out. 

Oliver



[Bacula-users] PostgreSQL 9.0 - passes regression tests

2010-11-04 Thread Dan Langille
FYI, the latest svn version of Bacula passes the regression tests with
PostgreSQL 9.0:

  http://regress.bacula.org/buildSummary.php?buildid=4916

-- 
Dan Langille -- http://langille.org/




Re: [Bacula-users] Restore after weof

2010-11-04 Thread Dan Langille

On Thu, November 4, 2010 5:23 am, rainbow7of9 wrote:
 Hello,

 yesterday we deltete our ultrium 2 media
 with

  [u...@host]$  mt -f /dev/st0 rewind
  [u...@host]$  mt -f /dev/st0 weof


 Is it possible to restore after this command, like giving the same label
 name again?

Short answer: no.

Long answer: no.

I'm sure some of the data can be retrieved by people who know what they
are doing.  It is very technical and not for the faint of heart.  You have
a lot of work and a lot of stuff to learn in order to do that.  You should
talk to your OS support to find out what to do.

Or better still, if the data is not 100% required, move on.


-- 
Dan Langille -- http://langille.org/




Re: [Bacula-users] PostgreSQL 9.0 - passes regression tests

2010-11-04 Thread Mehma Sarja
Why does the click-through report that it failed the test?

http://regress.bacula.org/viewTest.php?onlyfailedbuildid=4916

Mehma
===
On 11/4/10 9:19 AM, Dan Langille wrote:
 FYI, the latest svn version of Bacula passes the regression tests with
 PostgreSQL 9.0:

http://regress.bacula.org/buildSummary.php?buildid=4916






[Bacula-users] Can you tell which are active clients from Bacula's database?

2010-11-04 Thread Matthew Seaman

Hi there,

We have a variable population of client machines being backed up
by bacula.  What I'd like to do is build a query for the bacula DB
that will detect eg. if any clients haven't had a full backup within the
last week.  (Yes -- I know there are configuration options to
automatically promote incrementals etc. to fulls in that situation:
we're using them.)  We'll then hook this up to our Nagios so the Ops
team gets alerted.

So I have come up with this query:

SELECT clientid, name, max(endtime) FROM job
WHERE level = 'F' AND type = 'B' AND jobstatus = 'T'
GROUP BY clientid, name
HAVING max(endtime) < now() - interval '7 day'
ORDER BY name

(We're using PostgreSQL)

This does pretty much what I want, except that the output includes job
records from clients that have been decommissioned and removed from
bacula-dir.conf.  Now, for the life of me, I can't see anything in the
DB that indicates whether a client backup job is active or not.  Is it
just me being blind or am I going to have to parse that out of the
bacula config files?
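One workaround, since the catalog has no "active" flag, is indeed to
parse the configured Client resources out of bacula-dir.conf and
restrict the query to those names. A minimal sketch (the parser below
is deliberately naive, and the helper names are made up for
illustration):

```python
import re

def active_clients(conf_text):
    """Extract client names from Client {} resources in bacula-dir.conf.

    Naive parser: assumes Client blocks contain no nested braces and
    one Name = ... directive each.
    """
    names = []
    for block in re.findall(r'Client\s*\{([^}]*)\}', conf_text):
        m = re.search(r'Name\s*=\s*"?([\w.-]+)"?', block)
        if m:
            names.append(m.group(1))
    return names

def restrict_to_clients(sql, names):
    """Insert a clientid filter (via the client table) before GROUP BY."""
    quoted = ", ".join("'%s'" % n for n in names)
    clause = ("AND clientid IN (SELECT clientid FROM client "
              "WHERE name IN (%s))\nGROUP BY" % quoted)
    return sql.replace("GROUP BY", clause, 1)
```

The resulting query then only reports on clients that are still
configured, so decommissioned machines drop out automatically.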

Cheers,

Matthew

-- 
Matthew Seaman
Systems Administrator
E msea...@squiz.co.uk

Squiz Ltd. A Zetland House, 109-123 Clifton Street, London EC2A 4LD
P +44 (0) 207 101 8300 F +44 (0) 870 112 3394 W www.squiz.co.uk

UNITED KINGDOM AUSTRALIA NEW ZEALAND EUROPE UNITED STATES
LONDON EDINBURGH

SUPPORTED OPEN SOURCE SOLUTIONS





Re: [Bacula-users] unable to label a tape

2010-11-04 Thread John Stoffel
 Arnhold == Arnhold  bacula-fo...@backupcentral.com writes:

Arnhold Because the link
Arnhold /dev/tape/by-id/scsi-3500e09e0001af92f-nst is static.  When
Arnhold you change something in your SCSI configuration the device
Arnhold file /dev/nstX could change and you would have to change
Arnhold your config. The symlinks /dev/tape/by-* (or /dev/disk/by-*
Arnhold for hard drives) are generated dynamically.

Arnhold ps: I have tried /dev/nst0 but it shows the same error.

See my reply I just sent about writing your own udev rules to give
consistent names to your tape drive(s) using the SCSI Generic device
matched up with the output of:

/lib/udev/scsi_id --whitelisted /dev/sg#

for how I solved this problem on Debian Squeeze.

Arnhold ls -lsa /dev/tape/by-id/scsi-3500e09e0001af92f-nst
Arnhold 0 lrwxrwxrwx 1 root root 10 25. Okt 14:24
/dev/tape/by-id/scsi-3500e09e0001af92f-nst -> ../../nst0

For some reason, I could never get this to work consistently on my
Debian box.  Not sure why.  

John



Re: [Bacula-users] Fiber channel tapes permanent name howto ?

2010-11-04 Thread John Stoffel

Javier I'm having the same problem that other thread in november 2008:
Javier 
http://sourceforge.net/mailarchive/message.php?msg_id=BBDDF0B7CFFFCE4FB5110F0A37CE60030942EE%40q.leblancnet.us

Javier But I can't find a solution ...

I've run into this issue too, but with just regular SCSI tape drives.
What happens if you do:

  sudo /lib/udev/scsi_id --replace-whitespace --whitelisted /dev/sg#

where /dev/sg# is the drive's generic device?  Do your FC tape drives have /dev/sg# entries?  What
is the output of:

  sudo lsscsi -g

and do you see the drives listed there?

Javier I have a autochanger rule in udev that points /dev/autochanger1 to 
/dev/sgX

Javier My problem is that this autochanger has 4 drives, and I cannot
Javier get serial number or other attributes which change, and when I
Javier reboot I will have to change bacula config every time.

Javier My lsscsi output (There is a autochanger and a NSR, 2 fc drives and 2
Javier scsi drives):

Javier # lsscsi
Javier [0:0:0:0]mediumx HP   MSL6000 Series   0520  /dev/sch0
Javier [0:0:0:1]tapeHP   Ultrium 3-SCSI   G63W  /dev/st2
Javier [0:0:0:2]tapeHP   Ultrium 3-SCSI   G63W  /dev/st3
Javier [0:0:0:3]storage HP   NS E1200-320 593d  -
Javier [0:0:1:0]storage HP   HSV200   5110  -
Javier [0:0:2:0]storage HP   HSV200   5110  -
Javier [1:0:0:0]cd/dvd  TEAC CD-224E  9.9A  /dev/sr0
Javier [3:0:3:0]tapeHP   Ultrium 3-SCSI   G63W  /dev/st0
Javier [3:0:4:0]tapeHP   Ultrium 3-SCSI   G54W  /dev/st1

You need to get the output of 'lsscsi -g' here...

Javier I can't use /dev/tape/by-path, because these devices could change,
Javier this is the ls output:

Here's what I use in /etc/udev/rules.d/dlt7k.rules file:

# Left drive in library - bacula uses it
KERNEL=="st*[0-9]", ENV{ID_SERIAL}=="SQUANTUM_DLT7000_CX752S1059", \
    SYMLINK+="dlt7k-left"
KERNEL=="nst*[0-9]", ENV{ID_SERIAL}=="SQUANTUM_DLT7000_CX752S1059", \
    SYMLINK+="dlt7k-left-nst"

#SUBSYSTEM=="scsi_generic", PROGRAM="/lib/udev/scsi_id --replace-whitespace --whitelisted /dev/%k", RESULT=="SQUANTUM_DLT7000_CX752S1059", SYMLINK="dlt7k-left"

# Right drive - not currently used by bacula
KERNEL=="st*[0-9]", ENV{ID_SERIAL}=="SQUANTUM_DLT7000_PXB09S0552", \
    SYMLINK+="dlt7k-right"
KERNEL=="nst*[0-9]", ENV{ID_SERIAL}=="SQUANTUM_DLT7000_PXB09S0552", \
    SYMLINK+="dlt7k-right-nst"


What you need to do is replace the ID_SERIAL match with the data you
get from the:

/lib/udev/scsi_id --replace-whitespace --whitelisted /dev/sg#

command.

Good luck!
John




Re: [Bacula-users] Exchange plugin: Unable to restore

2010-11-04 Thread Michael Heydenbluth
Graham Keeling schrieb:

 On Thu, Nov 04, 2010 at 03:34:35PM +0100, Michael Heydenbluth wrote:
  Hello,
  
  I'm trying to backup/restore an Exchange database.
  
  Following configuration:
  W2K3 R2 Server (32) with Exchange 2003 SP2, bacula-fd 5.0.3 from the
  download area,
  bacula-sd and bacula-dir (5.0.3 - mysql) compiled from source to
  run on SLES 9.
  
 [short description of commands I entered]

 Can you copy and paste the exact commands that you enter, and the
 output of them?

Here we go. Sorry it's bit long, but you asked for it :-):

(just to be sure, there's nothing executing right now):
*status director

sqlbsrv1-dir Version: 5.0.3 (04 August 2010) i686-pc-linux-gnu suse 9
Daemon started 03-Nov-10 15:08, 18 Jobs run since started.
 Heap: heap=946,176 smbytes=167,123 max_bytes=2,761,470 bufs=564
max_bufs=5,473

Scheduled Jobs:
Level  Type Pri  Scheduled  Name Volume
===
[10 jobs or so scheduled to run at 11:00pm]

Running Jobs:
Console connected at 04-Nov-10 20:16
No Jobs running.


Terminated Jobs:
 JobId  LevelFiles  Bytes   Status   FinishedName

12050 0   Cancel   04-Nov-10 14:10 RestoreFiles
12060 0   Cancel   04-Nov-10 14:17 RestoreFiles
12070 0   Cancel   04-Nov-10 14:52 RestoreFiles 
12080 0   Cancel   04-Nov-10 15:50 RestoreFiles



*status storage=SuperLoader3
Connecting to Storage daemon SuperLoader3 at sqlbsrv1:9103

sqlbsrv1-sd Version: 5.0.3 (04 August 2010) i686-pc-linux-gnu suse 9
Daemon started 03-Nov-10 15:08. Jobs: run=15, running=0.
 Heap: heap=421,888 smbytes=161,927 max_bytes=342,749 bufs=114
max_bufs=202
Sizes: boffset_t=8 size_t=4 int32_t=4 int64_t=8

Running Jobs:
No Jobs running.


Jobs waiting to reserve a drive:


Terminated Jobs:
 JobId  LevelFiles  Bytes   Status   FinishedName
===
[...]
12050 0   Cancel   04-Nov-10 14:10 RestoreFiles
12060 0   Cancel   04-Nov-10 14:17 RestoreFiles
12070 0   Cancel   04-Nov-10 14:52 RestoreFiles
12080 0   Cancel   04-Nov-10 15:50 RestoreFiles


Device status:
Autochanger Autochanger with devices:
   LTO-4 (/dev/nst1)
Device FileStorage (/bacula) is not open.
Device LTO-4 (/dev/nst1) is mounted with:
Volume:  KYE718L4
Pool:WinServer
Media type:  LTO-4
Slot 6 is loaded in drive 0.
Total Bytes Read=151,151,616 Blocks Read=2,343 Bytes/block=64,512
Positioned at File=212 Block=2,343

Used Volume status:
KYE718L4 on device LTO-4 (/dev/nst1)
Reader=0 writers=0 devres=0 volinuse=0


Data spooling: 0 active jobs, 0 bytes; 1 total jobs, 2,304,272,492 max
bytes/job. Attr spooling: 0 active jobs, 2,436,193 bytes; 1 total jobs,
2,436,193 max bytes.


*restore
Automatically selected Catalog: MyCatalog
Using Catalog MyCatalog

First you select one or more JobIds that contain files
to be restored. You will be presented several methods
of specifying the JobIds. Then you will be allowed to
select which files from those JobIds are to be restored.

To select the JobIds, you have the following choices:
 1: List last 20 Jobs run
 2: List Jobs where a given File is saved
 3: Enter list of comma separated JobIds to select
[...]
13: Cancel
Select item:  (1-13): 3
Enter JobId(s), comma separated, to restore: 1200

You have selected the following JobId: 1200

Building directory tree for JobId(s) 1200 ...
++ 24,089 files inserted
into the tree.

You are now entering file selection mode where you add (mark) and
remove (unmark) files to be restored. No files are initially added,
unless you used the all keyword on the command line.
Enter done to leave this mode.

cwd is: /
$ cd /@EXCHANGE/Microsoft Information Store/Erste Speichergruppe/
cwd is: /@EXCHANGE/Microsoft Information Store/Erste Speichergruppe/
$ ls
C:\Programme\Exchsrvr\mdbdata\E0002449.log
C:\Programme\Exchsrvr\mdbdata\E000244A.log
C:\Programme\Exchsrvr\mdbdata\E000244B.log
C:\Programme\Exchsrvr\mdbdata\E000244C.log
C:\Programme\Exchsrvr\mdbdata\E000244D.log
C:\Programme\Exchsrvr\mdbdata\E000244E.log
C:\Programme\Exchsrvr\mdbdata\E000244F.log
C:\Programme\Exchsrvr\mdbdata\E0002450.log
C:\Programme\Exchsrvr\mdbdata\E0002451.log
C:\Programme\Exchsrvr\mdbdata\E0002452.log
C:\Programme\Exchsrvr\mdbdata\E0002453.log
C:\Programme\Exchsrvr\mdbdata\E0002454.log
C:\Programme\Exchsrvr\mdbdata\E0002455.log
C:\Programme\Exchsrvr\mdbdata\E0002456.log
Informationsspeicher für öffentliche Ordner (DMS)/
Postfachspeicher (DMS)/
$ mark Postfachspeicher (DMS)/
4 files marked.
$ cd Postfachspeicher (DMS)
$ ls
*D:\MDBDATA\priv1.edb

Re: [Bacula-users] What are your suggestions for backups on a large RAID?

2010-11-04 Thread Phil Stracchino
On 11/04/10 10:08, Oliver Hoffmann wrote:
 Hi all,
 
 I'll do backups to disk on a raid6 (28 TB) which is attached via
 fibre channel.
 There will be 50 clients with data ranging from a few MB to 100 GB or
 more for a full backup, tiny files from mail servers as well as large
 database ones.
 Speed and reliability are both important (as always).
 
 The question now is simply what is the best setup? 
 
 Should I do one big volume pool or better a few smaller ones?
 I think one big pool is easier to manage.

I have a total of four pools:  a Full pool on tape and a Full pool on
disk, both with one-year retention; a Differential pool on disk, with
two months' retention (thus spanning two monthly Full backups);
and an Incremental pool on disk with one week's retention (sufficient
to span from one weekly differential backup to the next).

 What is the best size for the volumes? 
 100 GB seems to be reasonable. 

I do not limit my volume sizes.  I manage their size using volume use
duration instead.  Each day's backups go into a single volume, whatever
size that volume needs to be, a few GB or several hundred.
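That scheme looks roughly like this in the Pool resource (the
directive names are standard Bacula Pool directives; the specific
values and the pool name are invented examples):

```
Pool {
  Name = DiskPool
  Pool Type = Backup
  Label Format = "Vol-"        # let Bacula auto-label new volumes
  Volume Use Duration = 23h    # close each volume after about a day's use
  Volume Retention = 2 weeks
  Recycle = yes
  AutoPrune = yes
}
```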

 Which file system to have the best transfer rates? xfs? ext4?
 xfs could be better here but I am not sure about it.
 
 I like ubuntu. 10.04.1 LTS or the newer 10.10?
 I tend to LTS.
 
 What do you think?

If your backup server will be running Linux, then for the time being I
would suggest XFS.  It is optimized for sustained streaming reads and
writes, with multimedia and video originally in mind, but just the thing
for Bacula volumes.  You might want to consider btrfs when it becomes
production-ready, though.

My backup server runs Solaris 10 x86, and backs up to ZFS.


-- 
  Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
  ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
 Renaissance Man, Unix ronin, Perl hacker, Free Stater
 It's not the years, it's the mileage.
