Re: [Bacula-users] Full to a different SD and migrate to primary

2012-06-18 Thread Marcus Hallberg
Hi!

We use VirtualFull, but at least the first backup has to be a real one, and after that I want one real full per year.

How much of a problem would it be to manually migrate the full backup into the database? The process:

- Take a full backup to another SD.
- I have the file, and I have all the file paths and so on in the database, since I use the same director.
- Manually change which SD is responsible for the full backup in the database, so it is used when I want to do a restore or a virtual full.

/marcus
--
/Marcus Hallberg
Wimlet Consulting AB
Gamla Varvsgatan 1
414 59 Göteborg
Tel: 031-3107000
Direkt 031-3107010
e-post: mar...@wimlet.se
hemsida: www.wimlet.se

From: "Georges" rm...@free.fr
To: bacula-users@lists.sourceforge.net
Sent: Thursday, 14 Jun 2012 21:46:13
Subject: Re: [Bacula-users] Full to a different SD and migrate to primary
  

  
  
Hi Marcus
On 14/06/2012 10:33, Marcus Hallberg wrote:

  
  Hi!


We take a lot of
  backups over the internet from our clients, and we sometimes
  have problems with the full backups, which sometimes take a
  very long time.
  

Did you investigate virtual full backups? They could reduce time
and bandwidth.

  


I would
  appreciate some thoughts and pointers on how to make this
  easier.


My plan is to install a laptop with an SD and drive it
out to the client's location, plug it in on their local
network, and then tell the director to send backups to the
temporary SD. When it's ready, take it back to my location and
migrate the job over to the primary SD, and change the config
back so future backups end up in the primary SD.


Is this doable, or is there a
better way to do it?

  

AFAIK Bacula does not permit copy or migration across SDs. One way to
manage this, IMHO, would be to copy the volumes onto the central SD and
adjust the media type to match the primary storage definition... I am
not sure I would recommend this.
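A hypothetical sketch of that "copy the volume, adjust the media type" idea, in line with Marcus's plan above. The volume name, paths, media type and catalog name are all placeholders, and the commands are only printed (dry run); catalog column names vary between Bacula versions, so verify everything on a test catalog before touching production.

```shell
#!/bin/sh
# Hypothetical sketch, not a tested procedure. VOLUME, paths and the
# "bacula" database name are placeholders; "run" only prints commands.
VOLUME="Laptop-Full-0001"        # volume written by the temporary laptop SD
NEW_MEDIATYPE="File"             # must match the primary SD's Device MediaType

run() { echo "would run: $*"; }

# 1. Copy the volume file into the primary SD's archive directory (path assumed).
run scp laptop-sd:/var/bacula/"$VOLUME" primary-sd:/var/bacula/

# 2. Repoint the catalog entry so restores and VirtualFull read it from
#    the primary SD (the Media table's MediaType column is standard).
SQL="UPDATE Media SET MediaType='${NEW_MEDIATYPE}' WHERE VolumeName='${VOLUME}';"
run mysql bacula -e "$SQL"
```

The risky part is step 2: if the media type does not exactly match a Device resource on the primary SD, the director will not find a usable drive for the volume.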

G.

  

  /marcus
--



        /Marcus Hallberg
Wimlet Consulting AB
Gamla Varvsgatan 1
414 59 Göteborg
Tel: 031-3107000
Direkt 031-3107010
e-post: mar...@wimlet.se
hemsida: www.wimlet.se
  

  
  
  
  
  --
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
  
  
  
  ___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users




  



[Bacula-users] Full to a different SD and migrate to primary

2012-06-14 Thread Marcus Hallberg
Hi!

We take a lot of backups over the internet from our clients, and we sometimes have problems with the full backups, which sometimes take a very long time.

I would appreciate some thoughts and pointers on how to make this easier.

My plan is to install a laptop with an SD, drive it out to the client's location, plug it in on their local network, and then tell the director to send backups to the temporary SD. When it's ready, take it back to my location, migrate the job over to the primary SD, and change the config back so future backups end up in the primary SD.

Is this doable, or is there a better way to do it?

/marcus
--
/Marcus Hallberg
Wimlet Consulting AB
Gamla Varvsgatan 1
414 59 Göteborg
Tel: 031-3107000
Direkt 031-3107010
e-post: mar...@wimlet.se
hemsida: www.wimlet.se


Re: [Bacula-users] backing up Zimbra on-the-fly

2011-11-28 Thread Marcus Hallberg
Hi!

I am a long-time user of both Zimbra and Bacula. I use Bacula to back up logs 
and configuration from the Zimbra server, but use Zimbra's backup for email and 
accounts.

I have /opt/zimbra/backup NFS-mounted from our backup server and it works great.

Regarding the amount of data used for backups: Zimbra changed its 
default configuration a while ago to zip all the backups, which means it 
no longer uses hardlinks in the backups, so all full backups include 
everything. The hardlinks (nozip) option is still available if you look in the 
configuration options for the backup. 

http://www.zimbra.com/forums/installation/46919-upgrade-5-6-use-hardlinks-zcs-backups.html

I hope it helps a little.

/marcus 
-- 



/Marcus Hallberg 
Wimlet Consulting AB 
Gamla Varvsgatan 1 
414 59 Göteborg 
Tel: 031-3107000 
Direkt 031-3107010 
e-post: mar...@wimlet.se 
hemsida: www.wimlet.se 

- Original Message -
From: Silver Salonen sil...@serverock.ee
To: bacula-users@lists.sourceforge.net
Sent: Saturday, 26 Nov 2011 22:09:27
Subject: Re: [Bacula-users] backing up Zimbra on-the-fly

On Fri, 25 Nov 2011 13:08:40 -0500 (EST), Bill Arlofski wrote:
 Hi.

 Is anyone backing up Zimbra on-the-fly? I don't think taking the server
 offline for a pure file-based copy is a modern method of doing things.
 Neither do I want to use zmbackup, because as I understand, that 
 dumps
 all the mailboxes (which are on disk anyway) to separate files which
 would just waste so much space.

 Hi Silver... The Network Edition (eg: commercial/pay-for) version
 of Zimbra supports internal full and incremental backups that it does
 on-the-fly and automatically once configured.

 At our client sites, we use Bacula to backup the automatic Zimbra
 backups directory structure.

 It's a pretty reliable method of backing up Zimbra, and I have
 unfortunately had the experience of having to fully test this process
 when a client's Zimbra server lost 4 drives in a 6-drive RAID5 array
 at the same time. :(

 The good news though is that we were able to rebuild the Zimbra server
 (virtual this time), install the Zimbra software, restore Zimbra's
 automatic full and inc backups from our Bacula backup, and then
 re-import all Zimbra accounts/emails/calendars etc.

 I think with the non-commercial Community Edition (assuming that is
 what you are using) you are best off running a live rsync of the
 /opt/zimbra directory structure, then shutting down Zimbra services
 (zmcontrol stop), running an offline rsync of the /opt/zimbra directory
 structure to the same place, restarting Zimbra services (zmcontrol
 start), and THEN running a Bacula backup of the rsync'ed directory.

 On smaller sites using the non-commercial edition of Zimbra, we do
 those steps in a RunBefore script for the Zimbra job.
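Those steps could be sketched as a RunBefore script along the following lines. This is a hypothetical outline: the source and destination paths are assumptions, and the commands are only printed (dry run) rather than executed.

```shell
#!/bin/sh
# Hypothetical RunBefore sketch of the rsync method described above.
# SRC/DST are placeholders; "run" only prints the commands.
SRC=/opt/zimbra/
DST=/backup/zimbra-rsync/

run() { echo "would run: $*"; }

run rsync -a --delete "$SRC" "$DST"      # live pass while Zimbra is still up
run su - zimbra -c "zmcontrol stop"      # short downtime window starts here
run rsync -a --delete "$SRC" "$DST"      # offline pass copies only the deltas
run su - zimbra -c "zmcontrol start"     # services back; Bacula then backs up DST
```

The two-pass rsync is the point of the design: the first pass does the bulk copy while services run, so the offline pass, and therefore the downtime, only has to cover what changed in between.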

 Does this cost you a few minutes of Zimbra downtime each night?
 Yes, but only a few at most while the offline rsync runs.

 But if you are running the non-commercial version the benefit of this
 method is in your cost savings - IMHO.

 Hope this helps.

Thanks for the tips. I'm running the Network Edition, so I do have the 
backup possibility, but I'd prefer using Bacula, especially because I 
want to do backups to a remote server. Zimbra's backup scripts are meant 
for storing backups locally, right? Also, the backups take more-or-less the 
same amount of space as the data itself.

As for the rsync-method, the downside of this is that it needs the same 
amount of disk-space for backup as for the data itself. This is what I 
meant by non-modern in the initial e-mail.

Anyway, would it suffice to make a MySQL dump and an LDAP dump and just back up 
the whole /opt/zimbra with Bacula from an LVM snapshot or something?
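A rough, hypothetical outline of that snapshot idea might look like this. The volume group and LV names, the snapshot size, and the dump commands are all assumptions (Zimbra's layout and tools differ between releases), and the commands are only printed, not executed.

```shell
#!/bin/sh
# Hypothetical sketch: dump MySQL and LDAP, then snapshot the LV holding
# /opt/zimbra so Bacula can back up a consistent copy. All names and
# sizes are placeholders; "run" only prints the commands.
VG=vg0
LV=zimbra

run() { echo "would run: $*"; }

run mysqldump --all-databases --single-transaction -r /backup/zimbra-db.sql
run slapcat -l /backup/zimbra-ldap.ldif
run lvcreate -s -L 5G -n "${LV}_snap" "/dev/${VG}/${LV}"
run mount -o ro "/dev/${VG}/${LV}_snap" /mnt/zimbra-snap
# ... point the Bacula FileSet at /mnt/zimbra-snap for the job ...
run umount /mnt/zimbra-snap
run lvremove -f "/dev/${VG}/${LV}_snap"
```

The caveat with any snapshot-only approach is that the mailstore and database are frozen at slightly different instants than the dumps, so a restore may still need Zimbra's own consistency tooling afterwards.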

--
Silver

--
All the data continuously generated in your IT infrastructure 
contains a definitive record of customers, application performance, 
security threats, fraudulent activity, and more. Splunk takes this 
data and makes sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-novd2d



Re: [Bacula-users] Full backup fails after a few days with Fatal error: Network error with FD during Backup: ERR=Interrupted system call

2011-09-26 Thread Marcus Hallberg
Hi!

I ran into this today too.

Does anyone know if it has been implemented?

/marcus 
-- 



/Marcus Hallberg 
Wimlet Consulting AB 
Gamla Varvsgatan 1 
414 59 Göteborg 
Tel: 031-3107000 
Direkt 031-3107010 
e-post: mar...@wimlet.se 
hemsida: www.wimlet.se 

- Original Message -
From: Jeremy Maes j...@schaubroeck.be
To: R. Leigh Hennig rlh1...@gmail.com
Cc: bacula-users@lists.sourceforge.net
Sent: Monday, 26 Sep 2011 16:28:23
Subject: Re: [Bacula-users] Full backup fails after a few days with Fatal error: 
Network error with FD during Backup: ERR=Interrupted system call

On 26/09/2011 16:01, R. Leigh Hennig wrote:
 Morning,

 I have a client that whenever I try to do a full backup, after 6 days, 
 the backup fails with this error:

 Fatal error: Network error with FD during Backup: ERR=Interrupted 
 system call


 In bacula-dir.conf, for that job definition, I have this:

 Full Max Run Time = 1036800

 So it should be able to run for up to 12 days, but after the 6th day, 
 it's stopping. During that time it writes about 4.7 TB (with another 1 
 TB to go). Running CentOS 5.5 with Bacula 5.0.2. Any thoughts?


 Thanks,

Bacula has a hardcoded time limit on jobs of 6 days. Kern called it an 
"insanity check", as any job that runs that long isn't really something 
you'd want ...

See 
http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg20159.html 
for a discussion on the mailing list from the past, and a pointer on 
where to change the time limit in the code if you wish.

Last time this was asked on the list someone pointed to a possible 
configuration option to override the hardcoded limit that should've been 
added by now, but given the 0 responses to that I can't say if it 
actually exists.

Regards,
Jeremy

  DISCLAIMER 
http://www.schaubroeck.be/maildisclaimer.htm

--
All the data continuously generated in your IT infrastructure contains a
definitive record of customers, application performance, security
threats, fraudulent activity and more. Splunk takes this data and makes
sense of it. Business sense. IT sense. Common sense.
http://p.sf.net/sfu/splunk-d2dcopy1



Re: [Bacula-users] Full backup fails after a few days with Fatal error: Network error with FD during Backup: ERR=Interrupted system call

2011-09-26 Thread Marcus Hallberg
Hi!

This is the error message that I got:

26-Sep 17:15 neo-dir JobId 45532: Error: Watchdog sending kill after 518421 secs to thread stalled reading File daemon.

I broke the seconds down into days and it came to 6.000243056. I think it would be too much of a coincidence that Bacula has had a 6-day limit and something else kills it within 21 seconds of that limit.

/marcus
--
/Marcus Hallberg
Wimlet Consulting AB
Gamla Varvsgatan 1
414 59 Göteborg
Tel: 031-3107000
Direkt 031-3107010
e-post: mar...@wimlet.se
hemsida: www.wimlet.se

From: "Steve Costaras" stev...@chaven.com
To: j...@schaubroeck.be, "R. Leigh Hennig" rlh1...@gmail.com
Cc: bacula-users@lists.sourceforge.net
Sent: Monday, 26 Sep 2011 16:45:40
Subject: Re: [Bacula-users] Full backup fails after a few days with "Fatal error: Network error with FD during Backup: ERR=Interrupted system call"

I'm running 5.0.3, don't see this 6-day limit for jobs, and do not have a max run time set in the config files. Pretty much all of my full backup jobs run into the 15-30 day range due to the sheer size of the backup and the constant pause/flushing of the spool. I would think you're running into a different problem (going through a firewall or some other device that is timing out long-running TCP connections).


Re: [Bacula-users] Virtual diff instead of full

2011-08-10 Thread Marcus Hallberg
Hi!

Is there anyone out there who has seen this problem or has any idea of what 
it could be?

I have rewritten all the config parts related to this job, and I have copied and 
pasted from other config files that work. As you can see from earlier 
postings, it selects the correct files needed to do the job, but then it just 
ignores the full backup and starts reading from the diff backup. If I let it 
do the job it finishes happily, says that it worked fine, but the size is that of 
a diff backup.

This problem affects only one backup job; all the others where I use VirtualFull 
are working great. Unfortunately this is my largest full backup, so I would really 
want to make it work. 



/marcus 
-- 



/Marcus Hallberg 
Wimlet Consulting AB 
Gamla Varvsgatan 1 
414 59 Göteborg 
Tel: 031-3107000 
Direkt 031-3107010 
e-post: mar...@wimlet.se 
hemsida: www.wimlet.se 

- Original Message -
From: Marcus Hallberg mar...@wimlet.se
To: bacula-users@lists.sourceforge.net
Sent: Monday, 25 Jul 2011 11:47:10
Subject: Re: [Bacula-users] Virtual diff instead of full

Hi!

Thanks for the tip and sorry for the delay in feedback.

I turned on logging and I have tried to sift through to find out where it goes 
wrong...

I made a new Differential backup and I tried to make a new VirtualFull but with 
the same result.

According to the logs it makes a query containing the four JobIds needed to 
complete a virtual full (42420,43284,43296,43332)

SELECT Path.Path, Filename.Name, Temp.FileIndex, Temp.JobId, LStat, MD5 FROM ( 
SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, 
File.FilenameId AS FilenameId, LStat, MD5 FROM Job, File, ( SELECT 
MAX(JobTDate) AS JobTDate, PathId, FilenameId FROM ( SELECT JobTDate, PathId, 
FilenameId FROM File JOIN Job USING (JobId) WHERE File.JobId IN 
(42420,43284,43296,43332) UNION ALL SELECT JobTDate, PathId, FilenameId FROM 
BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) 
WHERE BaseFiles.JobId IN (42420,43284,43296,43332) ) AS tmp GROUP BY PathId, 
FilenameId ) AS T1 WHERE (Job.JobId IN ( SELECT DISTINCT BaseJobId FROM 
BaseFiles WHERE JobId IN (42420,43284,43296,43332)) OR Job.JobId IN 
(42420,43284,43296,43332)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = 
File.JobId AND T1.PathId = File.PathId AND T1.FilenameId = File.FilenameId ) AS 
Temp JOIN Filename ON (Filename.FilenameId = Temp.FilenameId) JOIN Path ON 
(Path.PathId = Temp.PathId) WHERE FileIndex > 0 ORDER BY Temp.JobId, FileIndex 
ASC;

If I query the database to make sure that these are the four jobs needed, I get 
the right result:

mysql> select Job, Type, Level, ClientId, JobStatus from Job where JobId = 
42420 or JobId = 43284 or JobId = 43296 or JobId = 43332;
+--------------------------------+------+-------+----------+-----------+
| Job                            | Type | Level | ClientId | JobStatus |
+--------------------------------+------+-------+----------+-----------+
| edstrom.2010-09-17_23.55.00_07 | B    | F     |        8 | T         |
| edstrom.2011-07-22_22.42.11_23 | B    | D     |        8 | T         |
| edstrom.2011-07-23_22.04.00_37 | B    | I     |        8 | T         |
| edstrom.2011-07-24_22.04.00_19 | B    | I     |        8 | T         |
+--------------------------------+------+-------+----------+-----------+
4 rows in set (0.00 sec)

After this I do not get another reference to the full job, neither by JobId nor by 
name, but the diff job is referenced both by Id and by name.


I'm not really sure where to go from here, so if anyone has any ideas I am 
willing to try them.


/marcus 
-- 





- Original Message -
From: James Harper james.har...@bendigoit.com.au
To: Marcus Hallberg mar...@wimlet.se, Bacula-users@lists.sourceforge.net
Sent: Tuesday, 12 Jul 2011 13:37:41
Subject: RE: [Bacula-users] Virtual diff instead of full

 Hi!
 
 I have a problem with VirtualFull on one set of backups where it does
 not give me a virtual full but a virtual diff instead.
 
 The last full backup was about 125 GB, and a new estimate says that a
 new full would be about 200 GB.
 
 When I ask it to produce a new VirtualFull it starts reading from the
 last diff and gives me a virtual full file of about 50 GB, which is
 about the size of my diffs.
 
 Does anyone have any pointers? They would be greatly appreciated.
 

If you want to get your hands dirty and like sifting through logfiles
you can turn on mysql logging (assuming you are using mysql) and have a
look at what queries are used to determine the volumes that make up the
virtualfull.

Regular bacula debug logging may help too.

James

--
Storage Efficiency Calculator
This modeling tool is based on patent-pending intellectual property that
has been used successfully in hundreds of IBM storage optimization engage-
ments, worldwide.  Store less, Store more with what you own, Move data to 
the right place. Try It Now! http://www.accelacomm.com/jaw

Re: [Bacula-users] Virtual diff instead of full

2011-07-25 Thread Marcus Hallberg
Hi!

Thanks for the tip and sorry for the delay in feedback.

I turned on logging and I have tried to sift through to find out where it goes 
wrong...

I made a new Differential backup and I tried to make a new VirtualFull but with 
the same result.

According to the logs it makes a query containing the four JobIds needed to 
complete a virtual full (42420,43284,43296,43332)

SELECT Path.Path, Filename.Name, Temp.FileIndex, Temp.JobId, LStat, MD5 FROM ( 
SELECT FileId, Job.JobId AS JobId, FileIndex, File.PathId AS PathId, 
File.FilenameId AS FilenameId, LStat, MD5 FROM Job, File, ( SELECT 
MAX(JobTDate) AS JobTDate, PathId, FilenameId FROM ( SELECT JobTDate, PathId, 
FilenameId FROM File JOIN Job USING (JobId) WHERE File.JobId IN 
(42420,43284,43296,43332) UNION ALL SELECT JobTDate, PathId, FilenameId FROM 
BaseFiles JOIN File USING (FileId) JOIN Job ON (BaseJobId = Job.JobId) 
WHERE BaseFiles.JobId IN (42420,43284,43296,43332) ) AS tmp GROUP BY PathId, 
FilenameId ) AS T1 WHERE (Job.JobId IN ( SELECT DISTINCT BaseJobId FROM 
BaseFiles WHERE JobId IN (42420,43284,43296,43332)) OR Job.JobId IN 
(42420,43284,43296,43332)) AND T1.JobTDate = Job.JobTDate AND Job.JobId = 
File.JobId AND T1.PathId = File.PathId AND T1.FilenameId = File.FilenameId ) AS 
Temp JOIN Filename ON (Filename.FilenameId = Temp.FilenameId) JOIN Path ON 
(Path.PathId = Temp.PathId) WHERE FileIndex > 0 ORDER BY Temp.JobId, FileIndex 
ASC;

If I query the database to make sure that these are the four jobs needed, I get 
the right result:

mysql> select Job, Type, Level, ClientId, JobStatus from Job where JobId = 
42420 or JobId = 43284 or JobId = 43296 or JobId = 43332;
+--------------------------------+------+-------+----------+-----------+
| Job                            | Type | Level | ClientId | JobStatus |
+--------------------------------+------+-------+----------+-----------+
| edstrom.2010-09-17_23.55.00_07 | B    | F     |        8 | T         |
| edstrom.2011-07-22_22.42.11_23 | B    | D     |        8 | T         |
| edstrom.2011-07-23_22.04.00_37 | B    | I     |        8 | T         |
| edstrom.2011-07-24_22.04.00_19 | B    | I     |        8 | T         |
+--------------------------------+------+-------+----------+-----------+
4 rows in set (0.00 sec)

After this I do not get another reference to the full job, neither by JobId nor by 
name, but the diff job is referenced both by Id and by name.


I'm not really sure where to go from here, so if anyone has any ideas I am 
willing to try them.


/marcus 
-- 





- Original Message -
From: James Harper james.har...@bendigoit.com.au
To: Marcus Hallberg mar...@wimlet.se, Bacula-users@lists.sourceforge.net
Sent: Tuesday, 12 Jul 2011 13:37:41
Subject: RE: [Bacula-users] Virtual diff instead of full

 Hi!
 
 I have a problem with VirtualFull on one set of backups where it does
 not give me a virtual full but a virtual diff instead.
 
 The last full backup was about 125 GB, and a new estimate says that a
 new full would be about 200 GB.
 
 When I ask it to produce a new VirtualFull it starts reading from the
 last diff and gives me a virtual full file of about 50 GB, which is
 about the size of my diffs.
 
 Does anyone have any pointers? They would be greatly appreciated.
 

If you want to get your hands dirty and like sifting through logfiles
you can turn on mysql logging (assuming you are using mysql) and have a
look at what queries are used to determine the volumes that make up the
virtualfull.

Regular bacula debug logging may help too.

James



[Bacula-users] Virtual diff instead of full

2011-07-12 Thread Marcus Hallberg
Hi!

I have a problem with VirtualFull on one set of backups where it does not give 
me a virtual full but a virtual diff instead.

The last full backup was about 125 GB, and a new estimate says that a new full 
would be about 200 GB.

When I ask it to produce a new VirtualFull it starts reading from the last diff 
and gives me a virtual full file of about 50 GB, which is about the size of my 
diffs.

Does anyone have any pointers? They would be greatly appreciated.

/marcus
--



/Marcus Hallberg
Wimlet Consulting AB
Gamla Varvsgatan 1
414 59 Göteborg
Tel: 031-3107000
Direkt 031-3107010
e-post: mar...@wimlet.se
hemsida: www.wimlet.se






attachment: g12.png

--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security 
threats, fraudulent activity, and more. Splunk takes this data and makes 
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2d-c2


[Bacula-users] Backups over an unsteady internet connection

2007-08-16 Thread Marcus Hallberg
Hi!

I've been having some issues with Bacula when using a somewhat unsteady 
internet connection. Larger backups (spanning 3-4 days) are frequently 
interrupted before they have time to finish. This seems to be due to brief 
internet downtime out of my control (my ISP is working on the problem).

I don't think these connection interruptions are very long, so it seems 
that Bacula should be able to continue when the connection is 
reestablished. Is there a configuration setting that would help me make 
Bacula less sensitive to these interruptions?

/marcus

-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se


-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now   http://get.splunk.com/


Re: [Bacula-users] bwlimit for FD? Feature request

2007-07-19 Thread Marcus Hallberg
Hi!

I had the same problem as you have some time ago, even if my problems 
came from clients on ADSL with a saturation limit of 0.8 Mbit :)

I agree that this would be a nice feature, but in the meantime I solved 
the problem with traffic shaping on the storage daemon. I know it's no 
fun to throw away perfectly good packets, but it does at least give you 
one place of configuration instead of configuring all your clients.

I have a script that on workday mornings shapes the incoming traffic so 
that it does not saturate the clients' upload speed, and after working 
hours it removes the limit.

I'm using tc, which is part of the iproute2 package.

My limitation can look like this:
tc filter add dev eth1 parent ffff: protocol ip prio 50 u32 match ip src 
xx.xxx.xxx.xx/32 police rate 500kbit burst 3k drop flowid :1

I hope this can help you or someone else.
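A cron-driven wrapper around that rule might look like the following sketch. The interface and rate are taken from the example, the client address stays elided as in the post, `ffff:` is the usual ingress handle (the rule above appears to have lost it in the archive), and the commands are only printed, not executed.

```shell
#!/bin/sh
# Hypothetical wrapper for the shaping rule above. Cron would call
# "shape start" on workday mornings and "shape stop" in the evening.
# DEV/RATE/CLIENT are placeholders; "run" only prints the commands.
DEV=eth1
RATE=500kbit
CLIENT="xx.xxx.xxx.xx/32"    # client address left elided, as in the post

run() { echo "would run: $*"; }

shape() {
  case "$1" in
    start)
      run tc qdisc add dev "$DEV" ingress   # needed once before ingress filters
      run tc filter add dev "$DEV" parent ffff: protocol ip prio 50 u32 \
          match ip src "$CLIENT" police rate "$RATE" burst 3k drop flowid :1 ;;
    stop)
      run tc qdisc del dev "$DEV" ingress ;;  # removing the qdisc drops the filter
  esac
}

shape "${1:-start}"
```

Two crontab lines (say, at 07:00 with `start` and 18:00 with `stop`) then give the workday-only limit described above.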

/marcus

Daniel J. Priem wrote:
 Hi,
 I have about 50 clients to back up and need some bandwidth limiting for the
 clients.
 Description:
 SD connected to 1Gbit behind a firewall (own subnet).
 SD is able to write about 900Mbit.
 FDs are connected to 100Mbit.
 FDs are able to send at more than 100Mbit (about 500Mbit total).

 Currently I run into 2 problems:

 1. If an FD sends data to the SD it will saturate its own link, so it
    won't be able to do other things on the network, and is also not
    accessible for other users on the network.

 2. If I back up multiple FDs at the same time, the link to the SD gets
    saturated, and the network is also unusable on the subnet where the
    SD lives.

 Solutions:
 2. can be solved by not backing up too many clients simultaneously.

 1. can be solved if I put a setting on the firewall for every client:
 FD-net -> SD-net dest bacula-ports bwlimit xxxKb

 Problems:
 2. I would like to back up as many clients as possible simultaneously to
    make the backup time very short.
 1. will put a lot of load onto the firewall and administrative work on
    my shoulders.
 (I've also thought about doing traffic shaping on the SD per client with
 some bridging tools etc...)

 So any suggestions from your side?

 Best would be to have

 Director {
   Name = mydirector
   Password = secret
   BWLimit = 50M
 }
  Or such a setting in the client config part of the director.

  Best Regards, and thanks for any hints in advance

 Daniel
   
 

 -
 This SF.net email is sponsored by DB2 Express
 Download DB2 Express C - the FREE version of DB2 express and take
 control of your XML. No limits. Just data. Click to get it now.
 http://sourceforge.net/powerbar/db2/
 

   


-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se




Re: [Bacula-users] Restore Failed with Segmentation Violation error

2007-02-14 Thread Marcus Hallberg
Hi!

[EMAIL PROTECTED] wrote:
 I successfully backed up my data last night while doing the tape test. I
 did the 3 backup jobs of the same directory, restarted bacula, then backed
 up the same director again. That worked great. I tried to restore the
 files to a tmp location and I get this error message in my logs:

 13-Feb 22:34 maint-dir: Start Restore Job RestoreFiles.2007-02-13_22.34.04
 13-Feb 22:34 maint-dir: Fatal Error because: Bacula interrupted by signal
 11: Segmentation violation

 Whatever happened stopped bacula-dir from running. I don't know what I've
 got configured incorrectly. I'm pretty sure about everything except what the
 FileSet on the restore job should be.

 Thanks,
 Jason


 -
 Take Surveys. Earn Cash. Influence the Future of IT
 Join SourceForge.net's Techsay panel and you'll get the chance to share your
 opinions on IT  business topics through brief surveys-and earn cash
 http://www.techsay.com/default.php?page=join.phpp=sourceforgeCID=DEVDEV
   
Did you try it again? My Bacula version 1.38.11 segfaulted today on the 
restore command, but after a restart of the director it worked fine.

/marcus

-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se




Re: [Bacula-users] Backup of virtual machines

2007-02-14 Thread Marcus Hallberg
Hi!

John Drescher wrote:
 On 2/13/07, Eduardo Júnior [EMAIL PROTECTED] wrote:
   
 Hello,

 I have a problem.

 I'm using Bacula 1.38 to back up the servers where I work.
 But now I'm lost in a new situation: one of our servers was moved
 into a virtual machine, using Xen.

 Even with the correct client address set in both bacula-dir and
 bacula-fd, the backup always fails with the following
 error:

 
 Are you sure that the filedaemon is started on bli and that the
 password for the client is correct?

 John

   
We are using Xen and Bacula, and it is no different from working with 
a real (non-virtual) machine. The virtual machine has its own IP and 
its own file daemon, so do your troubleshooting the same way you would 
for any other machine.
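
In other words, the director just needs an ordinary Client resource pointing at the guest's own address. A sketch (all names, the address, and the password are illustrative, not from this thread):

```conf
# bacula-dir.conf -- a Xen guest is addressed like any physical host.
Client {
  Name = xenguest-fd
  Address = 192.168.1.50    # the VM's own IP
  FDPort = 9102
  Catalog = MyCatalog
  Password = "changeme"     # must match the guest's bacula-fd.conf
  AutoPrune = yes
}
```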

/marcus

-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se




Re: [Bacula-users] Retension of diskbased volumes

2006-12-05 Thread Marcus Hallberg
Thanks, this was the nudge I needed to understand the retention times.
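
For later readers, the pool definitions discussed below need explicit time units (a bare number is interpreted as seconds); a sketch with illustrative names, using the values from the thread:

```conf
Pool {
  Name = Full-Pool             # illustrative
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Maximum Volume Jobs = 1
  Volume Retention = 7 months  # fulls: as in the thread, now with a unit
}

Pool {
  Name = Inc-Pool              # illustrative
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Maximum Volume Jobs = 1
  Volume Retention = 4 months  # incrementals: as in the thread
}
```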

/marcus

Arno Lehmann wrote:
 Hello,

 On 12/4/2006 4:17 PM, Marcus Hallberg wrote:
   
 Hi!

 I just want to see if I have understood this correctly... I am backing
 up to disk and I want to configure my retention times correctly.

 What I do is to take one full backup every 3 months and daily
 incrementals in between.
 

 You should consider running differential backups from time to time.

   
 What I want is to be able to restore data that is three months old and 
 have bacula to recycle my volumes and prune my database. In the example 
 I have added one extra month to be on the safe side.
 

 That's a good idea.

   
 configuration of the poolresource:

 recycle = yes
 autoprune = yes
 maximum volume jobs = 1
 volume retention = 7 (for fullbackups)
 volume retention = 4 (for incremental)
 

 You NEED the time units, because you get what you set up - and without 
 units, that's seconds.

   
 would this recycle my volumes (diskfiles) after the specified time and 
 remove the corresponding jobrecords from the database?
 

 Yes...

   
 Am I missing something or am I good to go?
 

 ... but you should also verify your client related retention times. File 
 Retention and Job Retention are the keywords you should look up. In your 
 situation, I'd expect them to be both seven months long.

 Arno

   
 /marcus

 

   


-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se




[Bacula-users] Retension of diskbased volumes

2006-12-04 Thread Marcus Hallberg
Hi!

I just want to see if I have understood this correctly... I am backing 
up to disk and I want to configure my retention times correctly.

What I do is to take one full backup every 3 months and daily 
incrementals in between.
What I want is to be able to restore data that is three months old and 
have Bacula recycle my volumes and prune my database. In the example 
I have added one extra month to be on the safe side.

configuration of the poolresource:

recycle = yes
autoprune = yes
maximum volume jobs = 1
volume retention = 7 (for fullbackups)
volume retention = 4 (for incremental)

would this recycle my volumes (diskfiles) after the specified time and 
remove the corresponding jobrecords from the database?

Am I missing something or am I good to go?

/marcus

-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se




Re: [Bacula-users] ERR=Operation timed out

2006-11-27 Thread Marcus Hallberg
Hi!

I had the exact same problem. In my case it was a backup over the 
internet, and the server I was backing up was behind a combined 
router/ADSL modem; it was this router that for some reason dropped 
the packets after a certain time (I think mine was 2 hours, 10 minutes 
and 20 seconds). It was a modem that Telia gives to their customers, 
called SurfInBird.

My fix was to remove the router, replace it with a pure ADSL modem, 
and configure the server to act as router and firewall by itself.
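
If replacing the router is not an option, it may also be worth trying Bacula's Heartbeat Interval directive, so the daemons send keepalive packets over an otherwise quiet connection. A sketch (resource names and the 60-second value are illustrative):

```conf
# bacula-sd.conf -- Storage resource
Storage {
  Name = main-sd                # illustrative
  # ... other directives ...
  Heartbeat Interval = 60       # send a keepalive every 60 seconds
}

# bacula-fd.conf -- FileDaemon resource
FileDaemon {
  Name = client-fd              # illustrative
  # ... other directives ...
  Heartbeat Interval = 60
}
```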

/marcus

Dahlgren Mattias wrote:
 Hello everyone.

 I'm trying to set up Bacula to do the backup of the roughly 12 FreeBSD
 webservers we have.

 I got it working on all but 2 servers; on these servers I keep
 continuously getting errors that the operation times out. The strange
 thing is that it seems to ALWAYS occur after almost the exact same
 time on both servers: 2 hours 10 mins 10 secs. The secs
 can vary between 10-14 but it's definitely the same time.

 I've read some other posts here about similar problems but nothing that
 exactly seems to match our issue.

 I have tried setting the heartbeat interval in the SD resource to 15
 seconds, as I saw mentioned in another post, which didn't help. I tried
 setting it in the Client resource as well, as suggested in the Bacula
 manual. However, this causes bacula-dir to refuse to start, saying there
 is a syntax error in the config file and pointing to this exact line
 in the client resource.

 Basically I'm lost and I really need to get this operational; does
 anyone have any ideas? I imagine it could be the network somehow
 timing out, since it's happening after the exact same elapsed time on
 both servers, but I can't think of where to change this timeout.

 Here is a cut from my log file with regards to this issue:

 23-Nov 01:47 -dir: No prior Full backup Job record found.
 23-Nov 01:47 -dir: No prior or suitable Full backup found. Doing
 FULL backup.
 23-Nov 01:47 -dir: Start Backup JobId 1046, Job=.2006-11-23_00.30.01
 23-Nov 01:47 xxx-sd: Volume Full-0002 previously written, moving
 to end of data.
 23-Nov 03:57 -dir: .2006-11-23_00.30.01 Fatal error: Network
 error with FD during Backup: ERR=Operation timed out
 23-Nov 03:57 -dir: obelix.2006-11-23_00.30.01 Fatal error: No Job
 status returned from FD.
 23-Nov 03:57 -dir: obelix.2006-11-23_00.30.01 Error: Bacula
 1.38.11 (28Jun06): 23-Nov-2006 03:57:43
   JobId:  1046
   Job:.2006-11-23_00.30.01
   Backup Level:   Full (upgraded from Incremental)
   Client: -fd i386-portbld-freebsd6.1,freebsd,6.1-STABLE
   FileSet: Full FileSet 2006-11-21 17:28:05
   Pool:   -Full-Pool
   Storage:File3
   Scheduled time: 23-Nov-2006 00:30:00
   Start time: 23-Nov-2006 01:47:33
   End time:   23-Nov-2006 03:57:43
   Elapsed time:   2 hours 10 mins 10 secs
   Priority:   10
   FD Files Written:   0
   SD Files Written:   0
   FD Bytes Written:   0 (0 B)
   SD Bytes Written:   0 (0 B)
   Rate:   0.0 KB/s
   Software Compression:   None
   Volume name(s): Full-0002
   Volume Session Id:  6
   Volume Session Time:1164209750
   Last Volume Bytes:  31,997,951,399 (31.99 GB)
   Non-fatal FD errors:0
   SD Errors:  0
   FD termination status:  Error
   SD termination status:  Error
   Termination:*** Backup Error ***


 Any help would be appreciated.

   


-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se




Re: [Bacula-users] Bacula storage daemon - didn't accept device filestorage command - segmentation fault

2006-11-13 Thread Marcus Hallberg
Hi!

I have come across the same problem... I am also on a Gentoo system, 
running 1.38.9 on amd64.

My Bacula configuration has been working almost perfectly for a long 
time, and today I realized that all of my jobs that should have run 
last night had status Error, with the description:

Storage daemon didn't accept Device computerDir command.
and the SD has died.

I can still restore from the SD. I restored and compared config files 
from before and there is no difference... I have been running 1.38.9 
since July with no significant problems.

The problem occurs every time I try to run a job from the console.

Any input would be greatly appreciated.

/marcus

Claudinei Matos wrote:
 Hi,

 I have a bacula setup on my network with a director, some clients and 
 a storage machine. Everything works fine (except for some troubles 
 with automatic labels), but now I want to add another storage with a 
 DVD recorder.
 Well, I set up the DVD storage like the examples found in the site
 documentation, but whenever I try to run the backup job I get an
 error which says Storage daemon didn't accept Device FileStorage
 command. and then bacula-sd dies with signal 11 (SEGMENTATION FAULT).
 Since even trying some different configurations did not solve the
 problem, I tried to copy the working storage configuration, just
 changing the necessary parameters (like the name), but even with a
 configuration that works on the other storage, I can't get my new
 storage to work.
 I started to think that maybe I have some problem with my
 installation, and since I use Gentoo I completely removed the Bacula
 installation and compiled/installed a new one, but again, the same
 problems.

 Since my configuration files are correct I do not paste them here,
 just the daemon debug messages when I run the job from the director:

 ti01-sd: cram-md5.c:52 send: auth cram-md5 
 [EMAIL PROTECTED] ssl=0
 ti01-sd: cram-md5.c:68 Authenticate OK vF/KDV+sMw8u6z+5y7/vBB
 ti01-sd: cram-md5.c:97 cram-get: auth cram-md5 
 [EMAIL PROTECTED] ssl=0
 ti01-sd: cram-md5.c:114 sending resp to challenge: tW+/jF+BJ9+u/k/Pl7/V6C
 ti01-sd: dircmd.c:187 Message channel init completed.
 ti01-sd: job.c:72 dird: JobId=152 job=teste.2006-10-13_15.49.41 
 job_name=teste client_name=alpha-fd type=66 level=70 FileSet=DoConfigs 
 NoAttr=0 SpoolAttr=0 FileSetMD5=4D+rUSIfDx/Fb+APhzcMEB SpoolData=0 
 WritePartAfterJob=0 PreferMountedVols=1
 ti01-sd: job.c:125 dird: 3000 OK Job SDid=1 SDtime=1160764425 
 Authorization=NFMG-EMJK-LKBI-MMJO-EBOG-LJAG-FGDK-FFLN
 ti01-sd: pythonlib.c:224 No startup module.
 ti01-sd: reserve.c:353 dird: use storage=Ti01Dir media_type=File 
 pool_name=Full-Splited-Pool pool_type=Backup append=1 copy=0 stripe=0
 ti01-sd: reserve.c:376 dird device: use device=FileStorage
 ti01-sd: askdir.c:245 dird: CatReq Job=teste.2006-10-13_15.49.41 
 FindMedia=1 pool_name=Full-Splited-Pool media_type=File
 ti01-sd: reserve.c:146 New Vol=DVD_0002 dev=FileStorage (/tmp)
 ti01-sd: reserve.c:700 JobId=152 looking for Volume=DVD_0002
 ti01-sd: reserve.c:245 free_unused_olume DVD_0002
 Kaboom! bacula-sd, ti01-sd got signal 11. Attempting traceback.
 Kaboom! exepath=/usr/sbin/
 Calling: /usr/sbin/btraceback /usr/sbin/bacula-sd 22181
 bsmtp: bsmtp.c:286 Fatal connect error to localhost: ERR=Connection 
 refused
 Traceback complete, attempting cleanup ...
 ti01-sd: jcr.c:154 write_last_jobs seek to 192
 ti01-sd: stored.c:550 Term device /tmp
 ti01-sd: reserve.c:200 free_volume: no vol on dev FileStorage (/tmp)
 ti01-sd: dev.c:1804 term_dev: FileStorage (/tmp)
 ti01-sd: dev.c:1691 really close_dev FileStorage (/tmp)
 ti01-sd: dvd.c:93 Enter unmount_dev
 ti01-sd: reserve.c:200 free_volume: no vol on dev FileStorage (/tmp)
 Pool   Maxsize  Maxused  Inuse
 NoPool  2566  0
 NAME1309  9
 FNAME   2562  1
 MSG 5124  3
 EMSG   10241  0


 Thanks in any help,


 Claudinei Matos
 

   


-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se



Re: [Bacula-users] Bacula storage daemon - didn't accept device filestorage command - segmentation fault

2006-11-13 Thread Marcus Hallberg
Marcus Hallberg wrote:
 HI!

 I have come across the same problem... I am to on a gentoo system and I 
 am running 1.38.9 on a amd64.

 My bacula configuration has been working almost perfectly for a long 
 time and today  realized that all of my jobs that would have been run 
 last night had status error and the description was:

 Storage daemon didn't accept Device computerDir command.
 and the SD has died.

 I can still restore from the SD. I restored and compared configfiles 
 from before and there is no difference... I have been running 1.38.9 
 since july whith no significant problems.

 The problem arouses every time I try to run a job from console.

 Any input would be greatly apprecciated.

 /marcus

 Claudinei Matos wrote:
   
 Hi,

 I have a bacula setup on my network with a director, some clients and 
 a storage machine. Everything works fine (except for some troubles 
 with automatic labels), but now I want to add another storage with a 
 DVD recorder.
 Well, I did setup the DVD storage like the examples found over the 
 site documentation but always when I do try to run the backup job I 
 get an error that which says Storage daemon didn't accept Device 
 FileStorage command. and then bacula-sd dies with signal 11 
 (SEGMENTATION FAULT)
 Since even trying some different configurations did not solve the 
 problem I'd tried to copy the working storage configuration just 
 changing the necessary parameters (like name, etc) but even with a 
 configuration that works on the other storage, I can't get my new 
 storage to work.
 I'd started to think that maybe I have some problem with my 
 installation and since I do use gentoo I'd completely removed bacula 
 installation and compiled/installed a new one, but again, the same 
 problems.

 Since my configuration files are corrects I do not past they here but 
 just the daemon debug messages when I run the job from director:

 ti01-sd: cram-md5.c:52 send: auth cram-md5 
 [EMAIL PROTECTED] ssl=0
 ti01-sd: cram-md5.c:68 Authenticate OK vF/KDV+sMw8u6z+5y7/vBB
 ti01-sd: cram-md5.c:97 cram-get: auth cram-md5 
 [EMAIL PROTECTED] ssl=0
 ti01-sd: cram-md5.c:114 sending resp to challenge: tW+/jF+BJ9+u/k/Pl7/V6C
 ti01-sd: dircmd.c:187 Message channel init completed.
 ti01-sd: job.c:72 dird: JobId=152 job=teste.2006-10-13_15.49.41 
 job_name=teste client_name=alpha-fd type=66 level=70 FileSet=DoConfigs 
 NoAttr=0 SpoolAttr=0 FileSetMD5=4D+rUSIfDx/Fb+APhzcMEB SpoolData=0 
 WritePartAfterJob=0 PreferMountedVols=1
 ti01-sd: job.c:125 dird: 3000 OK Job SDid=1 SDtime=1160764425 
 Authorization=NFMG-EMJK-LKBI-MMJO-EBOG-LJAG-FGDK-FFLN
 ti01-sd: pythonlib.c:224 No startup module.
 ti01-sd: reserve.c:353 dird: use storage=Ti01Dir media_type=File 
 pool_name=Full-Splited-Pool pool_type=Backup append=1 copy=0 stripe=0
 ti01-sd: reserve.c:376 dird device: use device=FileStorage
 ti01-sd: askdir.c:245 dird: CatReq Job=teste.2006-10-13_15.49.41 
 FindMedia=1 pool_name=Full-Splited-Pool media_type=File
 ti01-sd: reserve.c:146 New Vol=DVD_0002 dev=FileStorage (/tmp)
 ti01-sd: reserve.c:700 JobId=152 looking for Volume=DVD_0002
 ti01-sd: reserve.c:245 free_unused_olume DVD_0002
 Kaboom! bacula-sd, ti01-sd got signal 11. Attempting traceback.
 Kaboom! exepath=/usr/sbin/
 Calling: /usr/sbin/btraceback /usr/sbin/bacula-sd 22181
 bsmtp: bsmtp.c:286 Fatal connect error to localhost: ERR=Connection 
 refused
 Traceback complete, attempting cleanup ...
 ti01-sd: jcr.c:154 write_last_jobs seek to 192
 ti01-sd: stored.c:550 Term device /tmp
 ti01-sd: reserve.c:200 free_volume: no vol on dev FileStorage (/tmp)
 ti01-sd: dev.c:1804 term_dev: FileStorage (/tmp)
 ti01-sd: dev.c:1691 really close_dev FileStorage (/tmp)
 ti01-sd: dvd.c:93 Enter unmount_dev
 ti01-sd: reserve.c:200 free_volume: no vol on dev FileStorage (/tmp)
 Pool   Maxsize  Maxused  Inuse
 NoPool  2566  0
 NAME1309  9
 FNAME   2562  1
 MSG 5124  3
 EMSG   10241  0


 Thanks in any help,


 Claudinei Matos
 

   
 


   
After a few more hours of searching I think I have found the answer. The 
problem arose after I upgraded gcc. I found an answer in the 
bacula-users archive:

http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg10009

Re: [Bacula-users] backup failed

2006-08-30 Thread Marcus Hallberg
Hi!

I'm sorry, but if the full backup fails it will have to start over from 
the beginning. I have a client that also has a lot of data, and it will 
usually fail a couple of times before it succeeds.

Keep the faith :) As long as you send the data over the internet it is 
bound to fail somewhere along the line sometimes.

/marcus

Marco Strullato wrote:
 hi all!

 i have this problem: I'm doing the 1st backup ok about 50GB from a 
 server but I have connection problem with sd. As you can see below I 
 have a connection timeout...

 Can I start another time full backup from where the last failed?


 28-ago 14:33 DirectorServer: BackupArchidoc.2006-08-28_11.42.29 Fatal 
 error: Network error with FD during Backup: ERR=Timeout della connessione
 28-ago 14:33 DirectorServer: BackupArchidoc.2006-08-28_11.42.29 Fatal 
 error: No Job status returned from FD.
 

   


-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se




Re: [Bacula-users] Bacula with multiple storages

2006-08-22 Thread Marcus Hallberg
Hi!

I have a tip for you.

When I started to use Bacula I wrote all my full backups for a specific 
host to one file and all the incrementals to another file. The bad thing 
with this setup is that you cannot remove the data from old backups that 
you no longer wish to keep on disk. So I changed my setup to use 
automatic labeling and creation of new files for every backup, so that 
I don't have to save data forever.

I followed the manual and it now works great. Good luck
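
The setup described above (automatic labeling, one new volume file per job) might look roughly like this; the pool name, Label Format, and retention value are illustrative:

```conf
Pool {
  Name = SomeHost-Full-Pool     # illustrative
  Pool Type = Backup
  Label Format = "Full-"        # Bacula appends a counter, e.g. Full-0001
  Maximum Volume Jobs = 1       # start a fresh volume (disk file) per job
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 3 months   # after this, old disk files can be recycled
}
```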

/marcus

Arno Lehmann wrote:
 Hello,

 On 4/1/2006 5:08 PM, Alexander Nolte wrote:
 Hi everybody!

 We (the Chair for Information and Engineering Management at the
 Ruhr-University Bochum) have been using bacula as our main backup
 system for our Linux servers for about 4 months now, and we are very
 happy with the system.
 Since I am searching for a way to also back up our workstations and
 laptops with bacula, I came across the following problem:

 I don't want to have the server backups and the workstation/laptop
 backups in one file. So I tried to use different labels for them, but
 all that happened was that bacula created a file with the new name
 (e.g. workstation) but still wrote the data to the old file (e.g.
 servers).
 After that I tried to create 2 different storages in one storage
 daemon. It went quite well, but when I did the server backup first in
 the file servers, did the workstation backup later in another file
 (+ a different directory), and then tried to back up the servers
 again, bacula always searched for the end of the file of the
 workstation backup, which it didn't find there, of course. The same
 happened when I created 2 separate storage daemons on one machine.

 So my question is:
 How can I write the backup for the servers in one and the backup for the
 workstations in a different file?

 Use different pools, and assign the jobs to the right pools.

 Happy Easter (I'm off on vacation :-)

 Arno

 We are using bacula version 1.36.3 on a SuSe 9.2 system and we write the
 data to a harddisk on a Windows 2003 server over the cifs.

 Thanks in advance for your help.

 Kindest regards
 Alexander Nolte







-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se




Re: [Bacula-users] Bacula error in Windows client

2006-07-22 Thread Marcus Hallberg
Hi!

I may not have the answer for you, but I might have a plausible explanation.

I do a lot of backups over the internet and some of them are quite large. 
Sometimes the full backups fail a few times, because the connection is 
not maintained for the entire time it takes to complete them. I had a 
full backup which was 62 GB in size, and I had to run it 4 times before 
it completed successfully.

Try making the FileSet really small so it is virtually impossible for 
the backup to fail because of network errors.
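
A minimal test FileSet along those lines might look like this (the name and path are illustrative):

```conf
FileSet {
  Name = "TinyTest"             # illustrative
  Include {
    Options {
      signature = MD5
    }
    File = "C:/some/small/dir"  # keep this to a few MB for the test
  }
}
```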

/marcus

DAve wrote:
 Good afternoon,

 We have had Bacula running several months now and been very happy with 
 it. We just began doing network wide backups of our servers and all is 
 going well.

 This week we began backing up a client machine inside our NOC. In order 
 to do so I have to go outside my PIX and back through the client's 
 firewall. This works fine for one machine inside the client firewall, 
 but not the other. Not really sure what to make of it. I tried searching 
 and found one message with firewall issues but no report of success.

 I believe both sides of the firewall are configured correctly, certainly 
 because one client machine backs up fine. The only difference I can see 
 is that the failing server has a far larger backup file. The file being 
 backed up is the system-state.bkf file created by the ntbackup utility.

 Below is a script of the director session showing the version of the 
 director (1.38.5) and the version of the FD (1.38.10). The final message 
 is the report on the failed backup.

 Not real sure where to look next. Any suggestions are appreciated.

 DAve


 *version
 director-dir Version: 1.38.5 (18 January 2006)
 *st
 Using default Catalog name=DataVault DB=bacula
 Status available for:
   1: Director
   2: Storage
   3: Client
   4: All
 Select daemon type for status (1-4): 3
 The defined Client resources are:
   1: director-fd
   2: web6-fd
   3: allied1-fd
   4: mulberryss1-fd
   5: newnfs-fd
   6: avhost-fd
 Select Client (File daemon) resource (1-6): 4
 Connecting to Client mulberryss1-fd at mail.SOMECLIENTcom:49201

 allied-mulberryss1-fd Version: 1.38.10 (08 June 2006)  VSS Windows 
 Server 2003 M
 VS NT 5.2.3790
 Daemon started 19-Jul-06 16:52, 5 Jobs run since started.

 Terminated Jobs:
   JobId  Level Files Bytes  Status   FinishedName
 ==
  52  Full  1 13,987,252,757 Error19-Jul-06 18:08 
 Allied-mulberrys
 s1
  57  Full  1 13,987,245,632 Error20-Jul-06 12:00 
 Allied-mulberrys
 s1
 
 Running Jobs:
 Director connected at: 20-Jul-06 14:54
 No Jobs running.
 
 *mes
 20-Jul 11:49 director-dir: Allied-mulberryss1.2006-07-20_10.41.26 Fatal 
 error: Network error with FD during Backup: ERR=Connection reset by peer
 20-Jul 11:50 director-dir: Allied-mulberryss1.2006-07-20_10.41.26 Fatal 
 error: No Job status returned from FD.
 20-Jul 11:50 director-dir: Allied-mulberryss1.2006-07-20_10.41.26 Error: 
 Bacula 1.38.5 (18Jan06): 20-Jul-2006 11:50:12
JobId:  57
Job:Allied-mulberryss1.2006-07-20_10.41.26
Backup Level:   Full
Client: mulberryss1-fd Windows Server 2003,MVS,NT 
 5.2.3790
FileSet:Allied-mulberryss1 2006-07-19 16:53:14
Pool:   Daily-Mulberryss1-Pool
Storage:storage1-allied-mulberryss1
Scheduled time: 20-Jul-2006 10:41:24
Start time: 20-Jul-2006 10:41:28
End time:   20-Jul-2006 11:50:12
Priority:   10
FD Files Written:   0
SD Files Written:   0
FD Bytes Written:   0
SD Bytes Written:   0
Rate:   0.0 KB/s
Software Compression:   None
Volume name(s): Daily-mulberryss1-0003
Volume Session Id:  1
Volume Session Time:1153410120
Last Volume Bytes:  12,999,167,664
Non-fatal FD errors:0
SD Errors:  0
FD termination status:  Error
SD termination status:  Running
Termination:*** Backup Error ***

 *

   


-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se




[Bacula-users] From single to multiple volumes!

2006-07-12 Thread Marcus Hallberg
Hi!

I have a question! When I started using Bacula I configured it to write 
the data to one file for full backups and one file for incremental 
backups for each host. Now I want it to always create new files, so that 
I can remove data that is no longer useful.
My question:

Is it possible to just add the directive Maximum Volume Jobs = 1 to 
the pool definition and get the effect that it creates new volumes 
for every coming job, while still being able to restore from the old 
volume?

/marcus

-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se





Re: [Bacula-users] Consistent backup failures with Windows 2000 clients

2006-06-01 Thread Marcus Hallberg
In my case it turned out to be a network problem... It has now finished. 
It turned out that I had missed setting the compression directive, so it 
didn't break right at the end as I thought.

I did get some errors during the backup:
Could not stat C:/some/file ERR=reference (handle) is faulty (the 
error message is translated by me, so it could differ...)

Does anyone know what it means?

Thanks for the help.

/marcus


Marcus Hallberg wrote:

Bill Moran wrote:

  

Marcus Hallberg [EMAIL PROTECTED] wrote:

 



Now the director has reported the job as failed; it took two hours.

The final output from the director is:

26-May 14:06 edstrom-fd: edstrom.2006-05-22_10.07.42 Fatal error: 
c:\cygwin\home\kern\bacula\k\src\win32\filed\../../filed/backup.c:500 
Network send error to SD. ERR=Input/output error
26-May 14:06 edstrom-fd: edstrom.2006-05-22_10.07.42 Error: 
c:\cygwin\home\kern\bacula\k\src\win32\lib\../../lib/bnet.c:393 Write error 
sending len to Storage daemon:localhost:9103: ERR=Input/output error
26-May 16:05 neo-dir: edstrom.2006-05-22_10.07.42 Fatal error: Network error 
with FD during Backup: ERR=Connection reset by peer
   

  

[snip]

 



The Network error with FD during Backup: does that indicate that there 
is a problem for the director to talk to the filedaemon or for the 
filedaemon to talk to the storagedaemon?
   

  

Looks to be between the FD and the SD.  Sure you don't have any firewalls
or other traffic control equipment between the two that's interfering with
the FD reliably contacting the SD?

 



There is no firewall blocking the communication between the Bacula 
daemons, and it worked fine for five days... It has broken twice at 
what could be the same place.

I started a new full backup which should reach this place in a couple 
of days, so if it breaks at the same place I will know that it is not 
network related.

/marcus


-- 
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se





Re: [Bacula-users] Consistent backup failures with Windows 2000 clients

2006-05-28 Thread Marcus Hallberg
Bill Moran wrote:

Marcus Hallberg [EMAIL PROTECTED] wrote:

  

Now the director has reported the job as failed; it took two hours.

The final output from the director is:

26-May 14:06 edstrom-fd: edstrom.2006-05-22_10.07.42 Fatal error: 
c:\cygwin\home\kern\bacula\k\src\win32\filed\../../filed/backup.c:500 Network 
send error to SD. ERR=Input/output error
26-May 14:06 edstrom-fd: edstrom.2006-05-22_10.07.42 Error: 
c:\cygwin\home\kern\bacula\k\src\win32\lib\../../lib/bnet.c:393 Write error 
sending len to Storage daemon:localhost:9103: ERR=Input/output error
26-May 16:05 neo-dir: edstrom.2006-05-22_10.07.42 Fatal error: Network error 
with FD during Backup: ERR=Connection reset by peer



[snip]

  

The "Network error with FD during Backup": does that indicate a problem
with the director talking to the file daemon, or with the file daemon
talking to the storage daemon?



Looks to be between the FD and the SD.  Sure you don't have any firewalls
or other traffic control equipment between the two that's interfering with
the FD reliably contacting the SD?

  

There is no firewall blocking the communication between the Bacula units,
and it worked fine for five days... It has broken twice at what could be
the same place.

I started a new full backup which should reach that point in a couple of
days, so if it breaks at the same place I will know that it is not
network related.

/marcus




Re: [Bacula-users] Consistent backup failures with Windows 2000 clients

2006-05-26 Thread Marcus Hallberg

Hi, I am having the same problem.

In my case it doesn't happen until the end of the backup job. I am trying
to back up 62 GB of data, and at 47 GB of compressed data this occurs,
which makes it hard to replicate (it takes five days to get there...). I
have tried two times. When it fails it is still listed as running in the
director, but through messages I get:


26-May 14:06 edstrom-fd: edstrom.2006-05-22_10.07.42 Fatal error: 
c:\cygwin\home\kern\bacula\k\src\win32\filed\../../filed/backup.c:500 
Network send error to SD . ERR=Input/output error

26-May 14:06 edstrom-fd: edstrom.2006-05-22_10.07.42 Error:
c:\cygwin\home\kern\bacula\k\src\win32\lib\../../lib/bnet.c:393 Write 
error sending len to Storage daemon:localhost:9103: ERR=Input/output error


The client is a Windows Server 2003 machine which I installed three weeks
ago with the latest stable Bacula client. The Director and storage daemon
are both running on Gentoo Linux x86 with Bacula 1.38.5.


Any insight on this problem would be greatly appreciated.

/marcus

Ted Cabeen wrote:


I'm having a lot of trouble getting Windows 2000 clients to backup
with my current bacula setup.  Backups consistently crash with the
following error:
23-May 16:31 cas-fd: CAS.2006-05-23_14.57.24 Fatal error: 
c:\cygwin\home\kern\bacula\k\src\win32\filed\../../filed/backup.c:500 Network 
send error to SD. ERR=Input/output error
23-May 16:31 black-dir: CAS.2006-05-23_14.57.24 Error: Bacula 1.38.9 (02May06): 
23-May-2006 16:31:59

I've tried a number of things to resolve this problem, and nothing has
helped.  First I figured it was some sort of network problem.
However, these machines have a solid connection, and a Linux box in
the same datacenter backs up cleanly every time.  


With full debugging turned on, this is the debug message I get on the FD:
cas-fd: ../../filed/backup.c:506 Send data to SD len=6884
cas-fd: ../../filed/backup.c:506 Send data to SD len=7048
cas-fd: ../../filed/backup.c:506 Send data to SD len=7047
cas-fd: ../../filed/backup.c:111 end blast_data ok=0
cas-fd: ../../filed/job.c:1266 Error in blast_data.
cas-fd: ../../filed/job.c:1334 End FD msg: 2800 End Job TermCode=102 
JobFiles=6386 ReadBytes=15769107088 JobBytes=3128401861 Errors=0

On the server running the Director and Storage Daemon, I get the
following:
black-sd: block.c:430 binbuf=64503 buf_len=64512
black-dir: backup.c:302 FDStatus=f
black-dir: msgchan.c:306 === End msg_thread. use=2
black-dir: backup.c:363 Enter backup_cleanup 69 E

What does the Error in blast_data error mean?

Here's my bacula-fd.conf file:

Director {
 Name = black-dir
 Password = snip
}

FileDaemon {  # this is me
 Name = cas-fd
 FDport = 9102  # where we listen for the director
 WorkingDirectory = /bacula/working
 Pid Directory = /bacula/working
 Heartbeat Interval = 30 seconds
}

Messages {
 Name = Standard
 director = black-dir = all, !skipped
}
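
[Editor's note: the FileDaemon resource above already sets
Heartbeat Interval = 30 seconds; for multi-day jobs through stateful
firewalls or NAT, it can also help to enable the same keepalive on the
storage daemon side, so idle control connections are not silently dropped.
A hedged sketch only; the resource name and paths are assumptions copied
from this thread, not a tested configuration.]

```
# bacula-sd.conf -- sketch, names/paths assumed from the logs in this thread
Storage {
  Name = black-sd
  SDPort = 9103                     # Bacula's default SD port
  WorkingDirectory = /bacula/working
  Pid Directory = /bacula/working
  Heartbeat Interval = 30 seconds   # keepalive so NAT/firewall idle
                                    # timeouts don't kill long jobs
}
```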

--
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se





Re: [Bacula-users] Consistent backup failures with Windows 2000 clients

2006-05-26 Thread Marcus Hallberg

Now the director has reported the job as failed; it took two hours.

The final output from the director is:

26-May 14:06 edstrom-fd: edstrom.2006-05-22_10.07.42 Fatal error: 
c:\cygwin\home\kern\bacula\k\src\win32\filed\../../filed/backup.c:500 Network 
send error to SD. ERR=Input/output error
26-May 14:06 edstrom-fd: edstrom.2006-05-22_10.07.42 Error: 
c:\cygwin\home\kern\bacula\k\src\win32\lib\../../lib/bnet.c:393 Write error 
sending len to Storage daemon:localhost:9103: ERR=Input/output error
26-May 16:05 neo-dir: edstrom.2006-05-22_10.07.42 Fatal error: Network error 
with FD during Backup: ERR=Connection reset by peer
26-May 16:05 neo-dir: edstrom.2006-05-22_10.07.42 Fatal error: No Job status 
returned from FD.
26-May 16:05 neo-dir: edstrom.2006-05-22_10.07.42 Error: Bacula 1.38.5 
(18Jan06): 26-May-2006 16:05:47
 JobId:  378
 Job:edstrom.2006-05-22_10.07.42
 Backup Level:   Full
 Client: edstrom-fd Windows Server 2003,MVS,NT 5.2.3790
 FileSet:Windows Full System edstrom 2006-05-08 13:32:44
 Pool:   edstrom-full
 Storage:edstrom
 Scheduled time: 22-May-2006 10:07:34
 Start time: 22-May-2006 10:07:44
 End time:   26-May-2006 16:05:47
 Priority:   10
 FD Files Written:   0
 SD Files Written:   0
 FD Bytes Written:   0
 SD Bytes Written:   0
 Rate:   0.0 KB/s
 Software Compression:   None
 Volume name(s): edstrom-full
 Volume Session Id:  1
 Volume Session Time:1148284993
 Last Volume Bytes:  46,996,991,306
 Non-fatal FD errors:0
 SD Errors:  0
 FD termination status:  Error
 SD termination status:  Running
 Termination:*** Backup Error ***


The "Network error with FD during Backup": does that indicate a problem
with the director talking to the file daemon, or with the file daemon
talking to the storage daemon?


/marcus


Marcus Hallberg wrote:


Hi, I am having the same problem.

In my case it doesn't happen until the end of the backup job. I am
trying to back up 62 GB of data, and at 47 GB of compressed data this
occurs, which makes it hard to replicate (it takes five days to get
there...). I have tried two times. When it fails it is still listed as
running in the director, but through messages I get:


26-May 14:06 edstrom-fd: edstrom.2006-05-22_10.07.42 Fatal error: 
c:\cygwin\home\kern\bacula\k\src\win32\filed\../../filed/backup.c:500 
Network send error to SD . ERR=Input/output error

26-May 14:06 edstrom-fd: edstrom.2006-05-22_10.07.42 Error:
c:\cygwin\home\kern\bacula\k\src\win32\lib\../../lib/bnet.c:393 Write 
error sending len to Storage daemon:localhost:9103: ERR=Input/output 
error


The client is a Windows Server 2003 machine which I installed three weeks
ago with the latest stable Bacula client. The Director and storage daemon
are both running on Gentoo Linux x86 with Bacula 1.38.5.


Any insight on this problem would be greatly appreciated.

/marcus

Ted Cabeen wrote:


I'm having a lot of trouble getting Windows 2000 clients to backup
with my current bacula setup.  Backups consistently crash with the
following error:
23-May 16:31 cas-fd: CAS.2006-05-23_14.57.24 Fatal error: 
c:\cygwin\home\kern\bacula\k\src\win32\filed\../../filed/backup.c:500 
Network send error to SD. ERR=Input/output error
23-May 16:31 black-dir: CAS.2006-05-23_14.57.24 Error: Bacula 1.38.9 
(02May06): 23-May-2006 16:31:59


I've tried a number of things to resolve this problem, and nothing has
helped.  First I figured it was some sort of network problem.
However, these machines have a solid connection, and a Linux box in
the same datacenter backs up cleanly every time. 
With full debugging turned on, this is the debug message I get on the 
FD:

cas-fd: ../../filed/backup.c:506 Send data to SD len=6884
cas-fd: ../../filed/backup.c:506 Send data to SD len=7048
cas-fd: ../../filed/backup.c:506 Send data to SD len=7047
cas-fd: ../../filed/backup.c:111 end blast_data ok=0
cas-fd: ../../filed/job.c:1266 Error in blast_data.
cas-fd: ../../filed/job.c:1334 End FD msg: 2800 End Job TermCode=102 
JobFiles=6386 ReadBytes=15769107088 JobBytes=3128401861 Errors=0


On the server running the Director and Storage Daemon, I get the
following:
black-sd: block.c:430 binbuf=64503 buf_len=64512
black-dir: backup.c:302 FDStatus=f
black-dir: msgchan.c:306 === End msg_thread. use=2
black-dir: backup.c:363 Enter backup_cleanup 69 E

What does the Error in blast_data error mean?

Here's my bacula-fd.conf file:

Director {
 Name = black-dir
 Password = snip
}

FileDaemon {  # this is me
 Name = cas-fd
 FDport = 9102  # where we listen for the director
 WorkingDirectory = /bacula/working
 Pid Directory = /bacula/working
 Heartbeat Interval = 30 seconds
}

Messages {
 Name = Standard
 director = black-dir = all, !skipped
}

--
/Marcus Hallberg
Wimlet Consulting AB

[Bacula-users] Bandwith limiting!

2006-03-06 Thread Marcus Hallberg

Hi!

I need to control the upload speed on the computers that are being
backed up by Bacula. Preferably I would like to configure it per host, as
some have better upload speed than others.


The question: how do you handle bandwidth limiting? Is there any method
built into Bacula (which I haven't found), or do you set up rate
limiting on the server side?


/marcus

--
/Marcus Hallberg
Wimlet Consulting AB
Djurgårdsgatan 10
414 62 Göteborg
mobil: 0707-141716
e-post: [EMAIL PROTECTED]
hemsida: www.wimlet.se



---
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users