[Bacula-users] Spool block too big

2009-08-31 Thread teksupptom


Martin Simmons wrote:
 
 I think this can only be caused by a bug, so it is probably a good idea to
 upgrade to Bacula 3.0.1 to see if that fixes it.
 
 Are you using simultaneous jobs?  What values does the error give for the
 sizes of the blocks?
 
 __Martin
 



Hi Martin,

We are actually in the process of upgrading to 3.0.1. We do run simultaneous 
jobs, with a limit of 2 since we have two tape drives.

The error we've seen is similar but not identical in every case so far; 
here's an example:

Fatal error: spool.c:396 Spool block too big. Max 64512 bytes, got 4288020370

Sometimes the number is much smaller, but often it is very large, like above.
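
(One observation, offered as a guess rather than a diagnosis: 4288020370 is 
larger than 2^31, and interpreted as a signed 32-bit integer it would be 
4288020370 - 4294967296 = -6946926. A value like that looks more like a 
corrupted or uninitialized length field than a real block size, which would 
fit the memory-corruption and 32/64-bit theories raised elsewhere in this 
thread.)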

I'm hoping it disappears with Bacula 3.0.1 :)

Thanks,
Tom






Re: [Bacula-users] Spool block too big

2009-07-17 Thread Martin Simmons
 On Thu, 16 Jul 2009 16:28:28 -0400, teksupptom said:
 
 Hello,
 
 We've been intermittently having an issue with backups failing due to the
 error "Spool block too big". It's happened exactly 10 times since
 4/27/09. It generally happens during large backups (900GB+).
 
 The most recent error happened after the data had been spooled, and was
 being written to tape. These usually occur overnight so I don't always get
 to see what's going on, but this one happened during my normal shift. Prior
 to it happening I had noticed that the SD status showed the correct data
 spool file size, but showed 0 bytes for the attribute spool size. I double
 checked the directory we use to store the attribute spool file (same
 directory where the mail files are kept, but different from where we spool
 the data), and there was a 6GB+ attribute spool file for the job.
 
 Not sure if this is what other people are seeing when this error occurs, but
 I'm hoping it can help in tracking down the source.
 
 We're running Bacula 2.4.4 using PostgreSQL 7.4.19.

I think this can only be caused by a bug, so it is probably a good idea to
upgrade to Bacula 3.0.1 to see if that fixes it.

Are you using simultaneous jobs?  What values does the error give for the
sizes of the blocks?

__Martin



[Bacula-users] Spool block too big

2009-07-16 Thread teksupptom

Hello,

We've been intermittently having an issue with backups failing due to the 
error "Spool block too big". It's happened exactly 10 times since 4/27/09. It 
generally happens during large backups (900GB+).

The most recent error happened after the data had been spooled, and was being 
written to tape. These usually occur overnight so I don't always get to see 
what's going on, but this one happened during my normal shift. Prior to it 
happening I had noticed that the SD status showed the correct data spool file 
size, but showed 0 bytes for the attribute spool size. I double checked the 
directory we use to store the attribute spool file (same directory where the 
mail files are kept, but different from where we spool the data), and there was 
a 6GB+ attribute spool file for the job.

Not sure if this is what other people are seeing when this error occurs, but 
I'm hoping it can help in tracking down the source.

We're running Bacula 2.4.4 using PostgreSQL 7.4.19.






Re: [Bacula-users] Spool block too big

2009-07-16 Thread John Drescher
On Thu, Jul 16, 2009 at 4:28 PM, teksupptom
(bacula-fo...@backupcentral.com) wrote:

 Hello,

 We've been intermittently having an issue with backups failing due to the 
 error "Spool block too big". It's happened exactly 10 times since 4/27/09. It 
 generally happens during large backups (900GB+).

 The most recent error happened after the data had been spooled, and was being 
 written to tape. These usually occur overnight so I don't always get to see 
 what's going on, but this one happened during my normal shift. Prior to it 
 happening I had noticed that the SD status showed the correct data spool file 
 size, but showed 0 bytes for the attribute spool size. I double checked the 
 directory we use to store the attribute spool file (same directory where the 
 mail files are kept, but different from where we spool the data), and there 
 was a 6GB+ attribute spool file for the job.

 Not sure if this is what other people are seeing when this error occurs, but 
 I'm hoping it can help in tracking down the source.


Have you tried just limiting the spool size to something smaller? I use a
5 to 10GB spool even for 3TB jobs.
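
For anyone wanting to try this, a minimal sketch of where the limit goes, 
assuming the stock directive names; the device name, paths, and sizes below 
are illustrative, not taken from the original poster's config:

bacula-sd.conf:

Device {
  Name = TapeDrive-1                  # illustrative device name
  Archive Device = /dev/nst0          # illustrative tape device
  Spool Directory = /var/spool/bacula
  # Cap the total spool space this device may use
  Maximum Spool Size = 10G
  # Optionally also cap each individual job's spool file
  Maximum Job Spool Size = 5G
}

With a small spool, the SD simply despools to tape more often instead of 
accumulating one huge spool file per job.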

John



Re: [Bacula-users] Spool block too big

2006-12-27 Thread Kern Sibbald
On Wednesday 27 December 2006 01:50, Brian Minard wrote:
 Hi Kern,
 
 Any additional thoughts on what might be the problem?

Sorry, but I have absolutely no idea. The only thing that makes any sense is 
that your spool files are getting corrupted, or that, for some reason, Bacula 
is getting confused about the block size, but I don't see why that would 
happen. Many people, including myself, run multiple simultaneous jobs that 
use spooling.

There are three other possible sources of problems:

1. It could be a 32/64 bit problem if you are running a 64 bit SD.
2. If you give more than one daemon the same name in the Name=xxx directive, 
   you will get this kind of behavior.
3. If you are running multiple SDs that all use the same spool directory, 
   this could happen (see the sketch below for what 2 and 3 look like in 
   practice).
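
A minimal sketch of points 2 and 3; all names and paths here are 
illustrative:

First storage daemon's bacula-sd.conf:

Storage {
  Name = sd-one                      # must be unique across all daemons
}

Device {
  Name = Drive-1
  Spool Directory = /spool/sd-one    # must not be shared with another SD
}

Second storage daemon's bacula-sd.conf:

Storage {
  Name = sd-two
}

Device {
  Name = Drive-2
  Spool Directory = /spool/sd-two
}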

 
 Thanks,
 Brian
 
 On 24-Dec-06, at 9:10 AM, Brian Minard wrote:
 
 
  On 24-Dec-06, at 6:06 AM, Kern Sibbald wrote:
 
  Have you modified the default network buffer size?
 
  All clients have an MTU of 1500.
 
  Using Maximum Network Buffer Size? No.
 
  Checked out the suggestions on the mailing list
  (http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg01015.html):
 
  1/  only one storage daemon.
  2/ lots of space on the disk. No chance that it filled when this
  problem occurred.
  3/ no.
  4/ don't think so.
  5/ don't know, but unlikely.
  6/  no complaints.
 
  Unfortunately, these make no sense since you did not include the context
  (I have no idea what 1/, 2/, ... are).
 
  Refers to the suggestions at
  http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg01015.html.
 
 
 
 
 --
 Brian Minard
 [EMAIL PROTECTED]
 
 
 
 



Re: [Bacula-users] Spool block too big

2006-12-24 Thread Kern Sibbald
On Sunday 24 December 2006 02:32, Brian Minard wrote:
 Hello,
 
 I keep running into a "Spool block too big" error. I am running
 FreeBSD 6.2-PRERELEASE #1: Sat Oct 28 16:07:28 EDT 2006 with the
 bacula port 1.38.11_3. I have run all 9 of the tape testing steps, and
 the problem has never appeared when jobs are not run concurrently.
 Messages from the storage daemon are:
 
 23-Dec 16:58 client1-sd: Committing spooled data to Volume  
 A007. Despooling 6,615,803,396 bytes ...
 23-Dec 16:58 client1-sd: client2-backup.2006-12-23_15.24.47 Fatal  
 error: spool.c:320 Spool block too big. Max 64512 bytes, got 569964745
 23-Dec 16:58 client2-fd: client2-backup.2006-12-23_15.24.47 Fatal
 error: job.c:1617 Bad response to Append End command. Wanted "3000 OK
 end", got [truncated in the logs]
 23-Dec 16:58 client1-dir: client2-backup.2006-12-23_15.24.47 Error:  
 Bacula 1.38.11 (28Jun06): 23-Dec-2006 16:58:55

Have you modified the default network buffer size?
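
For anyone checking the same thing: the directive in question is presumably 
Maximum Network Buffer Size, which can be set in the File daemon and in the 
Storage daemon's Device resource; the resource below is illustrative, and 
65536 is the documented default rather than a recommendation:

bacula-fd.conf:

FileDaemon {
  Name = client2-fd
  # Default is 65536 bytes; if you change it on one daemon,
  # check the corresponding setting on the other side as well
  Maximum Network Buffer Size = 65536
}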

 
 bacula-dir.conf:
 
 Director {
Name = client1-dir
DIRport = 9101
DIRAddress = 10.10.10.12
QueryFile = /usr/local/share/bacula/query.sql
WorkingDirectory = /var/db/bacula
PidDirectory = /var/run
Maximum Concurrent Jobs = 5
Password = password
Messages = Daemon
 }
 
 JobDefs {
Name = WeeklyCycle
Maximum Concurrent Jobs = 5
Type = Backup
Pool = Default
Storage = Exabyte
Messages = Standard
Max Start Delay = 22h
SpoolData = yes
Schedule = WeeklyCycle
FileSet = Full Set
Priority = 1
 }
 
 Job {
JobDefs = WeeklyCycle
Name = client2-backup
Client = client2-fd
Write Bootstrap = client2.bsr
 }
 
 Client {
Name = client2-fd
Address = client2
FDPort = 9102
Catalog = Catalog
Password = password1
Maximum Concurrent Jobs = 5
 }
 
 There are 5 clients with basically the same definition. Random spool 
 failures occur on one or two of them from time to time, always during 
 full backups.
 
 Checked out the suggestions on the mailing list
 (http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg01015.html):
 
 1/  only one storage daemon.
 2/ lots of space on the disk. No chance that it filled when this
 problem occurred.
 3/ no.
 4/ don't think so.
 5/ don't know, but unlikely.
 6/  no complaints.

Unfortunately, these make no sense since you did not include the context (I 
have no idea what 1/, 2/, ... are).

 
 TIA,
 Brian
 
 
 
 



[Bacula-users] Spool block too big

2006-12-23 Thread Brian Minard

Hello,

I keep running into a "Spool block too big" error. I am running
FreeBSD 6.2-PRERELEASE #1: Sat Oct 28 16:07:28 EDT 2006 with the
bacula port 1.38.11_3. I have run all 9 of the tape testing steps, and
the problem has never appeared when jobs are not run concurrently.
Messages from the storage daemon are:


23-Dec 16:58 client1-sd: Committing spooled data to Volume  
A007. Despooling 6,615,803,396 bytes ...
23-Dec 16:58 client1-sd: client2-backup.2006-12-23_15.24.47 Fatal  
error: spool.c:320 Spool block too big. Max 64512 bytes, got 569964745
23-Dec 16:58 client2-fd: client2-backup.2006-12-23_15.24.47 Fatal
error: job.c:1617 Bad response to Append End command. Wanted "3000 OK
end", got [truncated in the logs]
23-Dec 16:58 client1-dir: client2-backup.2006-12-23_15.24.47 Error:  
Bacula 1.38.11 (28Jun06): 23-Dec-2006 16:58:55


bacula-dir.conf:

Director {
  Name = client1-dir
  DIRport = 9101
  DIRAddress = 10.10.10.12
  QueryFile = /usr/local/share/bacula/query.sql
  WorkingDirectory = /var/db/bacula
  PidDirectory = /var/run
  Maximum Concurrent Jobs = 5
  Password = password
  Messages = Daemon
}

JobDefs {
  Name = WeeklyCycle
  Maximum Concurrent Jobs = 5
  Type = Backup
  Pool = Default
  Storage = Exabyte
  Messages = Standard
  Max Start Delay = 22h
  SpoolData = yes
  Schedule = WeeklyCycle
  FileSet = Full Set
  Priority = 1
}

Job {
  JobDefs = WeeklyCycle
  Name = client2-backup
  Client = client2-fd
  Write Bootstrap = client2.bsr
}

Client {
  Name = client2-fd
  Address = client2
  FDPort = 9102
  Catalog = Catalog
  Password = password1
  Maximum Concurrent Jobs = 5
}

There are 5 clients with basically the same definition. Random spool 
failures occur on one or two of them from time to time, always during 
full backups.


Checked out the suggestions on the mailing list
(http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg01015.html):


1/ only one storage daemon.
2/ lots of space on the disk. No chance that it filled when this
problem occurred.
3/ no.
4/ don't think so.
5/ don't know, but unlikely.
6/ no complaints.

TIA,
Brian





Re: [Bacula-users] Spool block too big

2005-05-03 Thread Kern Sibbald
On Tuesday 03 May 2005 03:51, Jeffery P. Humes wrote:
 Any ideas why I would get this error?



 02-May 20:33 kninfratemp-sd: mycastleapp01.2005-05-01_01.05.01 Fatal
 error: spool.c:315 Spool block too big. Max 64512 bytes, got 909259313

 This error seems to happen during full backups.

Your spool file got clobbered. 

Possible reasons (hard to be specific when you didn't supply any basic info):

1. You are running two Storage daemons and pointing them to the same working 
directory.

2. The partition on which your spool file resides has filled, and your OS (not 
specified) doesn't return the correct error code during writing.

3. Failing hard disk.

4. Some other process writing into the spool file.

5. Some strange bug in Bacula with multiple jobs or improper Device 
specification.

6. A 32/64 bit configuration problem with Bacula on your OS
   -- very bad if this is true.
   Try setting your maximum spool file size to 1.5GB (not a really good
   solution; see the sketch below). Run btape: if it complains on startup
   about a 32/64 bit off_t, you have problems that you should resolve in
   the Bacula build.
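
A minimal sketch of both suggestions in 6, assuming the stock directive name 
and an illustrative config path, device name, and tape device:

bacula-sd.conf, in the Device resource:

Device {
  Name = TapeDrive-1
  Archive Device = /dev/nst0
  # Keep each spool file safely under the signed 32-bit 2GB boundary
  Maximum Spool Size = 1500M
}

Then run btape against the same configuration and device and watch its 
startup output for the 32/64 bit off_t complaint:

btape -c /etc/bacula/bacula-sd.conf TapeDrive-1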

-- 
Best regards,

Kern

  (
  /\
  V_V




[Bacula-users] Spool block too big

2005-05-02 Thread Jeffery P. Humes




Any ideas why I would get this error?



02-May 20:33 kninfratemp-sd: mycastleapp01.2005-05-01_01.05.01 Fatal
error: spool.c:315 Spool block too big. Max 64512 bytes, got 909259313

This error seems to happen during full backups.

Thanks in advance.

-Jeff Humes