Re: [Bacula-users] Data spooling for migration jobs?

2011-12-01 Thread Pierre Bernhardt
Am 01.12.2011 08:25, schrieb James Harper:
>>
>> Is it possible to have a Spool Data directive for migration jobs?
>>
> I don't think you can do that. You could migrate to fast disk first and then
> to the fast tape. It's not as efficient as spooling, but if you have lots of
> jobs it wouldn't be that bad.
All is fine. I checked, and data spooling is possible in migration jobs.
That's great and will save my tape drive from frequent rewinds.
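For anyone finding this in the archives, a minimal sketch of what such a
migration job with spooling might look like (resource names are placeholders,
not taken from my actual config):

Job {
  Name = "migrate-disk-to-tape"
  Type = Migrate
  Client = bacula-fd
  FileSet = "Full Set"
  Messages = Standard
  Pool = DiskPool               # source pool; its "Next Pool" points at the tape pool
  Selection Type = Volume
  Selection Pattern = ".*"
  Spool Data = yes              # spool to disk before writing to the tape drive
}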

Cheers...
Pierre Bernhardt





Re: [Bacula-users] tuning lto-4

2011-12-01 Thread gary artim
Got close to 120 MB/s using a 64 KB block size and a 20 GB maximum file size
with btape... now to test with real data... gary

===
block size set with mt and in bacula-sd.conf to 65536
===
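For reference, this is roughly how the block size was set for these runs (the
exact invocation and device path may differ on your system):

# drive set to fixed 64 KB blocks (mt-st package)
mt -f /dev/nst0 setblk 65536

# matching directives in the bacula-sd.conf Device resource
Minimum Block Size = 65536
Maximum Block Size = 65536
Maximum File Size = 20G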

[root@genepi1 bacula]# btape -c /etc/bacula/bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:284 Using device: "/dev/nst0" for writing.
01-Dec 13:36 btape JobId 0: 3301 Issuing autochanger "loaded? drive 0" command.
01-Dec 13:36 btape JobId 0: 3302 Autochanger "loaded? drive 0", result
is Slot 12.
btape: btape.c:476 open device "LTO-4" (/dev/nst0): OK
*speed file_size=20 nb_file=10 skip_raw
btape: btape.c:1082 Test with zero data and bacula block structure.
btape: btape.c:960 Begin writing 10 files of 21.47 GB with blocks of
65536 bytes.
+.
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.2 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.8 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.8 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.8 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.8 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.8 MB/s




===
block size set with mt and in bacula-sd.conf to 32768
===
[root@genepi1 bacula]# btape -c /etc/bacula/bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:284 Using device: "/dev/nst0" for writing.
01-Dec 13:12 btape JobId 0: 3301 Issuing autochanger "loaded? drive 0" command.
01-Dec 13:12 btape JobId 0: 3302 Autochanger "loaded? drive 0", result
is Slot 12.
btape: btape.c:476 open device "LTO-4" (/dev/nst0): OK
*speed file_size=20 nb_file=10 skip_raw
btape: btape.c:1082 Test with zero data and bacula block structure.
btape: btape.c:960 Begin writing 10 files of 21.47 GB with blocks of
32768 bytes.
+.
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 94.60 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 95.44 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 95.44 MB/s
+.
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 95.02 MB/s

On Wed, Nov 30, 2011 at 8:02 AM, gary artim  wrote:
> Hi --
>
> Getting about 41.6 MB/s and hoping for closer to the max (120 MB/s). I
> tried maximum file sizes of 5, 8, and 12 GB -- 12 GB was the best; the others
> were about 35 MB/s. Any advice welcomed... should I look at max/min
> block sizes?
> Most of the data is big genetics data -- file sizes average in the 500 MB
> to 3-4 GB range -- looking at growth from 4 TB to 15 TB in the next 2 years.
>
> run results and bacula-sd.conf and bacula-dir.conf below...
>
> thanks
> -- gary
>
> Run:
> ===
>
>  Build OS:               x86_64-redhat-linux-gnu redhat
>  JobId:                  5
>  Job:                    Prodbackup.2011-11-29_19.32.42_05
>  Backup Level:           Full
>  Client:                 "bacula-fd" 5.0.3 (04Aug10) x86_64-redhat-linux-gnu,redhat
>  FileSet:                "FileSetProd" 2011-11-29 19:32:42
>  Pool:                   "FullProd" (From Job FullPool override)
>  Catalog:                "MyCatalog" (From Client resource)
>  Storage:                "LTO-4" (From Job resource)
>  Scheduled time:         29-Nov-2011 19:32:26
>  Start time:             29-Nov-2011 19:32:45
>  End time:               29-Nov-2011 21:15:53
>  Elapsed time:           1 hour 43 mins 8 secs
>  Priority:               10
>  FD Files Written:       35,588
>  SD Files Written:       35,588
>  FD Bytes Written:       257,543,090,368 (257.5 GB)
>  SD Bytes Written:       257,548,502,159 (257.5 GB)
>  Rate:                   41619.8 KB/s
>  Software Compression:   None
>  VSS:                    no
>  Encryption:             no
>  Accurate:               no
>  Volume name(s):         f03
>  Volume Session Id:      1
>  Volume Session Time:    1322622337
>  Last Volume Bytes:      257,740,342,272 (257.7 GB)
>  Non-fatal FD errors:    0
>  SD Errors:              0
>  FD termination status:  OK
>  SD termination status:  OK
>  Termination:            Backup OK
>
> bacula-sd.conf:
> ==
>
> Autochanger {
>  Name = Autochanger
>  Device = LTO-4
>  Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
>  Changer Device = /dev/changer
> }
> Device {

Re: [Bacula-users] tuning lto-4

2011-12-01 Thread gary artim
btape is getting 89 MB/s, so maybe my disk and SQL updating are affecting
the speed? Note the drive reports a 16384-byte block size; I ran tapeinfo on
the drive... gary

[root@genepi1 bacula]# btape -c /etc/bacula/bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:284 Using device: "/dev/nst0" for writing.
01-Dec 12:29 btape JobId 0: 3301 Issuing autochanger "loaded? drive 0" command.
01-Dec 12:29 btape JobId 0: 3302 Autochanger "loaded? drive 0", result
is Slot 12.
btape: btape.c:476 open device "LTO-4" (/dev/nst0): OK
*speed file_size=3 skip_raw
btape: btape.c:1082 Test with zero data and bacula block structure.
btape: btape.c:960 Begin writing 3 files of 3.221 GB with blocks of
2097152 bytes.
+++4
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=3.221 GB. Write rate = 89.47 MB/s
+++
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=3.221 GB. Write rate = 89.47 MB/s
+++
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=3.221 GB. Write rate = 89.47 MB/s
btape: btape.c:384 Total Volume bytes=9.663 GB. Total Write rate = 89.47 MB/s

btape: btape.c:1094 Test with random data, should give the minimum throughput.
btape: btape.c:960 Begin writing 3 files of 3.221 GB with blocks of
2097152 bytes.
+++
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=3.221 GB. Write rate = 16.02 MB/s
+++
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=3.221 GB. Write rate = 33.90 MB/s
+++
btape: btape.c:608 Wrote 1 EOF to "LTO-4" (/dev/nst0)
btape: btape.c:410 Volume bytes=3.221 GB. Write rate = 44.12 MB/s
btape: btape.c:384 Total Volume bytes=9.663 GB. Total Write rate = 26.18 MB/s


[root@genepi1 bacula]# tapeinfo
Usage: tapeinfo -f 
[root@genepi1 bacula]# tapeinfo -f /dev/changer
Product Type: Medium Changer
Vendor ID: 'OVERLAND'
Product ID: 'NEO Series  '
Revision: '0504'
Attached Changer API: No
SerialNumber: '2B8145'
SCSI ID: 1
SCSI LUN: 1
Ready: yes
[root@genepi1 bacula]# tapeinfo -f /dev/nst0
Product Type: Tape Drive
Vendor ID: 'HP  '
Product ID: 'Ultrium 4-SCSI  '
Revision: 'B12H'
Attached Changer API: No
SerialNumber: 'HU17450M8L'
MinBlock: 1
MaxBlock: 16777215
SCSI ID: 1
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: Not Loaded
Density Code: 0x46
BlockSize: 16384
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
BOP: yes
Block Position: 0
Partition 0 Remaining Kbytes: 799204
Partition 0 Size in Kbytes: 799204
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 0
MaxPartitions: 0



On Thu, Dec 1, 2011 at 10:49 AM,   wrote:
> In the message dated: Thu, 01 Dec 2011 16:27:33 GMT,
> The pithy ruminations from Alan Brown on
>  were:
> => gary artim wrote:
> => > You guys/gals are great, very responsive! I did try
> => > spooling/despooling and my run times shot up.
> =>
> => They will - you're copying everything twice (disk to disk to tape), but
> => this is the only way to achieve fast despooling speeds - if you don't do
> => this then your LTO drive will start to "shoe shine" and speeds drop off
> => rapidly when it happens.
>
> And you increase wear & tear on the drive and media.
>
> =>
> => The trick is to run multiple jobs at once - you have to spool to achieve
> => this anyway or extracting will be a nightmare.
> =>
> => Spooling is a net gain when you're running incrementals.
> =>
>
> Not necessarily. Spooling is a gain if you are measuring the speed
> of writing to tape. Spooling may be a net loss for end-to-end (client
> machine-->spool server-->tape drive) speed.
>
> For backups clients where the total volume being backed up is less than
> the spool size, then there's a very good chance of a performance gain. As
> soon as a job requires multiple rounds of spooling and de-spooling,
> there's a good chance of a performance loss because bacula stops reading
> from the client machine (stops spooling that job) as soon as despooling
> begins. Of course, spooling allows you to run multiple jobs in parallel, a
> clear win over running them in series.
>
>
> See:
>
>        [1] http://copilotco.com/mail-archives/bacula-devel.2007/msg02642.html
>        [2] 
> http://www.bacula.org/git/cgit.cgi/bacula/plain/bacula/projects?h=Branch-5.1
>
>        [3] 
> http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg49366.html
>
>
> => Spooling MUST happen on a fast dedicated drive. You're best off dropping
> => in a fast SSD such as a 64/128Gb OCZ vertex3 or similar to handle it.
>
> Hmm...for LTO4 (large spool files are good), you might want more space
> than that, particularly if you have multiple clients (multiple spool
> files). A more cost-effective option might be several fast drives (10K
> or 15K SAS or SCSI) in RAID-0. It doesn't take very many drives in RAID0 to
> have an aggregate drive throughput that is greater than the bus interface.
>
> =>
> => > I was using 

Re: [Bacula-users] tuning lto-4

2011-12-01 Thread mark . bergman
In the message dated: Thu, 01 Dec 2011 16:27:33 GMT,
The pithy ruminations from Alan Brown on 
 were:
=> gary artim wrote:
=> > You guys/gals are great, very responsive! I did try
=> > spooling/despooling and my run times shot up.
=> 
=> They will - you're copying everything twice (disk to disk to tape), but 
=> this is the only way to achieve fast despooling speeds - if you don't do 
=> this then your LTO drive will start to "shoe shine" and speeds drop off 
=> rapidly when it happens.

And you increase wear & tear on the drive and media.

=> 
=> The trick is to run multiple jobs at once - you have to spool to achieve 
=> this anyway or extracting will be a nightmare.
=> 
=> Spooling is a net gain when you're running incrementals.
=> 

Not necessarily. Spooling is a gain if you are measuring the speed
of writing to tape. Spooling may be a net loss for end-to-end (client
machine-->spool server-->tape drive) speed.

For backup clients where the total volume being backed up is less than
the spool size, there's a very good chance of a performance gain. As
soon as a job requires multiple rounds of spooling and de-spooling,
there's a good chance of a performance loss, because bacula stops reading
from the client machine (stops spooling that job) as soon as despooling
begins. Of course, spooling allows you to run multiple jobs in parallel, a
clear win over running them in series.
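For reference, these are the spooling knobs involved (the SD-side directives
also appear, commented out, in Gary's bacula-sd.conf elsewhere in this thread;
the values below are only examples):

# bacula-sd.conf, Device resource (sketch)
Spool Directory = /db/bacula/spool/LTO4
Maximum Spool Size = 200G
Maximum Job Spool Size = 150G

# bacula-dir.conf, Job or JobDefs resource (sketch)
Spool Data = yes
Spool Size = 1G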


See:

[1] http://copilotco.com/mail-archives/bacula-devel.2007/msg02642.html
[2] 
http://www.bacula.org/git/cgit.cgi/bacula/plain/bacula/projects?h=Branch-5.1

[3] 
http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg49366.html


=> Spooling MUST happen on a fast dedicated drive. You're best off dropping 
=> in a fast SSD such as a 64/128Gb OCZ vertex3 or similar to handle it.

Hmm...for LTO4 (large spool files are good), you might want more space
than that, particularly if you have multiple clients (multiple spool
files). A more cost-effective option might be several fast drives (10K
or 15K SAS or SCSI) in RAID-0. It doesn't take very many drives in RAID-0 to
reach an aggregate throughput that is greater than that of the bus interface.
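A rough sketch of that kind of setup on Linux (device names and the mount
point are placeholders, and any data on the member disks is destroyed):

mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.xfs /dev/md0
mount /dev/md0 /srv/bacula-spool

Then point the SD's Spool Directory at that mount point.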

=> 
=> > I was using a simple
=> > 7200 drive though, no ssd or raid...I assume the performance gain

Yeah, the sustained read speed from a 7.2k RPM drive is lower than the
possible write speed to an LTO-4 drive:


http://www.seagate.com/www/en-us/support/before_you_buy/speed_considerations

=> > happens when your networks multi machines...wearing multiple hats so
=> > will report back on btape next week, unless I get some time. gary
=> 
=> Even on a single host, if the heads are thrashing then spooling will 
=> save time overall. The big advantage is being able to run multiple jobs 
=> so that several are spooling data at the same time one is despooling.

Absolutely. Spooling is a big win for multiple jobs, and for reducing
wear & tear on the tape drive. It may or may not give a performance increase for
any single backup job.

Mark




Re: [Bacula-users] feature request: exempt administrative connections from concurrency limits

2011-12-01 Thread mark . bergman
In the message dated: Thu, 01 Dec 2011 09:48:59 +0100,
The pithy ruminations from Bruno Friedmann on 
 were:
=> On 12/01/2011 12:20 AM, mark.berg...@uphs.upenn.edu wrote:
=> > Item 1:   Administrative connections to the file daemon should not count in the concurrency limit
=> >   Origin: Mark Bergman 
=> >   Date:   Wed Nov 30 18:03:20 EST 2011
=> >   Status:
=> > 
=> >   What:   
=> > Administrative connections to the file daemon should not count
=> > in the concurrency limit.
=> > 
=> > These connections to the file daemon (ie., "stat dir" or "cancel
=> > dir") are treated as if they were backup connections. This means
=> > that these commands will be refused if the maximum number of
=> > concurrent jobs are running.
=> > 

[SNIP!]

=> 
=> Mark I saw a limit to your request :
=> What about restore ? How would you manage/count them

Hmmm I'd suggest not changing the way that restores are managed.

=> 
=> Example : You have a very long running backup
=> Then a client need very quickly a restore (from another job)
=> If you have limited connexions you will be not able to execute the restore.
=> 

There's always a limited number of connections, whether that limit is
1 or the default of 20. If you have "N" jobs running (where "N" is the
limit) then the next job will not run. For regular jobs (ie., backup or
restore) this isn't a big problem, as the job will be queued (depending
on things like "Max Start Delay"). Administrative commands ("stat dir",
"cancel jobid") just hang if they can't connect to the file daemon...and
it wouldn't make sense to queue those commands until running jobs
finish. Administrative commands should be executed as soon as possible.
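For context, the limit in question is the one set in the client's
bacula-fd.conf; a minimal sketch (names and paths are generic examples):

FileDaemon {
  Name = client1-fd
  FDport = 9102
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 1   # with 1, administrative connections are refused while a backup runs
}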

=> But with your proposal, how restore should be counted ?

My proposal doesn't change the way restores are handled at all.

It might be worth submitting another feature request, such as allowing
"N" concurrent backup jobs plus "R" concurrent restore jobs (and if
"R = 0", then the behavior is unchanged from the current version of
bacula, giving backward compatibility).

> => Or will you tell to the customers that restore will only happen in several hours after the backup?

Actually, that's what I often tell users. We've got 2 physical tape
drives. If they are both in use, then there is no way for a restore job
to pre-empt the running backups.

Unless this has changed in recent versions, Bacula concurrently runs
jobs of only one priority level at a time. This means that a critical
'restore' job (with priority '1') will wait until the unimportant backup
(priority '10') that is currently running finishes, even if there are many
unused tape drives.  If you set the restore job to the same priority as
the running backup, it could run simultaneously...but any other backups
could also run, possibly filling all tape drives.
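To illustrate, a sketch of a restore job pinned to the same priority as the
backups (modeled on the stock RestoreFiles job; names and paths are examples):

Job {
  Name = "RestoreFiles"
  Type = Restore
  Client = localhost
  FileSet = "Full Set"
  Storage = DefaultStorage
  Pool = DailyPool
  Messages = Standard
  Where = /tmp/bacula-restores
  Priority = 10                 # same priority as the running backups, so it is not queued behind them
}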


=> 
=> Sorry, I'm just asking questions

No problem...they're good questions.

Mark

=> 
=> -- 
=> 
=> Bruno Friedmann
=> Ioda-Net Sàrl www.ioda-net.ch
=> 
=> openSUSE Member & Ambassador
=> GPG KEY : D5C9B751C4653227
=> irc: tigerfoot
=> 



[Bacula-users] New version of Bacula-Web coming soon ...

2011-12-01 Thread bacula-dev
Dear all,

I'm proud to announce that a new version of Bacula-Web is coming in the next
few days.

This new version will include:

 - Improvements to configuration, database connectivity and application 
exception handling
 - Translation updates and improvements
 - A cleaned-up dashboard
 - New custom filters in job reports
 - etc.

All the details will be included in the release notes.

Best regards

Davide

Bacula-Web project site: http://bacula-web.dflc.ch


Re: [Bacula-users] tuning lto-4

2011-12-01 Thread Alan Brown
gary artim wrote:
> You guys/gals are great, very responsive! I did try
> spooling/despooling and my run times shot up.

They will - you're copying everything twice (disk to disk to tape), but 
this is the only way to achieve fast despooling speeds - if you don't do 
this then your LTO drive will start to "shoe shine" and speeds drop off 
rapidly when it happens.

The trick is to run multiple jobs at once - you have to spool to achieve 
this anyway or extracting will be a nightmare.

Spooling is a net gain when you're running incrementals.

Spooling MUST happen on a fast dedicated drive. You're best off dropping 
in a fast SSD such as a 64/128 GB OCZ Vertex 3 or similar to handle it.

> I was using a simple
> 7200 drive though, no ssd or raid...I assume the performance gain
> happens when your networks multi machines...wearing multiple hats so
> will report back on btape next week, unless I get some time. gary

Even on a single host, if the heads are thrashing then spooling will 
save time overall. The big advantage is being able to run multiple jobs 
so that several are spooling data at the same time one is despooling.
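To allow several jobs to spool while one despools, concurrency also has to be
permitted at each level; a rough sketch (values are examples only):

# bacula-dir.conf: in the Director resource and in the tape Storage resource
Maximum Concurrent Jobs = 10
# bacula-dir.conf: in each Job or JobDefs
Spool Data = yes

# bacula-sd.conf: in the Storage resource
Maximum Concurrent Jobs = 10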

> On Thu, Dec 1, 2011 at 8:00 AM, Alan Brown  wrote:
>> gary artim wrote:
>>> thank much! will try testing with btape.
>> Please let us know the results
>>
>>> btw, I ran with 20GB maximum
>>> file size/2MB max block (see bacula-sd.conf below) and got these
>>> results, 20MB/s increase, ran 20 minutes faster, got 50MBs --
>> You should be seeing 120Mb/s or thereabouts.
>>
>> If you're spooling/despooling then you'll see lower overall speeds of
>> course. What counts is the despooling speed.
>>
>> How much ram have you got and what are you using to connect the LTO4 drives
>> up?
>>
>>
>>
>>
>>
> 






Re: [Bacula-users] tuning lto-4

2011-12-01 Thread Brian Debelius
I believe (it's been a while since I have needed to change my 
configuration) that my LTO-3 drive does not do hardware compression on 
blocks over 512K.  I am using 256K blocks right now, and I did not see 
any improvement above that.  I am using spooling on a pair of striped 
hard disks, and despooling happens at 65-80 MB/s.
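Roughly what that looks like in my Device resource (a sketch from memory; the
spool path is just a placeholder):

Maximum Block Size = 262144          # 256K blocks
Spool Directory = /spool/bacula      # lives on the striped pair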


On 12/1/2011 10:50 AM, gary artim wrote:
> thank much! will try testing with btape. btw, I ran with 20GB maximum
> file size/2MB max block (see bacula-sd.conf below) and got these
> results, 20MB/s increase, ran 20 minutes faster, got 50MBs -- now if I
> can just double the speed I could backup 15TB in about 45/hrs. I don't
> have that much data yet, but I'm hovering at 2TB and looking to expand
> sharply over time. I'm not doing any networking, it just straight from
> a raid 5 to a autochanger/lto-4. gary
>
>Build OS:   x86_64-redhat-linux-gnu redhat
>JobId:  6
>Job:Prodbackup.2011-11-30_18.49.24_06
>Backup Level:   Full
>Client: "bacula-fd" 5.0.3 (04Aug10)
> x86_64-redhat-linux-gnu,redhat,
>FileSet:"FileSetProd" 2011-11-30 15:23:58
>Pool:   "FullProd" (From Job FullPool override)
>Catalog:"MyCatalog" (From Client resource)
>Storage:"LTO-4" (From Job resource)
>Scheduled time: 30-Nov-2011 18:49:15
>Start time: 30-Nov-2011 18:49:26
>End time:   30-Nov-2011 20:14:56
>Elapsed time:   1 hour 25 mins 30 secs
>Priority:   10
>FD Files Written:35,588
>SD Files Written:35,588
>FD Bytes Written:257,543,092,723 (257.5 GB)
>SD Bytes Written:257,548,504,514 (257.5 GB)
>Rate:   50203.3 KB/s
>Software Compression:   None
>VSS:no
>Encryption: no
>Accurate:   no
>Volume name(s): f2
>Volume Session Id:   2
>Volume Session Time:1322707293
>Last Volume Bytes:   257,600,822,272 (257.6 GB)
>Non-fatal FD errors:0
>SD Errors:  0
>FD termination status:  OK
>SD termination status:  OK
>Termination:Backup OK
>
> bacula-sd.conf:
> Device {
>Name = LTO-4
>Media Type = LTO-4
>Archive Device = /dev/nst0
>AutomaticMount = yes;   # when device opened, read it
>AlwaysOpen = yes;
>RemovableMedia = yes;
>RandomAccess = no;
>#Maximum File Size = 12GB
>Maximum File Size = 20GB
>#Maximum Network Buffer Size = 65536
>Maximum block size = 2M
>#Spool Directory = /db/bacula/spool/LTO4
>#Maximum Spool Size = 200G
>#Maximum Job Spool Size = 150G
>Autochanger = yes
>Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
>Alert Command = "sh -c 'smartctl -H -l error %c'"
> }
>
>
>
> On Wed, Nov 30, 2011 at 11:48 PM, Andrea Conti  wrote:
>> On 30/11/11 19.43, gary artim wrote:
>>> Thanks much, I'll try today the block size change first. Then try the
>>> spooling. Dont have any unused disk, but may have to try on a shared
>>> drive.
>>> The "maximum file size" should be okay? g.
>> Choosing a max file size is mainly a tradeoff between write performance
>> (as the drive will stop and restart at the end of each file to write an
>> EOF mark) and restore performance (as the drive can only seek to a file
>> mark and then sequentially read through the file until the relevant data
>> blocks are found).
>>
>> I usually set maximum file size so that there are 2-3 filemarks per tape
>> wrap (3GB for LTO3, 5GB for LTO4), but if you don't plan to do regular
>> restores, or if you always restore the whole contents of a volume, 12GB
>> is fine.
>>
>> Anyway, with the figures you're citing your problem is *not* maximum
>> file size.
>>
>> Try to assess tape performance alone with btape test (which has a
>> "speed" command); you can try different block sizes and configuration
>> and see which one gives the best results.
>>
>> Doing so will give you a clear indication on whether your bottleneck is
>> in tape or disk throughput.
>>
>> andrea
>>

Re: [Bacula-users] tuning lto-4

2011-12-01 Thread gary artim
You guys/gals are great, very responsive! I did try
spooling/despooling and my run times shot up. I was using a simple
7200 RPM drive though, no SSD or RAID... I assume the performance gain
comes when you're networking multiple machines... I'm wearing multiple hats, so
I will report back on btape next week, unless I get some time sooner. gary

On Thu, Dec 1, 2011 at 8:00 AM, Alan Brown  wrote:
> gary artim wrote:
>>
>> thank much! will try testing with btape.
>
> Please let us know the results
>
>> btw, I ran with 20GB maximum
>> file size/2MB max block (see bacula-sd.conf below) and got these
>> results, 20MB/s increase, ran 20 minutes faster, got 50MBs --
>
> You should be seeing 120Mb/s or thereabouts.
>
> If you're spooling/despooling then you'll see lower overall speeds of
> course. What counts is the despooling speed.
>
> How much ram have you got and what are you using to connect the LTO4 drives
> up?
>
>
>
>
>



Re: [Bacula-users] tuning lto-4

2011-12-01 Thread Alan Brown
gary artim wrote:
> thank much! will try testing with btape. 

Please let us know the results

> btw, I ran with 20GB maximum
> file size/2MB max block (see bacula-sd.conf below) and got these
> results, 20MB/s increase, ran 20 minutes faster, got 50MBs -- 

You should be seeing 120 MB/s or thereabouts.

If you're spooling/despooling then you'll see lower overall speeds of 
course. What counts is the despooling speed.

How much RAM have you got, and what are you using to connect the LTO-4 
drives?







Re: [Bacula-users] tuning lto-4

2011-12-01 Thread gary artim
Thanks much! Will try testing with btape. BTW, I ran with a 20 GB maximum
file size / 2 MB max block size (see bacula-sd.conf below) and got these
results: a 20 MB/s increase, ran 20 minutes faster, got 50 MB/s -- now if I
can just double the speed I could back up 15 TB in about 45 hours. I don't
have that much data yet, but I'm hovering at 2 TB and looking to expand
sharply over time. I'm not doing any networking; it goes straight from
a RAID 5 to an autochanger/LTO-4. gary

  Build OS:   x86_64-redhat-linux-gnu redhat
  JobId:  6
  Job:Prodbackup.2011-11-30_18.49.24_06
  Backup Level:   Full
  Client: "bacula-fd" 5.0.3 (04Aug10)
x86_64-redhat-linux-gnu,redhat,
  FileSet:"FileSetProd" 2011-11-30 15:23:58
  Pool:   "FullProd" (From Job FullPool override)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"LTO-4" (From Job resource)
  Scheduled time: 30-Nov-2011 18:49:15
  Start time: 30-Nov-2011 18:49:26
  End time:   30-Nov-2011 20:14:56
  Elapsed time:   1 hour 25 mins 30 secs
  Priority:   10
  FD Files Written:   35,588
  SD Files Written:   35,588
  FD Bytes Written:   257,543,092,723 (257.5 GB)
  SD Bytes Written:   257,548,504,514 (257.5 GB)
  Rate:   50203.3 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): f2
  Volume Session Id:  2
  Volume Session Time:1322707293
  Last Volume Bytes:  257,600,822,272 (257.6 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

bacula-sd.conf:
Device {
  Name = LTO-4
  Media Type = LTO-4
  Archive Device = /dev/nst0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  #Maximum File Size = 12GB
  Maximum File Size = 20GB
  #Maximum Network Buffer Size = 65536
  Maximum block size = 2M
  #Spool Directory = /db/bacula/spool/LTO4
  #Maximum Spool Size = 200G
  #Maximum Job Spool Size = 150G
  Autochanger = yes
  Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}



On Wed, Nov 30, 2011 at 11:48 PM, Andrea Conti  wrote:
> On 30/11/11 19.43, gary artim wrote:
>> Thanks much, I'll try today the block size change first. Then try the
>> spooling. Dont have any unused disk, but may have to try on a shared
>> drive.
>> The "maximum file size" should be okay? g.
>
> Choosing a max file size is mainly a tradeoff between write performance
> (as the drive will stop and restart at the end of each file to write an
> EOF mark) and restore performance (as the drive can only seek to a file
> mark and then sequentially read through the file until the relevant data
> blocks are found).
>
> I usually set maximum file size so that there are 2-3 filemarks per tape
> wrap (3GB for LTO3, 5GB for LTO4), but if you don't plan to do regular
> restores, or if you always restore the whole contents of a volume, 12GB
> is fine.
>
> Anyway, with the figures you're citing your problem is *not* maximum
> file size.
>
> Try to assess tape performance alone with btape test (which has a
> "speed" command); you can try different block sizes and configuration
> and see which one gives the best results.
>
> Doing so will give you a clear indication on whether your bottleneck is
> in tape or disk throughput.
>
> andrea
>



Re: [Bacula-users] [Bacula-devel] possible 5.2.2 bug (incrementals being promoted to fulls)

2011-12-01 Thread Stephen Thompson


I agree, it's unlikely a 'new' bug, but rather the restarting of my 
director during the upgrade that caused the problem to exhibit itself.

Here is what happened in more detail.

A week before the upgrade/director restart, the conf files for a 
significant number of jobs (~100) were changed and a "reload" of the 
director was issued, and on the day they were changed we manually ran Fulls for 
each modified job, which completed successfully as Fulls.  Then on 
subsequent evenings (every day for a week) scheduled Incrementals ran 
successfully, and as Incrementals.

When we upgraded to 5.2.2, we of course stopped our old director and 
started up the new one.  That evening, 12 out of the ~100 jobs 
mentioned above had their scheduled Incrementals promoted to Fulls, and 
yes, the message in the log says:

> No prior or suitable Full backup found in catalog. Doing FULL backup.

However, this is not actually the case.  There is a successful FULL a 
week old for each of the 12 jobs that were promoted, and the other jobs 
in the ~100 that were changed were not promoted to a FULL.

The dates on the conf files show that they have not changed since the 
Full backups were made a week ago.

Again, we've been using bacula for years now, have some degree of 
expertise with it, and we've never seen this before.

Very strange...
Stephen







On 11/30/11 12:48 PM, Kern Sibbald wrote:
> Hello,
>
> Most likely you edited the .conf file and modified the
> FileSet. If that is the case, listing all the FileSets recorded
> in the database will show multiple copies of the FileSet
> record with different hashes.
>
> In most cases, other than changing the FileSet, Bacula
> clearly indicates why it is upgrading a level. In the case
> of a FileSet change, it prints a notice saying something
> like a valid Full could not be found.
>
> The probability that there is a new bug introduced between
> 5.2.1 and 5.2.2 is probably about 0.0001% since there were very
> few coding changes except for bug fixes.
>
> Regards,
> Kern
>
> On 11/30/2011 07:55 PM, Stephen Thompson wrote:
>>
>> FYI
>>
>> Not sure if anyone's seen or reported this, but I upgraded from 5.2.1 to
>> 5.2.2 yesterday and during my backups last night, several jobs were
>> promoted from Incremental to Full, even though their job configurations
>> had not changed and they did have a valid Full backup from last week.
>>
>> I have never seen this happen before with bacula in general or my
>> configuration in particular, so I thought it might be possible that a
>> bug was introduced into 5.2.2.
>>
>> thanks,
>> Stephen

-- 
Stephen Thompson   Berkeley Seismological Laboratory
step...@seismo.berkeley.edu    215 McCone Hall # 4760
404.538.7077 (phone)   University of California, Berkeley
510.643.5811 (fax) Berkeley, CA 94720-4760



Re: [Bacula-users] Virtualfull

2011-12-01 Thread Thomas Mueller
On 01.12.2011 13:42, Miikael Havelock Nilson wrote:
> Hello,
>
>
> I have a small question. When a VirtualFull is made, will data pass through the 
> director? The question is: in the case the storage is on low bandwidth, will data 
> move from storage to director and back to storage, or storage to storage?
>

The data is copied only by the storage daemon; it does not pass through the director.

- Thomas




Re: [Bacula-users] VolStatus changes from APPEND to USED after a restore

2011-12-01 Thread Dietz Pröpper
Carsten Pache:
> >> A few days ago I had to restore some files. After the files were
> >> restored successfully, the VolStatus (shown by "list volumes") of the
> >> two tapes that were needed during restore changed from "Append" to
> >> "Used". Is this an expected behaviour?
> > 
> > Do you have a maximum use time configured? AFAIK, in that case, the
> > volume status gets updated the next time the respective pool gets
> > touched, in your case by running a restore.
> 
> There is a "Volume Use Duration = 1 d" statement in my bacula-dir.conf -
> can that be the reason?

I think so.

> If yes: Why does VolStatus not change from "Append" to "Used"
> automatically after one day (1 d)? Or rather how do I force Bacula
> (5.0.3 in my case) to change the VolStatus?

Because that is only triggered if the pool gets touched somehow, i.e. by 
means of a new backup or (in your case) by that restore.

But there is a workaround: run "status dir days=nnn" in bconsole; this 
should trigger a recalculation. I'm not completely sure about the semantics of
days=nnn, but I think it works in such a way that bacula tries to evaluate 
tape usage for that number of days, and if it goes past the next regular use 
date for the respective pool, the state gets recalculated.
I.e., if you do a backup to the pool in question every seven days, then 
running the above command once the use duration has expired, with days=7 (or 
bigger), should do the trick.
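In bconsole that would look something like this (the pool name is a
placeholder; pick a days value at least as large as your backup interval):

*status dir days=7
*list volumes pool=YourPool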

regards,
Dietz




[Bacula-users] Virtualfull

2011-12-01 Thread Miikael Havelock Nilson
Hello, 


I have a small question. When a VirtualFull is made, will data pass through the 
director? The question is: in the case the storage is on low bandwidth, will data move 
from storage to director and back to storage, or storage to storage?

Miikael




[Bacula-users] Manually recycling tape without errors?

2011-12-01 Thread Adrian Bridgett
Hi, I'm seeing what I think is suboptimal behaviour during some 
testing; maybe one of the Bacula gurus can help/explain what's going 
on here.  I think it might be because the behaviour I want only happens 
when a job first requests a tape (and what I'm seeking is a way to get 
it to request _again_).

I believe that I've set up recycling and pruning correctly; however, I'm 
deliberately setting volumes to "full" to force errors, and it's during 
the subsequent manual recycling that I'm seeing problems.

If I try to manually recycle a tape, it never seems to actually _use_ it 
without failing a job.  Full logs are below, but in summary, I'm 
entering "purge volume=06L3" (not my labelling scheme) and bacula 
will be in this state:

Device status:
Autochanger "Autochanger" with devices:
"LTO-3" (/dev/nst0)
Device "LTO-3" (/dev/nst0) is mounted with:
 Volume:  06L3
 Pool:DailyPool
 Media type:  LTO-3
 Device is BLOCKED waiting to create a volume for:
Pool:DailyPool
Media type:  LTO-3
 Slot 1 is loaded in drive 0.
 Total Bytes Read=0 Blocks Read=0 Bytes/block=0
 Positioned at File=1 Block=0

No matter what I do (I've spent quite some time reading, googling, and testing) I 
can't seem to work around this.  The manual seems to say that I'll have 
to relabel the tape; however, I don't want to relabel the tape to a 
different name (as that then won't tie in with the barcode) and I can't 
relabel it to the same label.  I even tried the older "delete volume, 
label" approach - both putting the tape in the Scratch pool and also the 
DailyPool.
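For completeness, the sequence I've been trying in bconsole looks roughly like 
this (06L3 is the volume from the status output above):

*purge volume=06L3
*delete volume=06L3
*label volume=06L3 pool=Scratch slot=1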

It's as if I need to "kick" Bacula so that it says "aha, a tape I can 
recycle"; however, the only thing I've found is to bounce the storage 
daemon (which then causes the jobs to fail).

I'm almost certainly missing something but I'd appreciate your help.  
Director and storage daemon configs below. (NB: timescales are _very_ 
short for testing)

Many thanks,

Adrian

bacula-dir.conf
Director {                            # define myself
   Name = bacula-dir
   DIRport = 9101                     # where we listen for UA connections
   QueryFile = "/etc/bacula/scripts/query.sql"
   WorkingDirectory = "/var/lib/bacula"
   PidDirectory = "/var/run/bacula"
   Maximum Concurrent Jobs = 1
   Password = ""
   Messages = Daemon
   #DirAddress = 127.0.0.1
   DirAddress = 0.0.0.0
}

JobDefs {
   Name = "DefaultJob"
   Type = Backup
   Level = Incremental
   FileSet = "Full Set"
   Schedule = "DefaultSchedule"
   Storage = DefaultStorage
   Messages = Standard
   Pool = DailyPool
   Priority = 10
   Write Bootstrap = "/var/lib/bacula/bootstraps/%c.bsr"
   SpoolData = yes
   Spool Size = 1g
   RunScript {
 RunsWhen = Before
 FailJobOnError = Yes
 Command = "/usr/local/sbin/backup_box"
   }
}


# Backup the catalog database (after the nightly save)
Job {
   Name = "BackupCatalog"
   JobDefs = "DefaultJob"
   Level = Full
   FileSet="Catalog"
   Client = localhost
   Schedule = "CatalogSchedule"
   # This creates an ASCII copy of the catalog
   # Arguments to make_catalog_backup.pl are:
   #  make_catalog_backup.pl 
   RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl DefaultCatalog"
   # This deletes the copy of the catalog
   RunAfterJob  = "/etc/bacula/scripts/delete_catalog_backup"
   Write Bootstrap = "/var/lib/bacula/bootstraps/%n.bsr"
   Priority = 11   # run after main backup
   RunScript {
   }
}

#
# Standard Restore template, to be changed by Console program
#  Only one such job is needed for all Jobs/Clients/Storage ...
#
Job {
   Name = "RestoreFiles"
   Type = Restore
   Client= localhost
   FileSet="Full Set"
   Storage = DefaultStorage
   Pool = DailyPool
   Messages = Standard
   Where = /nonexistant/path/to/file/archive/dir/bacula-restores
}


# List of files to be backed up
FileSet {
   Name = "Full Set"
   Include {
 Options {
   signature = MD5
   aclsupport = yes
   xattrsupport = yes
   noatime = yes
   onefs = no
 }
 File = /
   }

   Exclude {
 File = /dev
 File = /lib/init/rw
 File = /media
 File = /mnt
 File = /proc
 File = /sys
 File = /tmp
 File = /var/backups/backuppc
 File = /var/cache
 File = /var/db
 File = /var/lib/bacula
 File = /var/lib/backuppc
 File = /var/lib/defoma
 File = /var/log/nagios2/retention.dat
 File = /var/log/ntpstats
 File = /var/log/sysstat
 File = /var/spool/tethereal
 File = /var/run
 File = /var/tmp
   }
}

#
# When to do the backups
Schedule {
   Name = "DefaultSchedule"
   #Run = Level=Full Pool=MonthlyPool 1st sat at 03:05
   #Run = Level=Full Pool=WeeklyPool 2nd-5th sat at 03:05
   #Run = Level=Incremental Pool=DailyPool sun-fri at 03:05
   #Run = Level=Full Pool=MonthlyPool 1st sat at 03:05
   Run = Level=Full Pool=WeeklyPool hourly
   Run = Level=Incremental Pool=DailyPool hourly at 0:05
   Run = Level=Incremental Pool=DailyPool 

Re: [Bacula-users] VolStatus changes from APPEND to USED after a restore

2011-12-01 Thread Carsten Pache
>> A few days ago I had to restore some files. After the files were
>> restored successfully, the VolStatus (shown by "list volumes") of the
>> two tapes that were needed during restore changed from "Append" to
>> "Used". Is this an expected behaviour?

> Do you have a maximum use time configured? AFAIK, in that case, the 
> volume status gets updated the next time the respective pool gets touched, 
> in your case by running a restore.

There is a "Volume Use Duration = 1 d" statement in my bacula-dir.conf - can 
that be the reason?

If yes: Why does VolStatus not change from "Append" to "Used" automatically 
after one day (1 d)? Or rather how do I force Bacula (5.0.3 in my case) to 
change the VolStatus?

Regards
Carsten Pache





[Bacula-users] Restore client

2011-12-01 Thread Viacheslav Biriukov
Hello.

I always want to restore files to one particular Client by default. How can I do this?

Thanks.

-- 
Viacheslav Biriukov
BR
http://biriukov.com


Re: [Bacula-users] VolStatus changes from APPEND to USED after a restore - is this expected behaviour?

2011-12-01 Thread Dietz Pröpper
Carsten Pache:
> A few days ago I had to restore some files. After the files were
> restored successfully, the VolStatus (shown by "list volumes") of the
> two tapes that were needed during restore changed from "Append" to
> "Used". Is this an expected behaviour?

Do you have a maximum use time configured? AFAIK, in that case, the volume 
status gets updated the next time the respective pool gets touched, in your 
case by running a restore.

regards,
Dietz




[Bacula-users] VolStatus changes from APPEND to USED after a restore - is this expected behaviour?

2011-12-01 Thread Carsten Pache
A few days ago I had to restore some files. After the files were restored 
successfully, the VolStatus (shown by "list volumes") of the two tapes that 
were needed during restore changed from "Append" to "Used". Is this an expected 
behaviour?

Regards
Carsten Pache





Re: [Bacula-users] feature request: exempt administrative connections from concurrency limits

2011-12-01 Thread Bruno Friedmann
On 12/01/2011 12:20 AM, mark.berg...@uphs.upenn.edu wrote:
> Item 1:   Administrative connections to the file daemon should not count in 
> the concurrency limit
>   Origin: Mark Bergman 
>   Date:   Wed Nov 30 18:03:20 EST 2011
>   Status:
> 
>   What:   
>Administrative connections to the file daemon should not count
>in the concurrency limit.
> 
>These connections to the file daemon (ie., "stat dir" or "cancel
>dir") are treated as if they were backup connections. This means
>that these commands will be refused if the maximum number of
>concurrent jobs are running.
> 
>   Why:
>The file daemon may be configured with a low concurrency
>value to deliberately prevent multiple backups from running
>simultaneously. Use of a low value is especially important
>in an enterprise setting, where a bacula client may be a file
>server with large disk volumes that should not be backed up
>concurrently to avoid I/O contention on the file server.
> 
>If "Maximum Concurrent Jobs" is set to "1", then all other
>connections to the file daemon will be refused, including
>administrative commands.
> 
>Setting the concurrency value to another low value (ie., 2,
>3, etc.) both defeats the purpose of limiting I/O contention
>and actually makes the problem worse. As soon as the maximum
>number of backups (the concurrency limit) are running, then
>it becomes impossible to cancel any of the jobs--exactly at a
>time when I/O or network contention become a problem and some
>of the jobs should be canceled.
> 
> 
>   Notes: A similar issue may exist for the other concurrency
>limits applied to the director and storage daemon.
> 
> 
> Mark Bergman  voice: 215-662-7310
> mark.berg...@uphs.upenn.edu fax: 215-614-0266
> System Administrator Section of Biomedical Image Analysis
> Department of RadiologyUniversity of Pennsylvania
>   PGP Key: https://www.rad.upenn.edu/sbia/bergman 
> 

Mark, I see a limit to your request:
what about restores? How would you manage/count them?

Example: you have a very long-running backup.
Then a client very quickly needs a restore (from another job).
If you have limited connections, you will not be able to execute the restore.

But with your proposal, how should restores be counted?
Or will you tell the customers that the restore will only happen several 
hours after the backup?

Sorry, I'm just asking questions.

-- 

Bruno Friedmann
Ioda-Net Sàrl www.ioda-net.ch

openSUSE Member & Ambassador
GPG KEY : D5C9B751C4653227
irc: tigerfoot
