Re: [Bacula-users] Dell PowerVault 124T tape drive

2011-04-13 Thread Peter Zenge
> -Original Message-
> From: Steve Ellis [mailto:el...@brouhaha.com]
> Sent: Wednesday, April 13, 2011 9:13 AM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Dell PowerVault 124T tape drive
> 
> On 4/13/2011 4:48 AM, Steffen Fritz wrote:
> > Hey folks,
> >
> >
> > something strange is happening within my bacula configuration. Every
> help is much appreciated!
> >
> > 1. This is what bconsole --> status tells me about my tape drive. No pool?
> >
> > Device "Drive-1" (/dev/nst0) is mounted with:
> >  Volume:  sonntag
> >  Pool:*unknown*
> >  Media type:  LTO-4
> >  Drive 0 status unknown.
> >  Total Bytes Read=0 Blocks Read=0 Bytes/block=0
> >  Positioned at File=0 Block=0
> > 
> >
> > Used Volume status:
> > sonntag on device "Drive-1" (/dev/nst0)
> >  Reader=0 writers=0 devres=0 volinuse=0
> If you are using a Bacula release earlier than 5.0.3 (I believe), this can
> happen when there is a tape change during a backup.  When I've seen this
> issue, the tape was actually in a pool and it was mostly a display problem;
> however, it meant that subsequent jobs couldn't start until the running job
> (the one that triggered the tape change) finished.  I've heard some
> confirmation that this issue has been fixed in 5.0.3--certainly I haven't
> seen it myself since switching to 5.0.3, but I never saw it very often
> before, so my data points are incomplete.
> 
> -se

I saw it consistently in 5.0.2, never again once I moved to 5.0.3.  I consider 
it fixed.




Re: [Bacula-users] SD Losing Track of Pool

2011-01-24 Thread Peter Zenge
> -Original Message-
> From: Peter Zenge [mailto:pze...@ilinc.com]
> Sent: Thursday, January 20, 2011 10:59 AM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] SD Losing Track of Pool
> 
> > -Original Message-
> > From: Steve Ellis [mailto:el...@brouhaha.com]
> > Sent: Thursday, January 20, 2011 10:39 AM
> > To: bacula-users@lists.sourceforge.net
> > Subject: Re: [Bacula-users] SD Losing Track of Pool
> >
> > On 1/20/2011 7:18 AM, Peter Zenge wrote:
> > >>
> > >>> Second, in the Device Status section at the bottom, the pool of
> > >>> LF-F-0239 is listed as "*unknown*"; similarly, under "Jobs waiting
> > >>> to reserve a drive", each job wants the correct pool, but the
> > >>> current pool is listed as "".
> > >>
> > > Admittedly I confused the issue by posting an example with two Pools
> > > involved.  Even in that example though, there were jobs using the same
> > > pool as the mounted volume, and they wouldn't run until the 2 current
> > > jobs were done (which presumably allowed the SD to re-mount the same
> > > volume, set the current mounted pool correctly, and then 4 jobs were
> > > able to write to that volume concurrently, as designed).
> > >
> > > I saw this issue two other times that day; each time the SD changed
> > > the mounted pool from "LF-Inc" to "*unknown*" and that brought
> > > concurrency to a screeching halt.
> > >
> > > Certainly I could bypass this issue by having a dedicated volume and
> > > device for each backup client, but I have over 50 clients right now
> > > and it seems like that should be unnecessary.  Is that what other
> > > people who write to disk volumes do?
> > I've been seeing this issue myself--it only seems to show up for me if
> > a volume change happens during a running backup.  Once that happens,
> > parallelism using that device is lost.  For me this doesn't happen too
> > often, as I don't have that many parallel jobs, and most of my backups
> > are to LTO3, so volume changes don't happen all that often either.
> > However, it is annoying.
> >
> > I thought I had seen something suggesting that this issue might be
> > fixed in 5.0.3.  I've recently switched to 5.0.3, but haven't seen any
> > results either way yet.
> >
> > On a somewhat related note, it seems to me that during despooling, all
> > other spooling jobs stop spooling--this might be intentional, I suppose,
> > but I think my disk subsystem would be fast enough to keep up one
> > despool to LTO3 while other jobs continue to spool.  I could certainly
> > understand if no other job using the same device were allowed to start
> > despooling during a despool, but that isn't what I observe.
> >
> > If my observations are correct, it would be nice if this were a
> > configurable choice (with faster tape drives, few disk subsystems would
> > be able to handle a despool and spooling at the same time).  Some of my
> > jobs stall long enough when this happens that some of my desktop backup
> > clients go to standby, which means those jobs will fail (my backup
> > strategy uses Wake-on-LAN to wake them up in the first place).  I could
> > certainly spread my jobs out more in time, if necessary, to prevent
> > this, but I like the backups to happen at night when no one is likely
> > to be using the systems for anything else.  I guess another option
> > would be to launch a keepalive WoL script when a job starts and arrange
> > for the keepalive program to be killed when the job completes.
> >
> > -se
> >
> 
> 
> Agree about the volume change.  In fact I'm running a backup right now
> that should force a volume change in a couple of hours, and I'm
> watching the SD status to see if the mounted pool becomes unknown
> around that time.  I have certainly noticed that long-running jobs seem
> to cause this issue, and it occurred to me that long-running jobs also
> have a higher chance of spanning volumes.
> 
> If that's what I see, then I will upgrade to 5.0.3.  I can do that
> pretty quickly, and will report back...
> 
> 

Steve, I can confirm that it is the volume change that causes this issue.  
Luckily I can also confirm that it is fixed in 5.0.3.  Shaved 18 hours off my 
backup window this past weekend!

I should have upgraded to 5.0.3 before bothering the list.  Thanks to everyone 
who responded.




Re: [Bacula-users] SD Losing Track of Pool

2011-01-20 Thread Peter Zenge
> -Original Message-
> From: Martin Simmons [mailto:mar...@lispworks.com]
> Sent: Thursday, January 20, 2011 10:47 AM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] SD Losing Track of Pool
> 
> >>>>> On Thu, 20 Jan 2011 08:18:35 -0700, Peter Zenge said:
> >
> > Admittedly I confused the issue by posting an example with two Pools
> > involved.  Even in that example though, there were jobs using the same
> > pool as the mounted volume, and they wouldn't run until the 2 current
> > jobs were done (which presumably allowed the SD to re-mount the same
> > volume, set the current mounted pool correctly, and then 4 jobs were
> > able to write to that volume concurrently, as designed).
> >
> > I saw this issue two other times that day; each time the SD changed
> > the mounted pool from "LF-Inc" to "*unknown*" and that brought
> > concurrency to a screeching halt.
> 
> Sorry, I see what you mean now -- 18040 should be running.  Did it run
> eventually, without intervention?
> 
> I can't see why the pool name has been set to unknown.
> 
> __Martin
> 


It did run eventually and without intervention, and while it was running the SD 
did show the correct pool.  My problem is that without concurrency I don't get 
efficient use of my available bandwidth, and my backup window (already measured 
in days) is longer than it otherwise needs to be, even though the same amount 
of data is backed up.




Re: [Bacula-users] SD Losing Track of Pool

2011-01-20 Thread Peter Zenge
> -Original Message-
> From: Steve Ellis [mailto:el...@brouhaha.com]
> Sent: Thursday, January 20, 2011 10:39 AM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] SD Losing Track of Pool
> 
> On 1/20/2011 7:18 AM, Peter Zenge wrote:
> >>
> >>> Second, in the Device Status section at the bottom, the pool of
> >>> LF-F-0239 is listed as "*unknown*"; similarly, under "Jobs waiting
> >>> to reserve a drive", each job wants the correct pool, but the
> >>> current pool is listed as "".
> >>
> > Admittedly I confused the issue by posting an example with two Pools
> > involved.  Even in that example though, there were jobs using the same
> > pool as the mounted volume, and they wouldn't run until the 2 current
> > jobs were done (which presumably allowed the SD to re-mount the same
> > volume, set the current mounted pool correctly, and then 4 jobs were
> > able to write to that volume concurrently, as designed).
> >
> > I saw this issue two other times that day; each time the SD changed
> > the mounted pool from "LF-Inc" to "*unknown*" and that brought
> > concurrency to a screeching halt.
> >
> > Certainly I could bypass this issue by having a dedicated volume and
> > device for each backup client, but I have over 50 clients right now
> > and it seems like that should be unnecessary.  Is that what other
> > people who write to disk volumes do?
> I've been seeing this issue myself--it only seems to show up for me if a
> volume change happens during a running backup.  Once that happens,
> parallelism using that device is lost.  For me this doesn't happen too
> often, as I don't have that many parallel jobs, and most of my backups
> are to LTO3, so volume changes don't happen all that often either.
> However, it is annoying.
> 
> I thought I had seen something suggesting that this issue might be
> fixed in 5.0.3.  I've recently switched to 5.0.3, but haven't seen any
> results either way yet.
> 
> On a somewhat related note, it seems to me that during despooling, all
> other spooling jobs stop spooling--this might be intentional, I suppose,
> but I think my disk subsystem would be fast enough to keep up one despool
> to LTO3 while other jobs continue to spool.  I could certainly understand
> if no other job using the same device were allowed to start despooling
> during a despool, but that isn't what I observe.
> 
> If my observations are correct, it would be nice if this were a
> configurable choice (with faster tape drives, few disk subsystems would
> be able to handle a despool and spooling at the same time).  Some of my
> jobs stall long enough when this happens that some of my desktop backup
> clients go to standby, which means those jobs will fail (my backup
> strategy uses Wake-on-LAN to wake them up in the first place).  I could
> certainly spread my jobs out more in time, if necessary, to prevent this,
> but I like the backups to happen at night when no one is likely to be
> using the systems for anything else.  I guess another option would be to
> launch a keepalive WoL script when a job starts and arrange for the
> keepalive program to be killed when the job completes.
> 
> -se
> 


Agree about the volume change.  In fact I'm running a backup right now that 
should force a volume change in a couple of hours, and I'm watching the SD 
status to see if the mounted pool becomes unknown around that time.  I have 
certainly noticed that long-running jobs seem to cause this issue, and it 
occurred to me that long-running jobs also have a higher chance of spanning 
volumes.

If that's what I see, then I will upgrade to 5.0.3.  I can do that pretty 
quickly, and will report back...
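As an aside, Steve's keepalive idea in the quoted message could be sketched
roughly as below.  The helper script, MAC address, and interval are
hypothetical, and it assumes the common wakeonlan utility is installed; the
RunScript directives are standard Bacula Job-resource syntax.

   # bacula-dir.conf, Job resource for a desktop client (hypothetical names)
   Job {
     Name = "desktop1-backup"
     # ... Client, FileSet, Schedule, Storage, Pool as usual ...
     RunScript {
       RunsWhen = Before
       RunsOnClient = no
       Command = "/etc/bacula/scripts/wol-keepalive start 00:11:22:33:44:55"
     }
     RunScript {
       RunsWhen = After
       RunsOnFailure = yes
       RunsOnClient = no
       Command = "/etc/bacula/scripts/wol-keepalive stop 00:11:22:33:44:55"
     }
   }

   # /etc/bacula/scripts/wol-keepalive (hypothetical helper)
   #!/bin/sh
   MAC="$2"; PIDFILE="/var/run/wol-keepalive-$(echo "$MAC" | tr : -).pid"
   case "$1" in
     start)  # resend a magic packet every 5 minutes until told to stop
       ( while true; do wakeonlan "$MAC"; sleep 300; done ) &
       echo $! > "$PIDFILE" ;;
     stop)
       [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE" ;;
   esac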





Re: [Bacula-users] SD Losing Track of Pool

2011-01-20 Thread Peter Zenge
> From: Martin Simmons [mailto:mar...@lispworks.com]
> Sent: Thursday, January 20, 2011 4:28 AM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] SD Losing Track of Pool
> 
> >>>>> On Tue, 18 Jan 2011 08:48:56 -0700, Peter Zenge said:
> >
> > A couple days ago somebody made a comment that using pool overrides in
> > a schedule was deprecated.  I've been using them for years, but I've
> > been seeing a strange problem recently that I'm thinking might be
> > related.
> >
> > I'm running 5.0.2 on Debian, separate Dir/MySQL and SD systems, using
> > files on an array.  I'm backing up several TB a week, but over a slow
> > 25Mbps link, so some of my full jobs run for a very long time.
> > Concurrency is key.  I normally run 4 jobs at a time on my SD, and I
> > spool (yes, probably unnecessary, but because the data is coming in so
> > slowly, I feel better about writing it to volumes in big chunks).
> >
> > Right now I have one job actively running, with 4 more waiting on the
> > SD.  As I mentioned before, usually 4 are running concurrently, but I
> > frequently see less than 4 and have never really dug into it.  In the
> > output below, note that the SD is running 4 (actually 5!) jobs, but
> > only one is actually writing to the spool.  Two things jump out at me
> > here: First, of the 5 running jobs, two are correctly noted as being
> > for LF-Full, and 3 for LF-Inc (the pools for Full and Incremental
> > backups respectively).  However, all 5 show the same volume (LF-F-0239,
> > which is only in the LF-Full pool, and is currently being written to by
> > the correctly-running job).  Second, in the Device Status section at
> > the bottom, the pool of LF-F-0239 is listed as "*unknown*"; similarly,
> > under "Jobs waiting to reserve a drive", each job wants the correct
> > pool, but the current pool is listed as "".
> 
> The reporting of pools in the SD might be a little wrong, because it
> doesn't really have that information, but I think the fundamental
> problem is that you only have one SD device.  That is limiting
> concurrency, because an SD device can only mount one volume at a time
> (even for file devices).
> 
> __Martin
> 

Admittedly I confused the issue by posting an example with two Pools involved.  
Even in that example though, there were jobs using the same pool as the mounted 
volume, and they wouldn't run until the 2 current jobs were done (which 
presumably allowed the SD to re-mount the same volume, set the current mounted 
pool correctly, and then 4 jobs were able to write to that volume concurrently, 
as designed).

I saw this issue two other times that day; each time the SD changed the mounted 
pool from "LF-Inc" to "*unknown*" and that brought concurrency to a screeching 
halt.

Certainly I could bypass this issue by having a dedicated volume and device for 
each backup client, but I have over 50 clients right now and it seems like that 
should be unnecessary.  Is that what other people who write to disk volumes do?
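For what it's worth, one middle ground (short of a device per client) is to
give the SD more than one File device so that more than one volume can be
mounted at once; later 5.x setups often do this with a virtual disk
autochanger instead.  A minimal sketch of the plain multi-device approach,
with made-up resource names and a placeholder password rather than anything
from the configuration discussed above:

   # bacula-sd.conf: a second File device next to the existing LocalFiles one
   Device {
     Name = LocalFiles2
     Media Type = File
     Archive Device = /data/bacula
     LabelMedia = yes
     Random Access = yes
     AutomaticMount = yes
     RemovableMedia = no
     AlwaysOpen = no
   }

   # bacula-dir.conf: matching Storage resource so jobs can be sent to it
   Storage {
     Name = LocalFiles2
     Address = baculasd.hq.ilinc.com
     SDPort = 9103
     Password = "sd-password-placeholder"
     Device = LocalFiles2
     Media Type = File
     Maximum Concurrent Jobs = 4
   }

Each device mounts its own volume, so two pools (or two batches of jobs) no
longer have to queue behind a single mounted volume.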



[Bacula-users] SD Losing Track of Pool

2011-01-18 Thread Peter Zenge
A couple days ago somebody made a comment that using pool overrides in a 
schedule was deprecated.  I've been using them for years, but I've been seeing 
a strange problem recently that I'm thinking might be related.

I'm running 5.0.2 on Debian, separate Dir/MySQL and SD systems, using files on 
an array.  I'm backing up several TB a week, but over a slow 25Mbps link, so 
some of my full jobs run for a very long time.  Concurrency is key.  I normally 
run 4 jobs at a time on my SD, and I spool (yes, probably unnecessary, but 
because the data is coming in so slowly, I feel better about writing it to 
volumes in big chunks).
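A rough sketch of the directives behind that kind of setup (illustrative values 
and paths only, not the actual configuration):

   # bacula-dir.conf: let 4 jobs use this storage at once, and spool job data
   Storage {
     Name = LocalFiles
     Device = LocalFiles
     Media Type = File
     Maximum Concurrent Jobs = 4
     # Address, SDPort, Password omitted here
   }
   JobDefs {
     Name = "DefaultJob"
     Spool Data = yes
     # ...
   }

   # bacula-sd.conf: where the spooled data lands before being despooled
   Device {
     Name = LocalFiles
     Media Type = File
     Archive Device = /data/bacula
     Spool Directory = /data/spool        # illustrative path
     Maximum Spool Size = 100gb           # illustrative limit
     # ...
   }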

Right now I have one job actively running, with 4 more waiting on the SD.  As I 
mentioned before, usually 4 are running concurrently, but I frequently see less 
than 4 and have never really dug into it.  In the output below, note that the 
SD is running 4 (actually 5!) jobs, but only one is actually writing to the 
spool.  Two things jump out at me here: First, of the 5 running jobs, two are 
correctly noted as being for LF-Full, and 3 for LF-Inc (the pools for Full and 
Incremental backups respectively).  However, all 5 show the same volume 
(LF-F-0239, which is only in the LF-Full pool, and is currently being written 
to by the correctly-running job).  Second, in the Device Status section at the 
bottom, the pool of LF-F-0239 is listed as "*unknown*"; similarly, under "Jobs 
waiting to reserve a drive", each job wants the correct pool, but the current 
pool is listed as "".

Hopefully this is enough information to make sense of.  I tried to cut out 
everything I thought was unnecessary.  Thanks

Some console output follows:

*stat dir
bacula-dir Version: 5.0.2 (28 April 2010) i686-pc-linux-gnu debian 5.0.4
Daemon started 28-Dec-10 14:21, 444 Jobs run since started.
 Heap: heap=1,093,632 smbytes=688,548 max_bytes=1,225,799 bufs=3,052 
max_bufs=5,841

Scheduled Jobs:
Level  Type Pri  Scheduled  Name   Volume
===
IncrementalBackup10  18-Jan-11 20:15fs4-fd-fullLF-I-0237
IncrementalBackup10  18-Jan-11 20:15openfiler1-pvr-1   LF-I-0237
IncrementalBackup10  18-Jan-11 20:15file-server2-fd-full LF-I-0237
IncrementalBackup10  18-Jan-11 20:15phx-dc2-fd-fullLF-I-0237
--other jobs omitted-

JobId Level   Name   Status
==
 18038 Fulloraclerac1-fd-full.2011-01-17_08.25.16_44 is running
 18040 Fullmailserverx-fd-full.2011-01-17_08.25.46_46 is waiting on Storage 
LocalFiles
 18041 Increme  fs4-fd-full.2011-01-17_20.15.00_48 is waiting on Storage 
LocalFiles
 18042 Increme  cacti-fd-full.2011-01-17_20.15.00_49 is waiting on Storage 
LocalFiles
 18043 Increme  acu-leap-test-fd-full.2011-01-17_20.15.00_50 is waiting on 
Storage LocalFiles
 18044 Fulldns3-fd-full.2011-01-17_20.15.00_51 is waiting execution
 18045 Increme  dns4-fd-full.2011-01-17_20.15.00_52 is waiting on max Storage 
jobs
 18046 Increme  pcontroller1-fd-full.2011-01-17_20.15.00_53 is waiting on max 
Storage jobs
--other jobs omitted-


*stat storage=LocalFiles
Connecting to Storage daemon LocalFiles at baculasd.hq.ilinc.com:9103

baculasd-sd Version: 5.0.2 (28 April 2010) x86_64-unknown-linux-gnu debian 5.0.7
Daemon started 11-Jan-11 09:19, 125 Jobs run since started.
 Heap: heap=1,458,176 smbytes=907,450 max_bytes=1,295,252 bufs=236 max_bufs=303
Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8

Running Jobs:
Writing: Full Backup job oraclerac1-fd-full JobId=18038 Volume="LF-F-0239"
pool="LF-Full" device="LocalFiles" (/data/bacula)
spooling=1 despooling=0 despool_wait=0
Files=99,312 Bytes=36,245,783,238 Bytes/sec=533,984
FDReadSeqNo=2,236,739 in_msg=1764881 out_msg=5 fd=5
Writing: Full Backup job mailserverx-fd-full JobId=18040 Volume="LF-F-0239"
pool="LF-Full" device="LocalFiles" (/data/bacula)
spooling=0 despooling=0 despool_wait=0
Files=0 Bytes=0 Bytes/sec=0
FDSocket closed
Writing: Incremental Backup job fs4-fd-full JobId=18041 Volume="LF-F-0239"
pool="LF-Inc" device="LocalFiles" (/data/bacula)
spooling=0 despooling=0 despool_wait=0
Files=0 Bytes=0 Bytes/sec=0
FDSocket closed
Writing: Incremental Backup job cacti-fd-full JobId=18042 Volume="LF-F-0239"
pool="LF-Inc" device="LocalFiles" (/data/bacula)
spooling=0 despooling=0 despool_wait=0
Files=0 Bytes=0 Bytes/sec=0
FDSocket closed
Writing: Incremental Backup job acu-leap-test-fd-full JobId=18043 
Volume="LF-F-0239"
pool="LF-Inc" device="LocalFiles" (/data/bacula)
spooling=0 despooling=0 despool_wait=0
Files=0 Bytes=0 Bytes/sec=0
FDSocket closed


Jobs waiting to reserve a drive:
   3608 JobId=18040 wants Pool="LF-Full" but have Pool="" nreserve=0 on drive 
"LocalFiles" (/data/bacula).
   3608 JobId=18041 wants Pool="LF-In

Re: [Bacula-users] Bacula FD on openfiler dist

2010-11-09 Thread Peter Zenge

From: Eduardo Sieber [mailto:sie...@gmail.com]
Sent: Tuesday, November 09, 2010 2:07 AM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Bacula FD on openfiler dist

Hello people!

I need a little help.  I have an Openfiler NAS server 
(http://www.openfiler.com/) and I want to set up a Bacula FD on it.
Does anyone have this scenario?

I didn't find any documentation about it...


Thank you guys!





Openfiler 2.3 (the current one) actually includes a really old Bacula FD, 2.36 
or so if I remember correctly.  Too old for encryption, which was a requirement 
for us.  So this is what I did about a year ago to update it (not positive this 
is a complete list, but it should be enough to get you really close).  Note 
that you REALLY don't want to do this on the production machine, as OpenFiler 
is an appliance and meant to be run with exactly the software packages that it 
comes with, no more, no less.


1. Install a second OpenFiler with the same architecture (x86_64 in my case; 
   I used a VM to make it easy).

2. Run "conary updateall", assuming you've done that on your production box.

3. Install the necessary dev files:

   conary update gcc
   conary update gcc-c++
   conary update glibc:devel
   conary update zlib:devel
   conary update openssl:devel

4. Tell the build where the Kerberos headers are:

   export CPPFLAGS=" -I/usr/kerberos/include"

5. Build a static, client-only Bacula:

   ./configure --enable-static-client-only --enable-client-only \
       --with-openssl --disable-libtool
   make ; make install

6. Copy the resulting binary onto your production machine.  I created a new 
   folder for it to keep it separate from the old one.  You also need to 
   create /var/bacula and /var/bacula/working.

7. Start it in /etc/rc.local with something like the following (substitute 
   your install directory from step 6):

   /etc/bacula/bacula-fd -v -c /etc/bacula/bacula-fd.conf
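For reference, the bacula-fd.conf that step 7 points at is just the usual
minimal client config; a sketch along these lines (names, port, and password
are placeholders, not taken from an actual setup):

   Director {
     Name = bacula-dir
     Password = "fd-password-placeholder"
   }

   FileDaemon {
     Name = openfiler-fd
     FDport = 9102
     WorkingDirectory = /var/bacula/working
     Pid Directory = /var/bacula
     Maximum Concurrent Jobs = 10
   }

   Messages {
     Name = Standard
     director = bacula-dir = all, !skipped, !restored
   }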



Been running strong for a year with absolutely no issues.  Good luck.


Re: [Bacula-users] Backup fails when creating a large backup

2010-11-08 Thread Peter Zenge
> From: Primoz Kolaric [mailto:primoz.kola...@sinergise.com]
> Sent: Monday, November 08, 2010 4:39 PM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Backup fails when creating a large backup
> 
> 
> > > Nope, there's no NAT.  It's a Linux-based router/firewall.
> >
> > If the job is always failing at about the same elapsed time, it's
> > probably the connection being dropped by the firewall.
> >
> > Search the docs for heartbeat and consider adding one or both of those
> > settings.  There is one for the FD and one for the SD.
> >
> As I mentioned, I have added the heartbeat on the FD and SD.  It seemed
> to help on the previous backup (it's a monthly backup), but now it has
> failed again.
> 
> 

For what it's worth, I recently split my DIR and SD boxes with a firewall (and 
a very slow link, as it turns out...).  I was having all kinds of timeout 
issues, usually on the DIR->SD connection while data was being sent from 
Client->SD for many hours.  Heartbeats did not help, but changing the TCP 
keepalive timeouts (as mentioned in the FAQ: 
http://wiki.bacula.org/doku.php?id=faq ) worked great.  I only set it on the 
DIR and SD boxes; there is no need to set it on the clients.  Changing it on 
the SD keeps my Client->SD connections alive, the DIR keeps the DIR->Client 
connections alive, and both ends keep DIR->SD alive.  Some of my jobs trickle 
in for 48 hours, which is certainly not ideal, but we needed offsite 
disk-based backups of large files...

Servers are Debian 5 (Lenny) running 5.0.2, firewall was Juniper Netscreen.
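The settings in question are the standard Linux TCP keepalive sysctls;
something along these lines on the DIR and SD boxes (example values only, and
they need to come in under the firewall's idle-session timeout):

   # /etc/sysctl.conf
   net.ipv4.tcp_keepalive_time = 600     # start probing after 10 minutes idle
   net.ipv4.tcp_keepalive_intvl = 60     # then probe every 60 seconds
   net.ipv4.tcp_keepalive_probes = 5     # drop the connection after 5 misses

   # apply without a reboot
   sysctl -p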



Re: [Bacula-users] Dell PV-124T with Ultrium TD4, Hardware or Software compression?

2010-08-13 Thread Peter Zenge
> From: Phil Stracchino [mailto:ala...@metrocast.net]
> Sent: Friday, August 13, 2010 7:51 AM
> To: bacula-users@lists.sourceforge.net
> Subject: Re: [Bacula-users] Dell PV-124T with Ultrium TD4, Hardware or
> Software compression?
> 
> On 08/13/10 04:10, Dietz Pröpper wrote:
> > IMHO there are two problems with hardware compression:
> > 1. Data mix: The compression algorithms tend to work quite well on
> > compressible stuff, but can't cope very well with precompressed stuff,
> > e.g. encrypted data or media files.  On an old DLT drive (but modern
> > hardware should perform in a similar fashion), I get around 7MB/s with
> > "normal" data and around 3MB/s with precompressed stuff.  The raw tape
> > write rate is somewhere around 4MB/s.  And even worse: because the
> > compression blurs precompressed data, it also takes noticeably more
> > tape space.
> > 2. Vendors: I've seen it more than once that tape vendors managed to
> > break their own compression, which means that a replacement tape drive
> > two years younger than its predecessor can no longer read the
> > compressed tape.  Compatibility between vendors: the same.
> > So, if the compression algorithm is not defined in the tape drive's
> > standard, then it's not a good idea to even think about using the
> > tape's hardware compression.
> 
> Neither of these issues is applicable to LTO.  The compression algorithm
> (which is a pretty good one) is defined in the LTO specification, and the
> drive compresses data block-by-block, doing a trial compression of each
> data block and writing whichever is the smaller of the compressed and
> uncompressed versions of that block to tape, flagging individual blocks
> as compressed or uncompressed.
> 
> --
>   Phil Stracchino, CDK#2 DoD#299792458 ICBM: 43.5607, -71.355
>   ala...@caerllewys.net   ala...@metrocast.net   p...@co.ordinate.org
>  Renaissance Man, Unix ronin, Perl hacker, Free Stater
>  It's not the years, it's the mileage.
> 

Remember also that if you are trying to minimize tape/disk/other backup media 
space used, and using encryption, you will need to use software compression.  
The FD compresses before encrypting; once encrypted, as noted above, the data 
is no longer compressible...
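In configuration terms that means enabling software compression in the FileSet
Options on the Director and PKI encryption in the FD; a minimal sketch, with
placeholder names and key paths:

   # bacula-dir.conf: FileSet excerpt; the FD compresses, then encrypts
   FileSet {
     Name = "EncryptedSet"
     Include {
       Options {
         signature = SHA1
         compression = GZIP6
       }
       File = /data
     }
   }

   # bacula-fd.conf: client-side data encryption (placeholder key paths)
   FileDaemon {
     Name = client1-fd
     PKI Signatures = Yes
     PKI Encryption = Yes
     PKI Keypair = "/etc/bacula/client1-fd.pem"
     PKI Master Key = "/etc/bacula/master.cert"
   }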



Re: [Bacula-users] Possibility of parallelising encryption?

2010-03-28 Thread Peter Zenge
And of course it's even worse if you have compressible data.  Since the data is 
no longer compressible once encrypted, you can't let the tape drive handle the 
compression...  So in our case, on a quad-core server, we see a single core 
saturated, apparently doing both the compression and encryption routines, while 
3 cores sit idle.  That leads to much slower backups than the hardware is 
otherwise capable of.  Even having one core compress and another encrypt would 
be more efficient in our case.

From: Richard Scobie [rich...@sauce.co.nz]
Sent: Sunday, March 28, 2010 11:26 AM
To: bacula-users
Subject: [Bacula-users] Possibility of parallelising encryption?

I have a 2.8GHz Core i7 machine backing up incompressible data, spooled
onto an 8-drive RAID5, to LTO-4 tape.

Our requirements now dictate that data encryption must be used on the
tapes, and having configured this, it seems that one core is saturated
encrypting the data; the result is that tape write speed is now about
50% slower than when encryption is not used.

Would it be possible to optimise this task by perhaps reading data in
"chunks", which in turn could be encrypted by one core each, before being
recombined and written out to tape?

I'd use the hardware encryption (which presumably has no performance
impact) that is an option on this autochanger, except they want $2500
for it...

Regards,

Richard



Re: [Bacula-users] Bacula to the Cloud

2010-03-11 Thread Peter Zenge
Following up on my own post: I had a little free time the other day and decided 
to investigate whether this was feasible.  Setting up the necessary services on 
Amazon was trivial, including access control and block storage.  I tried s3fs 
first, and it worked, but it felt like there was way too much I/O going on for 
that kind of data (which is pretty much what I expected).  Then I tried putting 
my bacula-sd on an EC2 node, writing to files on EBS, and it worked great 
(spooling first to the "local" drive on EC2).  Throughput, however, was 
somewhat less than I was hoping for: approximately 25% of what I get locally to 
spool and then to tape.  However, I found that there was NO performance penalty 
for running two jobs concurrently.  I didn't try larger numbers, but my guess 
is you can run a large number of concurrent jobs to get a pretty good effective 
throughput, assuming you have lots of clients with similar data sizes.

Our problem is that 80% of our data is on one client, and it would take 130 
hours to do a full backup; our backup window simply isn't that long.  So I 
thought I could break the FileSet into smaller pieces and run multiple backup 
jobs in parallel (and I'm assuming that my client is not the bottleneck).  
However, it wouldn't run more than one job on that client concurrently.  Since 
I can run multiple clients concurrently, I'm pretty sure my bacula-dir.conf and 
bacula-sd.conf settings are correct, and my bacula-fd.conf specifies "Maximum 
Concurrent Jobs = 20"...  Any other reason why I couldn't run, say, 5 parallel 
jobs with different FileSets off the same client?
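For reference, running several jobs against one client is gated by "Maximum
Concurrent Jobs" in more places than the FD: the Director resource, the Client
resource in bacula-dir.conf (which, if memory serves, defaults to 1), and the
Storage resource all have their own limits.  A sketch of the Director-side
pieces, with hypothetical names rather than the actual configuration:

   # bacula-dir.conf (illustrative)
   Director {
     Name = bacula-dir
     Maximum Concurrent Jobs = 20
     # ...
   }

   Client {
     Name = bigclient-fd
     Address = bigclient.example.com
     Maximum Concurrent Jobs = 5    # allow 5 simultaneous jobs on this client
     # ...
   }

   # One Job per FileSet slice, so the pieces can run side by side
   Job {
     Name = "bigclient-part1"
     Client = bigclient-fd
     FileSet = "BigClient-Part1"
     # ...
   }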

From: Peter Zenge [mailto:pze...@ilinc.com]
Sent: Tuesday, March 02, 2010 2:57 PM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Bacula to the Cloud

Hello, 2 year Bacula user but first-time poster.  I'm currently dumping about 
1.6TB to LTO2 tapes every week and I'm looking to migrate to a new storage 
medium.

The obvious answer, I think, is a direct-attached disk array (which I would be 
able to put in a remote gigabit-attached datacenter before too long).  However, 
I'm wondering if anyone is currently doing large (or what seem to me to be 
large) backups to the cloud in some way?  Assuming I have a gigabit connection 
to the Internet from my datacenter, I'm wondering how feasible it would be to 
either use something like Amazon S3 with s3fs (I'm guessing way too much 
overhead to be efficient), or a bacula-SD implementation on an EC2 node, using 
Elastic Block Store (EBS) as "local" disk, and VPN (Amazon VPC) between my 
datacenter and the SD.

Substitute your favorite cloud provider for Amazon above; I don't use any right 
now so not tied to any particular provider.  It just seems like Amazon has all 
the necessary pieces today.

To do this, and keep customers comfortable with the idea of data in the cloud, 
we would need to encrypt, so I'm also wondering if it would be possible for the 
SD to encrypt the backup volume, rather than the FD encrypt the data before 
sending it to SD (which is what we do now)?  Easier to manage if we just 
handled encryption in one place for all clients.

I would love to hear what other people are either doing with Bacula and the 
cloud, or why you have decided not to.

Thanks

Peter Zenge
Pzenge .at. ilinc .dot. com




[Bacula-users] Bacula to the Cloud

2010-03-02 Thread Peter Zenge
Hello, 2 year Bacula user but first-time poster.  I'm currently dumping about 
1.6TB to LTO2 tapes every week and I'm looking to migrate to a new storage 
medium.

The obvious answer, I think, is a direct-attached disk array (which I would be 
able to put in a remote gigabit-attached datacenter before too long).  However, 
I'm wondering if anyone is currently doing large (or what seem to me to be 
large) backups to the cloud in some way?  Assuming I have a gigabit connection 
to the Internet from my datacenter, I'm wondering how feasible it would be to 
either use something like Amazon S3 with s3fs (I'm guessing way too much 
overhead to be efficient), or a bacula-SD implementation on an EC2 node, using 
Elastic Block Store (EBS) as "local" disk, and VPN (Amazon VPC) between my 
datacenter and the SD.

Substitute your favorite cloud provider for Amazon above; I don't use any right 
now so not tied to any particular provider.  It just seems like Amazon has all 
the necessary pieces today.

To do this, and keep customers comfortable with the idea of data in the cloud, 
we would need to encrypt, so I'm also wondering if it would be possible for the 
SD to encrypt the backup volume, rather than the FD encrypt the data before 
sending it to SD (which is what we do now)?  Easier to manage if we just 
handled encryption in one place for all clients.

I would love to hear what other people are either doing with Bacula and the 
cloud, or why you have decided not to.

Thanks

Peter Zenge
Pzenge .at. ilinc .dot. com




[Bacula-users] file daemon support for Windows 2003 64-bit

2008-02-22 Thread Peter Zenge
Can anyone confirm whether the file daemon (client) runs on Windows
64-bit, and if so, what if any restrictions there are?

 

 
