[Bacula-users] Windows client firewalling problem

2009-12-07 Thread Kevin Keane
I recently upgraded my Bacula installation from 2.4 to 3.0. Almost everything works 
beautifully, except for one Windows Server 2008 Standard client machine (64-bit). 
This machine is running the 64-bit version of winbacula 3.0.2. The Director 
and SD are both on other machines on the same subnet. The Windows server uses 
Microsoft's Windows Firewall with Advanced Security.

The error message is:

07-Dec 01:36 akechi-denki-fd JobId 435: Fatal error: Authorization key rejected 
by Storage daemon.
Please see 
http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION00376
 for help.
07-Dec 01:36 akechi-denki-fd JobId 435: Fatal error: Failed to authenticate 
Storage daemon.
07-Dec 01:36 nctechcenter-dir JobId 435: Fatal error: Bad response to Storage 
command: wanted 2000 OK storage
, got 2902 Bad storage

I added both inbound and outbound rules to the Windows firewall to allow all 
connections to and from bacula-fd.exe. The firewall log also shows that the 
connections from the Director to port 9102, and from the FD to the SD's port 
9103, are successful. I do not see any blocked connections listed at all.

Yet I found that the problem goes away when I turn off the Windows firewall.

So I am trying to find out what else the firewall might be doing to interfere 
with Bacula, and what other rules I might need.
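
For reference, the application-scoped rules I added look roughly like this 
(the rule names and install path are mine; adjust as needed):

netsh advfirewall firewall add rule name="Bacula FD in" dir=in action=allow program="C:\Program Files\Bacula\bacula-fd.exe" protocol=TCP localport=9102
netsh advfirewall firewall add rule name="Bacula FD out" dir=out action=allow program="C:\Program Files\Bacula\bacula-fd.exe" protocol=TCP remoteport=9103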

Thanks!



Re: [Bacula-users] Very slow interactive restore

2009-12-07 Thread Christoph Litauer
Christoph Litauer wrote:
 Arno Lehmann wrote:
 Hi,

 24.11.2009 08:59, Christoph Litauer wrote:
 Christoph Litauer wrote:
 Jesper Krogh wrote:
 Christoph Litauer wrote:
 Thanks! One last question (hopefully): How big is /var/lib/mysql/ibdata1?
 282GB on ext3

 Dear Jesper,

 in the meantime I set up a test environment - so far without success regarding
 the performance. What I forgot to ask: which MySQL version are you
 running?

 And another request, please:

 Could you - or someone else - please select any JobId and execute the
 following (my)sql-statement:

 mysql> EXPLAIN SELECT Path.Path, Filename.Name, File.FileIndex,
 File.JobId, File.LStat
 FROM (
 SELECT max(FileId) as FileId, PathId, FilenameId
 FROM (
 SELECT FileId, PathId, FilenameId
 FROM File
 WHERE JobId IN (insert your JobId here)
 ) AS F GROUP BY PathId, FilenameId
 ) AS Temp JOIN Filename ON (Filename.FilenameId = Temp.FilenameId) JOIN
 Path ON (Path.PathId = Temp.PathId) JOIN File ON (File.FileId =
 Temp.FileId) WHERE File.FileIndex > 0 ORDER BY JobId, FileIndex ASC

 Please post the result. Thanks in advance!
 Sure...

 mysql> EXPLAIN SELECT Path.Path, Filename.Name, File.FileIndex, File.JobId, 
 File.LStat FROM ( SELECT max(FileId) as FileId, PathId, FilenameId FROM ( 
 SELECT FileId, PathId, FilenameId FROM File WHERE JobId IN (11902)) AS F 
 GROUP BY PathId, FilenameId ) AS Temp JOIN Filename ON (Filename.FilenameId 
 = Temp.FilenameId) JOIN Path ON (Path.PathId = Temp.PathId) JOIN File ON 
 (File.FileId = Temp.FileId) WHERE File.FileIndex > 0 ORDER BY JobId, 
 FileIndex ASC;
 +----+-------------+------------+--------+---------------+---------+---------+-----------------+-------+---------------------------------+
 | id | select_type | table      | type   | possible_keys | key     | key_len | ref             | rows  | Extra                           |
 +----+-------------+------------+--------+---------------+---------+---------+-----------------+-------+---------------------------------+
 |  1 | PRIMARY     | <derived2> | ALL    | NULL          | NULL    | NULL    | NULL            | 60905 | Using temporary; Using filesort |
 |  1 | PRIMARY     | Path       | eq_ref | PRIMARY       | PRIMARY | 4       | Temp.PathId     |     1 |                                 |
 |  1 | PRIMARY     | Filename   | eq_ref | PRIMARY       | PRIMARY | 4       | Temp.FilenameId |     1 |                                 |
 |  1 | PRIMARY     | File       | eq_ref | PRIMARY       | PRIMARY | 8       | Temp.FileId     |     1 | Using where                     |
 |  2 | DERIVED     | <derived3> | ALL    | NULL          | NULL    | NULL    | NULL            | 60905 | Using temporary; Using filesort |
 |  3 | DERIVED     | File       | ref    | JobId,JobId_2 | JobId_2 | 4       |                 | 52471 |                                 |
 +----+-------------+------------+--------+---------------+---------+---------+-----------------+-------+---------------------------------+
 6 rows in set (6.99 sec)
 This is a MyISAM catalog with 14776513 Files, 1163114 FileNames, and 
 198492 Paths. Machine is a Dual-Core Opteron with 2GB RAM and a decent 
 disk subsystem. MySQL is not exactly configured for maximum performance.
 
 Thanks a lot, Arno. May I ask you, too: how long does an interactive restore of
 a big filesystem take to build the directory tree?
 

Seems as if I found the reason: I had been running version 3.0.2, which
uses a complicated SQL statement like the one above to build the directory
tree. Version 3.0.3 uses more, but simpler, SQL queries ...

I read the release notes for this version once again but couldn't find
any hint regarding that fix.

-- 
Kind regards
Christoph

Christoph Litauer  lita...@uni-koblenz.de
Uni Koblenz, Computing Center, http://www.uni-koblenz.de/~litauer
Postfach 201602, 56016 Koblenz Fon: +49 261 287-1311, Fax: -100 1311
PGP-Fingerprint: F39C E314 2650 650D 8092 9514 3A56 FBD8 79E3 27B2




[Bacula-users] 287GB data, 100MB/day, 2 weeks restore. Howto setup

2009-12-07 Thread Niklas Hagman
Hi. I have a customer that has 287 GB of data that needs to be
backed up. The data changes by roughly 100 MB per day. The
customer wants to be able to restore files up to 2 weeks back in time. How do
I set this up so that it requires as little space as possible?

I was thinking about using the new Virtual Backup (VirtualFull) feature
that exists since version 3.0.
One full backup and 13 incremental backups would always exist. When a new
incremental backup is added, the oldest incremental is merged into
the full backup by a script the same day. Is this possible?
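
From my reading of the manual, the merge step would be triggered roughly like
this from bconsole (the job and pool names are made up, and this is untested):

run job=CustomerBackup level=VirtualFull yes

and, as far as I understand, the pool the jobs were written to needs a Next
Pool for the consolidated backup, something like:

Pool {
  Name = FullPool
  Pool Type = Backup
  Next Pool = ConsolidatedPool  # VirtualFull writes the merged full here
}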

I guess I will get into problems if Bacula suddenly decides to do a full
backup because something in the config file has changed. Well, well...



[Bacula-users] Bacula Problem Storage File

2009-12-07 Thread Oliver Knittel

Hello all,

I get the following message when I ask for the status of the Storage resource "File":

Fatal error: bsock.c:135 Unable to connect to Storage daemon on
backupsrv.ke-si.intern:9103. ERR=Connection refused

DNS is correct, the server name has been checked many times, and the FQDN is in
the hosts file too. The password has been verified.

When I use telnet I get connection refused too. Does anybody have an idea?

Thanks for your help 

Oli




Re: [Bacula-users] Bacula Problem Storage File

2009-12-07 Thread John Drescher
 I get the following message when I ask for the status of the Storage resource "File":

 Fatal error: bsock.c:135 Unable to connect to Storage daemon on
 backupsrv.ke-si.intern:9103. ERR=Connection refused

 DNS is correct, the server name has been checked many times, and the FQDN is in
 the hosts file too. The password has been verified.

 When I use telnet I get connection refused too. Does anybody have an idea?


My first question: do you have 127.0.0.1 or localhost in your bacula-dir.conf?
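
For reference, I mean the Address in the Storage resource of bacula-dir.conf.
The Director hands that address to the client when the job starts, so it must
be a name or IP the client can reach - not a loopback address. Roughly (the
Device and Media Type values are just examples):

Storage {
  Name = File
  Address = backupsrv.ke-si.intern  # not 127.0.0.1 or localhost
  SDPort = 9103
  Password = "sd-password"
  Device = FileStorage
  Media Type = File
}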

John



Re: [Bacula-users] Bacula Problem Storage File

2009-12-07 Thread John Drescher
On Mon, Dec 7, 2009 at 9:56 AM, John Drescher dresche...@gmail.com wrote:
 I get the following message when I ask for the status of the Storage resource "File":

 Fatal error: bsock.c:135 Unable to connect to Storage daemon on
 backupsrv.ke-si.intern:9103. ERR=Connection refused

 DNS is correct, the server name has been checked many times, and the FQDN is in
 the hosts file too. The password has been verified.

 When I use telnet I get connection refused too. Does anybody have an idea?


 My first question: do you have 127.0.0.1 or localhost in your 
 bacula-dir.conf?

Also bacula-sd.conf?

John



Re: [Bacula-users] Bacula Problem Storage File

2009-12-07 Thread Uwe Schuerkamp
On Mon, Dec 07, 2009 at 06:33:59AM -0800, Oliver Knittel wrote:
 
 Hello all,
 
 I get the following message when I ask for the status of the Storage resource "File":
 
 Fatal error: bsock.c:135 Unable to connect to Storage daemon on
 backupsrv.ke-si.intern:9103. ERR=Connection refused
 
 DNS is correct, the server name has been checked many times, and the FQDN is in
 the hosts file too. The password has been verified.
 
 When I use telnet I get connection refused too. Does anybody have an idea?
 
 Thanks for your help 
 
 Oli

Could your storage daemon be listening on a different IP? The fact that you
don't get a connection via telnet is a giveaway of sorts... 
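
A quick way to check on the SD host (assuming Linux) would be something like:

netstat -tlnp | grep 9103

and then compare the address bacula-sd is bound to against the SDAddress
directive in bacula-sd.conf, if one is set.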

Uwe 


-- 
uwe.schuerk...@nionex.net fon: [+49] 5242.91 - 4740, fax:-69 72
Head office: Avenwedder Str. 55, D-33311 Gütersloh, Germany
Commercial register: Gütersloh HRB 4196, Managing directors: H. Gosewehr, D. Suda
NIONEX is a company of DirectGroup Germany www.directgroupgermany.de



Re: [Bacula-users] Bacula Problem Storage File

2009-12-07 Thread Kevin Keane
Since telnet is refused, too, odds are that either there is a firewall 
involved, or the FQDN resolves to the wrong host.

BTW, telnet failing is no longer always a reliable indicator of whether the 
firewall is configured correctly, since many firewalls have different rules on 
a per-application basis. So telnet may fail while Bacula succeeds, or vice versa.

 -Original Message-
 From: Oliver Knittel [mailto:o...@systemhaus-kec.de]
 Sent: Monday, December 07, 2009 6:34 AM
 To: bacula-users@lists.sourceforge.net
 Subject: [Bacula-users] Bacula Problem Storage File
 
 
  Hello all,
  
  I get the following message when I ask for the status of the Storage resource "File":
  
  Fatal error: bsock.c:135 Unable to connect to Storage daemon on
  backupsrv.ke-si.intern:9103. ERR=Connection refused
  
  DNS is correct, the server name has been checked many times, and the FQDN is
  in the hosts file too. The password has been verified.
  
  When I use telnet I get connection refused too. Does anybody have an idea?
  
  Thanks for your help
  
  Oli
 
 



[Bacula-users] [SPAM] Re: [SPAM] Re: [SPAM] Bacula TimeMachine type SOHOconfig?

2009-12-07 Thread Simon J Mudd
timo-n...@tee-en.net (Timo Neuvonen) writes:

 Simon J Mudd sjm...@pobox.com wrote in message 

...

  Yes, but that's what I'm trying to avoid. I realise that I MUST have
  sufficient space really for at least 2 full backups plus some extra for
  incrementals but I don't want to worry about the details. Therefore I want
  to configure
 
 You said you don't want to worry about the details. However, one such very 
 strong detail is the schedule you have already specified: it says to run a 
 full backup once a month. The required retention time is closely related to 
 this, and needs to be specified too.

Again, I think you're missing the point. You are right, in a business
environment you do want to decide to do X full backups every certain
period of time, X incrementals etc. and then you need to do some
calculations to work out how much disk space you need for this. This
value of course changes and you may later need to add more storage or
tapes or whatever to accommodate these changes.

Think of the normal HOME user who may have an interest in Bacula to
backup data. He has a unix PC with disks occupying say 100GB of
space. So he buys himself a 1TB external USB disk and wants to use
that for backups. If it's dedicated he'll want to use ALL the space
for backups and keep as much as he can. So he's likely to want to do
perhaps a single weekly or monthly backup followed by incrementals in
between. Exactly how many backups he keeps is relatively unimportant.

And for this type of scenario bacula is tricky (from what I can see)
to set up. I've had multiple problems (due to misconfiguration) with bacula
not labelling new disk volumes in the pool, and also with it not removing
the oldest backups when the disk starts to fill up.

I'm not a backup administrator and have plenty of other distractions
which prevent me from properly working out how to get bacula running.
That's why I suggested that a recipe for the type of configuration I describe
might be extremely useful.

 Since now you haven't specified the volume retention, Bacula uses its 
 internal default which is one year, 365 days. You have to specify a shorter 
 volume retention time if you want to be able to recycle the volumes sooner. 

But I don't want retention to depend on time, but on disk usage.

...

 Btw, you can use list media command to see the status of the existing 
 volumes.

So while you can define how many volumes to have and their sizes, you can't
get bacula to purge based on these values?

...

  the pool to auto purge if it fills up. New full or incremental backups
  will create new volumes as needed, and the older ones will get purged.
 
 Actually, Bacula will recycle the existing volumes, that is, discard the old 
 data in the volume, and use the same recycled volume again. So the volume 
 name won't change (unless this is possible due to some very new Bacula 
 feature).

That's fine. In the end I don't care how the volumes are labelled, or if 
new ones are created or existing ones are reused.

 Within reasonable limits (reasonable amount of disk space available), this 
 should be possible with Bacula.

So it sounds like part of my problem has been misunderstanding the precise terms
used in Bacula. It sounds like I don't want to purge the disk volumes, but
to recycle them. So how do I configure this:

- A fixed number of disk volumes of a predetermined size which will
be recycled when no more space is left? Ideally the recycling in this simple
case would be based on a FIFO-type principle (see the sketch below).
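
Based on my reading of the Pool resource documentation, something like this
might do it (values are made up and untested):

Pool {
  Name = Default
  Pool Type = Backup
  Maximum Volumes = 18            # fixed number of volumes
  Maximum Volume Bytes = 50G      # predetermined size per volume
  Recycle = yes
  Purge Oldest Volume = yes       # FIFO: reuse the oldest volume when none are free
  Label Format = "Vol-"
}

Would that behave the way I describe?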

Simon



Re: [Bacula-users] Bacula Problem Storage File

2009-12-07 Thread Bruno Friedmann
Oliver Knittel wrote:
 Hello all,
 
 I get the following message when I ask for the status of the Storage resource "File":
 
 Fatal error: bsock.c:135 Unable to connect to Storage daemon on
 backupsrv.ke-si.intern:9103. ERR=Connection refused
 
 DNS is correct, the server name has been checked many times, and the FQDN is in
 the hosts file too. The password has been verified.
 
 When I use telnet I get connection refused too. Does anybody have an idea?
 
 Thanks for your help 
 
 Oli

To add some lines to the other good advice:

- Mixed versions of dir / sd (I don't think so, but check)
- A recent update of openssl without a full restart (a running process still
using old, deleted libs)


-- 

 Bruno Friedmann




Re: [Bacula-users] [SPAM] Re: [SPAM] Re: [SPAM] Bacula TimeMachinetype SOHOconfig?

2009-12-07 Thread Timo Neuvonen

Simon J Mudd sjm...@pobox.com wrote in message 
news:m3fx7mkdv3@mad06.wl0.org...
 timo-n...@tee-en.net (Timo Neuvonen) writes:

 Simon J Mudd sjm...@pobox.com wrote in message

 ...

  Yes, but that's what I'm trying to avoid. I realise that I MUST have
  sufficient space really for at least 2 full backups plus some extra for
  incrementals but I don't want to worry about the details. Therefore I want
  to configure

 You said you don't want to worry about the details. However, one such very
 strong detail is the schedule you have already specified: it says to run a
 full backup once a month. The required retention time is closely related to
 this, and needs to be specified too.

 Again, I think you're missing the point. You are right, in a business
 environment you do want to decide to do X full backups every certain
 period of time, X incrementals etc. and then you need to do some
 calculations to work out how much disk space you need for this. This
 value of course changes and you may later need to add more storage or
 tapes or whatever to accommodate these changes.

 Think of the normal HOME user who may have an interest in Bacula to
 backup data. He has a unix PC with disks occupying say 100GB of
 space. So he buys himself a 1TB external USB disk and wants to use
 that for backups. If it's dedicated he'll want to use ALL the space
 for backups and keep as much as he can. So he's likely to want to do
 perhaps a single weekly or monthly backup followed by incrementals in
 between. Exactly how many backups he keeps is relatively unimportant.

 And for this type of scenario bacula is tricky (from what I can see)
 to set up. I've had multiple problems (due to misconfiguration) with bacula
 not labelling new disk volumes in the pool, and also with it not removing
 the oldest backups when the disk starts to fill up.

 I'm not a backup administrator and have plenty of other distractions
 which prevent me from properly working out how to get bacula running.
 That's why I suggested that a recipe for the type of configuration I describe
 might be extremely useful.

 Since now you haven't specified the volume retention, Bacula uses its
 internal default, which is one year, 365 days. You have to specify a shorter
 volume retention time if you want to be able to recycle the volumes sooner.

 But I don't want retention to depend on time, but on disk usage.




Bacula can use all the disk space you allow it to use; that is controlled with 
the volume size and the maximum number of volumes, which you set to reasonable 
values in the configuration. The volume retention time is just a minimum 
time limit: if your disk space allows it, the old data in un-recycled 
volumes will still be available there after a much longer time (in theory, 
forever). I think this is what you wanted, so I can't see any actual problem 
there. But if you absolutely don't want to change the default volume 
retention time to something that fits your application, there isn't 
much else to do, I think. Explicitly specifying the volume retention time is 
the only way to make Bacula recycle the volumes in less than a year, since 
365 days is Bacula's internal default.
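
In Pool terms, that means something like the following (the values are
examples only; pick what fits your disk):

Pool {
  Name = Default
  Pool Type = Backup
  Volume Retention = 2 weeks     # minimum age before a volume may be recycled
  Recycle = yes
  AutoPrune = yes
  Maximum Volumes = 20
  Maximum Volume Bytes = 5G
}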



 ...

 Btw, you can use list media command to see the status of the existing
 volumes.

 So while you can define how many volumes to have and their sizes, you can't
 get bacula to purge based on these values?

 ...

  the pool to auto purge if it fills up. New full or incremental backups
  will create new volumes as needed, and the older ones will get purged.

 Actually, Bacula will recycle the existing volumes, that is, discard the old
 data in the volume, and use the same recycled volume again. So the volume
 name won't change (unless this is possible due to some very new Bacula
 feature).

 That's fine. In the end I don't care how the volumes are labelled, or if
 new ones are created or existing ones are reused.

 Within reasonable limits (reasonable amount of disk space available), this
 should be possible with Bacula.

 So it sounds like part of my problem has been misunderstanding the precise
 terms used in Bacula. It sounds like I don't want to purge the disk volumes,
 but to recycle them. So how do I configure this:

 - A fixed number of disk volumes of a predetermined size which will
 be recycled when no more space is left? Ideally the recycling in this simple
 case would be based on a FIFO-type principle.



If you don't want to have _any_ minimum time limit for volume retention, 
just set it to one second, which probably is the shortest value you can 
specify.

In theory, this can lead to a situation where one full backup consumes more 
space than is designated for backup use, and recycling of the first volume 
used for that backup then happens before the backup is finished. But if you 
prefer this to seeing an error message in such an obvious case of 
malfunctioning, go for it.

Seriously, a more reasonable value might be one 

Re: [Bacula-users] estimating time remaining on a backup

2009-12-07 Thread Alex Chekholko
On Thu, 3 Dec 2009 13:46:02 +
Gavin McCullagh gavin.mccull...@gcd.ie wrote:

 Hi,
 
 I started a full backup last night of a tired old Windows-based Dell NAS.
 It's very slow.  The filesystem is full and super-fragmented.  I also have
 compression turned on which makes the cpu work rather hard and slows things
 down even further.  1.5MB/sec :-(
 

Doesn't that tell you all you need to know?  Knowing the rate (1.5MB/s)
and the total amount of data in the full backup will give you the total
time, no?
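
To put numbers on it: if, say, the NAS holds 200GB (a figure I'm inventing
for the example), that's roughly 200,000MB / 1.5MB/s = ~133,000 seconds,
i.e. about a day and a half.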

-- 
Alex Chekholko   ch...@pcbi.upenn.edu



Re: [Bacula-users] 287GB data, 100MB/day, 2 weeks restore. Howto setup

2009-12-07 Thread James Harper
 Hi. I have a customer that has 287 GB of data that needs to be
 backed up. The data changes by roughly 100 MB per day. The
 customer wants to be able to restore files up to 2 weeks back in time. How
 do I set this up so that it requires as little space as possible?

 I was thinking about using the new Virtual Backup (VirtualFull) feature
 that exists since version 3.0.
 One full backup and 13 incremental backups would always exist. When a
 new incremental backup is added, the oldest incremental is merged
 into the full backup by a script the same day. Is this possible?

It's possible, but while the new vbackup is being synchronised, you will
need the space to hold the full backup you are using as the 'base', and
the virtual full backup you are building, so the space requirements will
be similar to just doing another full backup. 100MB per day isn't much
compared to 287GB so I don't know that the virtual full buys you that
much.

In my setup which has similar requirements, I run a full backup once a
week and incremental backups every few hours during the day, and retain
the older backups for 15 days. This means I have 2 full backups at any
point in time, and 3 backups for a few days while the expired full
hasn't been overwritten yet.

I just use a permanently attached USB disk to hold all the backups, so
space isn't really an issue - disks are cheap so more can be added if
required.

Also, every night I synthesize a virtual full backup to tape to be taken
offsite for DR purposes.

If this is a Windows system, then bear in mind that the normal VSS
backup of MSSQL Server isn't going to do what you expect when you do an
incremental backup, and even less so if you are doing Virtual Full
backups.

James

(btw, USB disks suck these days - the disks are so much faster than USB
can handle that it seems like a waste. I'm starting to use eSATA instead
when possible)




[Bacula-users] SOT: Better strategy for backup PostgreSQL DB

2009-12-07 Thread ReynierPM
Hi there:

I'm trying to establish the rules and guidelines to be followed by a 
Data Center (DC) on the topics of backing up and restoring information. The 
software that I have proposed to perform these tasks is, of course, Bacula[1]. 
It is already configured and backing up some services (SaS) 
available in the DC. However, the issue of databases with PostgreSQL has 
caused me concern, and I have given myself the task of finding possible ways to 
store the information (structure & contents) of the BD.

So far I have investigated two ways of saving the contents:
1. WAL files with Point In Time Recovery (PITR) (the better but more 
expensive option)
2. Making a DUMP of the tables of the DB

The first, I think, is the best; it resolves all or nearly all needs. As the 
PostgreSQL manual [2] and this website I found [3] say, the advantages are:

- The backups do not need to be consistent: you need a copy of the files of 
the cluster (that is, the contents of the directory where 
the files of the BD are stored) and the WAL files
- A full DUMP of the BD is not necessary
- Incremental
- Continuous
- Point In Time Recovery: you can restore the DB to a point in time

But it also has the following disadvantages:
- Additional complexity
- The need for more storage capacity
- Increased write and read disk IO, which may impact the 
performance of the server
- It works only on the full BD cluster

The second, which is not the best but is not bad ;), resolves the 
issue of saving the structure and contents of the BD, but it 
consumes additional resources every time you perform a backup, as it 
has to dump the whole BD to files and then copy those files; it also does not 
let me do PITR.

Taking into account what I expressed previously, which option would you 
recommend?

For the first option I have a little problem: I do not have WAL archiving 
enabled on my server, so there are no .wal files there. What would be the 
best strategy to follow then? Generate a full dump of the DB, and from 
that point start generating the .wal files? Where can I find documentation 
about enabling the .wal archiving? And what maximum amount of space would 
be needed for the BDs (10 at the moment)?
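
For option 1, my understanding of the manual [2] so far is that the setup
would be roughly the following (the paths are invented; on 8.2 setting
archive_command is what enables archiving, while newer releases also need
archive_mode = on):

In postgresql.conf:

archive_command = 'cp %p /var/backups/wal/%f'

Then take a base backup that Bacula can pick up:

psql -U postgres -c "SELECT pg_start_backup('base');"
tar cf /var/backups/base.tar /var/lib/pgsql/data
psql -U postgres -c "SELECT pg_stop_backup();"

Am I on the right track?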

Waiting for your comments

[1] www.bacula.org
[2] http://www.postgresql.org/docs/8.2/static/continuous-archiving.html
[3] http://www.wzdftpd.net/trac/wiki/Misc/PostgreSQL/BackupPITR

-- 
Cheers
ReynierPM



Re: [Bacula-users] SOT: Better strategy for backup PostgreSQL DB

2009-12-07 Thread Dan Langille
ReynierPM wrote:
 Hi there:
 
 I'm trying to establish the rules and guidelines to be followed by a 
 Data Center (DC) on the topics of backing up and restoring information. The 
 software that I have proposed to perform these tasks is, of course, Bacula[1]. 
 It is already configured and backing up some services (SaS) 
 available in the DC. However, the issue of databases with PostgreSQL has 
 caused me concern, and I have given myself the task of finding possible ways to 
 store the information (structure & contents) of the BD.

BD?  Is that database (DB)?  I will assume it is.

 So far I have investigated two ways of saving the contents:
 1. WAL files with Point In Time Recovery (PITR) (the better but more 
 expensive option)
 2. Making a DUMP of the tables of the DB
 
 The first, I think, is the best; it resolves all or nearly all needs. As the 
 PostgreSQL manual [2] and this website I found [3] say, the advantages are:
 
 - The backups do not need to be consistent: you need a copy of the files of 
 the cluster (that is, the contents of the directory where 
 the files of the BD are stored) and the WAL files
 - A full DUMP of the BD is not necessary
 - Incremental
 - Continuous
 - Point In Time Recovery: you can restore the DB to a point in time
 
 But it also has the following disadvantages:
 - Additional complexity
 - The need for more storage capacity
 - Increased write and read disk IO, which may impact the 
 performance of the server
 - It works only on the full BD cluster
 
 The second, which is not the best but is not bad ;), resolves the 
 issue of saving the structure and contents of the BD, but it 
 consumes additional resources every time you perform a backup, as it 
 has to dump the whole BD to files and then copy those files; it also does not 
 let me do PITR.
 
 Taking into account what I expressed previously, which option would you 
 recommend?
 
 For the first option I have a little problem: I do not have WAL archiving 
 enabled on my server, so there are no .wal files there. What would be the 
 best strategy to follow then? Generate a full dump of the DB, and from 
 that point start generating the .wal files? Where can I find documentation 
 about enabling the .wal archiving? And what maximum amount of space would 
 be needed for the BDs (10 at the moment)?

Your choice of PITR or pg_dump pretty much depends upon your time to 
rebuild the DB.  If that takes 20 minutes and you're happy with that, go 
with pg_dump.
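
If you go the pg_dump route, the usual trick is to dump to a file right
before the job runs and let Bacula back up that file. In the Job resource
(the script name and paths are invented):

ClientRunBeforeJob = "/usr/local/sbin/pg_dump_all.sh"

where the script is little more than:

#!/bin/sh
# Dump every database to a file that the FileSet includes
pg_dumpall -U postgres > /var/backups/pgdumpall.sql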

 
 Waiting for your comments
 
 [1] www.bacula.org
 [2] http://www.postgresql.org/docs/8.2/static/continuous-archiving.html
 [3] http://www.wzdftpd.net/trac/wiki/Misc/PostgreSQL/BackupPITR
 




Re: [Bacula-users] Bacula Problem Storage File

2009-12-07 Thread Oliver Knittel

Nothing worked, so now I am reinstalling the server from scratch, keeping my 
config files. I will tell you the result soon. Via telnet it was possible to 
get onto the machine. I have now changed the name from backupsrv to backup.



Oliver Knittel wrote:
 
 Hello all,
 
 I get the following message when I ask for the status of the Storage resource "File":
 
 Fatal error: bsock.c:135 Unable to connect to Storage daemon on
 backupsrv.ke-si.intern:9103. ERR=Connection refused
 
 DNS is correct, the server name has been checked many times, and the FQDN is in
 the hosts file too. The password has been verified.
 
 When I use telnet I get connection refused too. Does anybody have an idea?
 
 Thanks for your help 
 
 Oli
 





Re: [Bacula-users] 287GB data, 100MB/day, 2 weeks restore. Howto setup

2009-12-07 Thread Niklas Hagman
Thank you, James, for that answer. As I expected, 2 full backups seem to
need to exist to be able to restore 2 weeks back.

But what about this:
(F = full, I = incremental. Day 1 to day 13.)
F+I+I+I+I+I+I+I+I+I+I+I+I
Then merge the oldest incremental into the full one, creating something
like this:
F+I+F+I+I+I+I+I+I+I+I+I+I+I
I can then remove the first F+I without losing my 2 weeks possibility
to restore.

Is this possible?

I have played around with run job= level=VirtualFull and have noticed
that it always creates the full backup last in the chain, like this:
F+I+I+I+I+I+I+I+I+I+I+I+I+F. Maybe there exist some parameters so I can
control this more?


On 12/08/2009 12:12 AM, James Harper wrote:
  Hi. I have a customer that has 287 GB of data that needs to be
  backed up. The data changes by roughly 100 MB per day. The
  customer wants to be able to restore files up to 2 weeks back in time. How
  do I set this up so that it requires as little space as possible?
 
  I was thinking about using the new Virtual Backup (VirtualFull) feature
  that exists since version 3.0.
  One full backup and 13 incremental backups would always exist. When a
  new incremental backup is added, the oldest incremental is merged
  into the full backup by a script the same day. Is this possible?
 
 It's possible, but while the new vbackup is being synchronised, you will
 need the space to hold the full backup you are using as the 'base', and
 the virtual full backup you are building, so the space requirements will
 be similar to just doing another full backup. 100MB per day isn't much
 compared to 287GB so I don't know that the virtual full buys you that
 much.
 
 In my setup which has similar requirements, I run a full backup once a
 week and incremental backups every few hours during the day, and retain
 the older backups for 15 days. This means I have 2 full backups at any
 point in time, and 3 backups for a few days while the expired full
 hasn't been overwritten yet.
 
 I just use a permanently attached USB disk to hold all the backups, so
 space isn't really an issue - disks are cheap so more can be added if
 required.
 
 Also, every night I synthesize a virtual full backup to tape to be taken
 offsite for DR purposes.
 
 If this is a Windows system, then bear in mind that the normal VSS
 backup of MSSQL Server isn't going to do what you expect when you do an
 incremental backup, and even less so if you are doing Virtual Full
 backups.
 
 James
 
 (btw, USB disks suck these days - the disks are so much faster than USB
 can handle that it seems like a waste. I'm starting to use eSATA instead
 when possible)
 
