Re: [Bacula-users] Capabilities of Bacula 7.0.5

2015-10-19 Thread Radosław Korzeniewski
Hello,

2015-10-18 17:40 GMT+02:00 Compdoc :

> >If you buy the specific VM backup plugin yes. Including block level
> incremental backups of the VM.
>
> Is there a page to find the prices of these plugins? I looked but couldn't
> find any, and I never buy from companies that don't list prices. When
> online businesses make you contact them for prices, it means you can't
> afford them...
>

Well, did you try to find prices on IBM's webpage for its products? When you
click BUY, you need to ask for a quotation; no prices are available on
the website. Bacula Systems is not an online business.

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] waiting for client to connect to storage file

2015-10-19 Thread Radosław Korzeniewski
Hello,

2015-10-19 4:25 GMT+02:00 Tim Dunphy :

> Hey guys,
>
>  I've got a new problem on my bacula setup. Not sure what changed
> recently, but now for some reason when I go to backup any client I get this
> message when I check st dir:
>
> Running Jobs:
> Console connected at 19-Oct-15 01:03
>  JobId  Type Level Files Bytes  Name  Status
> ==
>      5  Back Full      0     0  db1.jokefire.com  is waiting for Client db1.jokefire.com to connect to Storage File
> 
>
>
This message means that the client (bacula-fd) is unable to connect to the
storage (bacula-sd). So it is probable that your bacula-sd is firewalled or
that something else is wrong (a wrong Address in the Storage resource, etc.).
Check whether you can connect from any of your clients to the backup server on
port 9103 (the SD port).
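A quick way to test that from a client host, sketched in bash (the host name "backup.example.com" is a placeholder for your SD's address):

```shell
#!/usr/bin/env bash
# Probe a TCP port using bash's /dev/tcp pseudo-device; "timeout" keeps a
# firewalled (DROP) host from hanging the check for minutes.
sd_port_open() {
  local host=$1 port=${2:-9103}
  timeout 5 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null
}

if sd_port_open backup.example.com 9103; then
  echo "SD port reachable"
else
  echo "SD port NOT reachable - check the firewall and the Storage Address"
fi
```

If the port is closed from the client but open locally on the server, the problem is almost certainly a firewall between the two.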

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net


Re: [Bacula-users] Bconsole format dates

2015-10-19 Thread Wanderlei Huttel
I already knew that it was possible to set default environment variables,
but the way I used them didn't work.

I had to use "locale -a" to discover which locales I have installed:
root@bacula:/tmp# locale -a
C
C.UTF-8
POSIX
pt_BR.utf8

And insert this line at the beginning of /etc/init.d/bacula-dir:
export LC_ALL=C.UTF-8


Thanks Ana and Jarif


[Bacula-users] Tell bacula not to repeat backup when target moved

2015-10-19 Thread bdam
This has come up again, and I wanted to post the answer here to help future 
searchers. The real problem was that Bacula doesn't follow symlinks (it backs 
them up as links, not the data behind them), yet the data behind them could 
physically move across disks etc. while in reality remaining unchanged.

What I didn't know then is that Linux can mount one folder onto another using 
"mount -o bind". So you can create a kind of virtual filesystem where the 
branches all point to where you'd otherwise have symlinks, but it still appears 
to Bacula as a single filesystem, which it can then back up with no trouble, since 
there are no symlinks. The only thing to bear in mind is that your FileSet Options 
needs to have the "onefs=no" entry (the default is "yes"); otherwise it will not 
traverse the new "virtual" filesystem fully.

Also, be sure to "move" the branches if you do any maintenance after the Full 
backup has been performed, rather than "copy", or it will think the data has 
changed. It's a neat system.
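The layout described above can be sketched like this (all paths here are examples, not from the original post):

```
# Build a single tree out of directories that live on different disks.
mkdir -p /backup-root/projects /backup-root/archive
mount -o bind /mnt/disk1/projects /backup-root/projects
mount -o bind /mnt/disk2/archive  /backup-root/archive
```

And the matching FileSet needs "onefs=no" so Bacula traverses the bind mounts:

```
FileSet {
  Name = "BindTree"
  Include {
    Options {
      onefs = no   # default is "yes"; required to cross the bind mounts
    }
    File = /backup-root
  }
}
```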

+--
|This was sent by bill.dam...@yahoo.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--





Re: [Bacula-users] Capabilities of Bacula 7.0.5

2015-10-19 Thread Compdoc
>Well, did you try to find prices at IBM webpage for its products? 

I used to work for IBM. I know I can't afford them. Now I service computers for 
small and medium-sized businesses, and I look for alternatives to yearly support 
subscriptions because, in the end, I find the answers to their problems. It's 
been many years since I've needed paid support. Many years.

Really, who doesn't want clear, concise pricing while avoiding salesmen who can 
set prices based on what they think you can afford to pay? And who doesn't want 
anonymity? I spend a lot of money with Amazon because I can see their prices and 
there's no waiting...

Anyway, everyone wants to see Bacula do well, including me, but adopting the 
business models of huge corporations seems like the antithesis of open source, 
and possibly marks the end of a great open source project. It's sad, is all. 
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [Bacula-users] "column "volabytes" does not exist"

2015-10-19 Thread Mattinson David
root@pisces:/usr/local/share/bacula# ./update_postgresql_tables

It looks like you are running it as root.  Do it as the bacula user/role, normally 
called "bacula":

su - bacula
/usr/local/share/bacula/update_postgresql_tables

Or use su's -c option if your security policy doesn't allow the above.

Dave
This email and any attachments to it may be confidential and are
intended solely for the use of the individual to whom it is 
addressed. If you are not the intended recipient of this email,
you must neither take any action based upon its contents, nor 
copy or show it to anyone. Please contact the sender if you 
believe you have received this email in error. QinetiQ may 
monitor email traffic data and also the content of email for 
the purposes of security. QinetiQ Limited (Registered in England
& Wales: Company Number: 3796233) Registered office: Cody Technology 
Park, Ively Road, Farnborough, Hampshire, GU14 0LX  http://www.qinetiq.com.



Re: [Bacula-users] waiting for client to connect to storage file

2015-10-19 Thread Tim Dunphy
>
>
> This message means that a client (bacula-fd) is unable to connect to the
> storage (bacula-sd). So it is probably that you bacula-sd is firewalled or
> something (a wrong Address in Storage resource, etc.). Check if you can
> connect from any of your clients to the backup server on 9103 (it is a SD
> port).



That was it!! Thanks. Not sure why I keep forgetting this. I'll make a note
to myself next time.

Thanks,
Tim


On Mon, Oct 19, 2015 at 3:40 AM, Radosław Korzeniewski <
rados...@korzeniewski.net> wrote:

> Hello,
>
> 2015-10-19 4:25 GMT+02:00 Tim Dunphy :
>
>> Hey guys,
>>
>>  I've got a new problem on my bacula setup. Not sure what changed
>> recently, but now for some reason when I go to backup any client I get this
>> message when I check st dir:
>>
>> Running Jobs:
>> Console connected at 19-Oct-15 01:03
>>  JobId  Type Level Files Bytes  Name  Status
>> ==
>>      5  Back Full      0     0  db1.jokefire.com  is waiting for Client db1.jokefire.com to connect to Storage File
>> 
>>
>>
> This message means that a client (bacula-fd) is unable to connect to the
> storage (bacula-sd). So it is probably that you bacula-sd is firewalled or
> something (a wrong Address in Storage resource, etc.). Check if you can
> connect from any of your clients to the backup server on 9103 (it is a SD
> port).
>
> best regards
> --
> Radosław Korzeniewski
> rados...@korzeniewski.net
>



-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B


Re: [Bacula-users] Capabilities of Bacula 7.0.5

2015-10-19 Thread Kern Sibbald
On 10/19/2015 05:15 AM, Compdoc wrote:
> >Well, did you try to find prices at IBM webpage for its products? 
>
> I used to work for IBM. I know i cant afford them. Now I service
> computers for small and medium size businesses, and I look for
> alternatives to yearly support subscriptions because in the end, I
> find the answers to their problems. It's been many years since I've
> needed paid support. Many years.
>
> Really, who doesn't want clear, concise pricing while avoiding
> salesmen who can set prices based on what they think you can afford to
> pay? And who doesn't want anonymity? I spend a lot money with Amazon
> because i can see their prices and there's no waiting...
>
> Anyway, everyone wants to see bacula do well, including me, but
> adopting the business models of huge corporations seems like the
> antithesis of open source, and possibly marks the end of a great open
> source project. It's sad, is all.

Your point of view seems to me to be a bit too pessimistic.  Bacula
Systems is a company that (for the moment) is competing with big
companies in the enterprise market such as EMC, Symantec, IBM, HP, and
others.  Bacula Systems is doing quite well in that market and has been
consistently growing at about 65% in revenue each year since its
creation; consequently, the future of Bacula Systems looks quite bright.
Its rather standard way of dealing with pricing seems to be more an
advantage than a hindrance, at least in the enterprise market.

Then to extrapolate from a positive Bacula Systems corporate financial future
to the "end of a great open source project" is a pretty big leap that
does not appear to me to be reasonable.  In addition, as long as Bacula
Systems continues to create new code for the Bacula Enterprise version,
the community version will continue to grow and evolve with the
migration of Enterprise code to the community.  Even without Bacula
Systems, though it would be harder, the Bacula community could do quite well.

Best regards,
Kern

> -- 
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
>
> --
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users



[Bacula-users] Bacula bug on status schedule?

2015-10-19 Thread Wanderlei Huttel
Hi Guys

The manual says:
Date-time-specification determines when the Job is to be run. The
specification is a repetition, and as a default Bacula is set to run a job
at the beginning of the hour of every hour of every day of every week of
every month of every year.

I used the schedule example of manual:

Schedule {
   Name = "TenMinutes"
   Run = Level=Full hourly at 0:05
   Run = Level=Full hourly at 0:15
   Run = Level=Full hourly at 0:25
   Run = Level=Full hourly at 0:35
   Run = Level=Full hourly at 0:45
   Run = Level=Full hourly at 0:55
}


I expected it to schedule hour by hour, but it doesn't.

*status schedule job=Backup_Machine_0001 days=1

Scheduled Jobs:
Level  Type    Pri  Scheduled          Job Name           Schedule
=
Full   Backup   10  Mon 19-Oct 00:05   Backup_Machine001  Schedule_TenMinutes
Full   Backup   10  Mon 19-Oct 00:15   Backup_Machine001  Schedule_TenMinutes
Full   Backup   10  Mon 19-Oct 00:25   Backup_Machine001  Schedule_TenMinutes
Full   Backup   10  Mon 19-Oct 00:35   Backup_Machine001  Schedule_TenMinutes
Full   Backup   10  Mon 19-Oct 00:45   Backup_Machine001  Schedule_TenMinutes
Full   Backup   10  Mon 19-Oct 00:55   Backup_Machine001  Schedule_TenMinutes



Is it a bug?

Thanks Wanderlei


Re: [Bacula-users] waiting for client to connect to storage file

2015-10-19 Thread Heitor Faria
>> This message means that a client (bacula-fd) is unable to connect to the 
>> storage
>> (bacula-sd). So it is probably that you bacula-sd is firewalled or something 
>> (a
>> wrong Address in Storage resource, etc.). Check if you can connect from any 
>> of
>> your clients to the backup server on 9103 (it is a SD port).
> That was it!! Thanks. Not sure why I keep forgetting this. I'll make a note to
> myself next time.

> Thanks,
> Tim

I had this crazy idea that "status client" should also test the connection 
from the client to the storage, or maybe a "status client-storage". 
But I looked at the status code and could not get further. It is in Mantis anyway. 

Regards, 
=== 
Heitor Medrado de Faria - LPIC-III | ITIL-F | Bacula Systems Certified 
Administrator II 
Do you need Bacula training? http://bacula.us/video-classes/ 
I do Bacula training and deploy in any city of the world. More information: 
http://bacula.us/ 
+55 61 8268-4220 
Site: http://bacula.us FB: heitor.faria 
=== 

> On Mon, Oct 19, 2015 at 3:40 AM, Radosław Korzeniewski <
> rados...@korzeniewski.net > wrote:

>> Hello,

>> 2015-10-19 4:25 GMT+02:00 Tim Dunphy < bluethu...@gmail.com > :

>>> Hey guys,
>>> I've got a new problem on my bacula setup. Not sure what changed recently, 
>>> but
>>> now for some reason when I go to backup any client I get this message when I
>>> check st dir:

>>> Running Jobs:
>>> Console connected at 19-Oct-15 01:03
>>> JobId Type Level Files Bytes Name Status
>>> ==
>>> 5 Back Full 0 0 db1.jokefire.com is waiting for Client db1.jokefire.com to
>>> connect to Storage File
>>> 

>> This message means that a client (bacula-fd) is unable to connect to the 
>> storage
>> (bacula-sd). So it is probably that you bacula-sd is firewalled or something 
>> (a
>> wrong Address in Storage resource, etc.). Check if you can connect from any 
>> of
>> your clients to the backup server on 9103 (it is a SD port).

>> best regards
>> --
>> Radosław Korzeniewski
>> rados...@korzeniewski.net

> --
> GPG me!!

> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B

> --

> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula bug on status schedule?

2015-10-19 Thread Ana Emília M . Arruda
Hello Wanderlei,

It seems that "status schedule" is not showing the hourly schedule correctly.
But the schedule itself is working as expected: the jobs are being run every ten
minutes, as defined in the Schedule resource.

Also, a "show schedule=TenMinutes" shows the values as configured.

So, could you open a bug report in Mantis for this?

Best regards,
Ana

On Mon, Oct 19, 2015 at 10:19 AM, Wanderlei Huttel <
wanderleihut...@gmail.com> wrote:

> Hi Guys
>
> In manual says:
> Date-time-specification determines when the Job is to be run. The
> specification is a repetition, and as a default Bacula is set to run a job
> at the beginning of the hour of every hour of every day of every week of
> every month of every year.
>
> I used the schedule example of manual:
>
> Schedule {
>Name = "TenMinutes"
>Run = Level=Full hourly at 0:05
>Run = Level=Full hourly at 0:15
>Run = Level=Full hourly at 0:25
>Run = Level=Full hourly at 0:35
>Run = Level=Full hourly at 0:45
>Run = Level=Full hourly at 0:55
> }
>
>
> I expected schedule hour by hour, but It doesn't.
>
> *status schedule job=Backup_Machine_0001 days=1
>
> Scheduled Jobs:
> Level  Type Pri  Scheduled  Job Name   Schedule
>
> =
> Full   Backup10  Mon 19-Oct 00:05   Backup_Machine001
>  Schedule_TenMinutes
> Full   Backup10  Mon 19-Oct 00:15   Backup_Machine001
> Schedule_TenMinutes
> Full   Backup10  Mon 19-Oct 00:25   Backup_Machine001
> Schedule_TenMinutes
> Full   Backup10  Mon 19-Oct 00:35   Backup_Machine001
> Schedule_TenMinutes
> Full   Backup10  Mon 19-Oct 00:45   Backup_Machine001
> Schedule_TenMinutes
> Full   Backup10  Mon 19-Oct 00:55   Backup_Machine001
> Schedule_TenMinutes
> 
>
>
> Is it a bug?
>
> Thanks Wanderlei
>
>
> --
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>


Re: [Bacula-users] backup failure

2015-10-19 Thread Thing
*list joblog jobid=62
+------------------------------------------------------------------------------+
| LogText                                                                      |
+------------------------------------------------------------------------------+
| warlocke-dir JobId 62: shell command: run BeforeJob "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" |
| warlocke-dir JobId 62: Start Backup JobId 62, Job=BackupCatalog.2015-10-19_23.10.00_39 |
| warlocke-dir JobId 62: Using Device "FileStorage" |
| warlocke-sd JobId 62: Volume "Local-0001" previously written, moving to end of data. |
| warlocke-sd JobId 62: Ready to append to end of Volume "Local-0001" size=5908984169 |
| warlocke-fd JobId 62: Could not stat "/var/lib/bacula/.sql": ERR=No such file or directory |
| warlocke-sd JobId 62: Job write elapsed time = 00:00:01, Transfer rate = 0 Bytes/second |
| warlocke-dir JobId 62: Bacula warlocke-dir 5.2.6 (21Feb12):
  Build OS:               arm-unknown-linux-gnueabihf debian 7.0
  JobId:                  62
  Job:                    BackupCatalog.2015-10-19_23.10.00_39
  Backup Level:           Full
  Client:                 "warlocke-fd" 5.2.6 (21Feb12) arm-unknown-linux-gnueabihf,debian,7.0
  FileSet:                "Catalog" 2015-09-27 23:10:00
  Pool:                   "File" (From Job resource)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "File" (From Job resource)
  Scheduled time:         19-Oct-2015 23:10:00
  Start time:             19-Oct-2015 23:10:11
  End time:               19-Oct-2015 23:10:12
  Elapsed time:           1 sec
  Priority:               11
  FD Files Written:       0
  SD Files Written:       0
  FD Bytes Written:       0 (0 B)
  SD Bytes Written:       0 (0 B)
  Rate:                   0.0 KB/s
  Software Compression:   None
  VSS:                    no
  Encryption:             no
  Accurate:               no
  Volume name(s):
  Volume Session Id:      31
  Volume Session Time:    1444076755
  Last Volume Bytes:      5,908,984,571 (5.908 GB)
  Non-fatal FD errors:    1
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK -- with warnings |
| warlocke-dir JobId 62: Begin pruning Jobs older than 6 months. |
| warlocke-dir JobId 62: No Jobs found to prune. |
| warlocke-dir JobId 62: Begin pruning Files. |
| warlocke-dir JobId 62: No Files found to prune. |
| warlocke-dir JobId 62: End auto prune. |
| warlocke-dir JobId 62: shell command: run AfterJob "/etc/bacula/scripts/delete_catalog_backup" |
| warlocke-dir JobId 62: Error: Runscript: AfterJob returned non-zero status=200. ERR=Permission denied |
+------------------------------------------------------------------------------+
+-------+---------------+---------------------+------+-------+----------+----------+-----------+
| JobId | Name          | StartTime           | Type | Level | JobFiles | JobBytes | JobStatus |
+-------+---------------+---------------------+------+-------+----------+----------+-----------+
|    62 | BackupCatalog | 2015-10-19 23:10:11 | B    | F     |        0 |        0 | T         |
+-------+---------------+---------------------+------+-------+----------+----------+-----------+
*
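The AfterJob error at the end of the log (status=200, ERR=Permission denied) usually means the user the Director runs as cannot execute the script. A minimal first check, sketched in shell (the stock Debian script path and the "bacula" service user are assumptions, not taken from the log):

```shell
# Hypothetical helper: report whether a script exists and is executable
# by the current user.
check_script() {
  if [ -x "$1" ]; then
    echo "OK: $1 is executable"
  else
    echo "PROBLEM: $1 is missing or not executable"
  fi
}

# Run this as (or via "su -" to) the bacula user against the AfterJob script:
check_script /etc/bacula/scripts/delete_catalog_backup
```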


On 20 October 2015 at 10:04, Ana Emília M. Arruda 
wrote:

> Hello Thing,
>
> Could you post the log for any of your BackupCatalog job that runs into
> error? From bconsole:
>
> list joblog jobid=62
>
> Best regards,
> Ana
>
> On Mon, Oct 19, 2015 at 5:52 PM, Thing  wrote:
>
>> Hi,
>>
>>
>> I have a catalogue backup error.
>>
>>
>> =
>>
>> Enter a period to cancel a command.
>> *status
>> Status available for:
>>  1: Director
>>  2: Storage
>>  3: Client
>>  4: All
>> Select daemon type for status (1-4): 1
>> warlocke-dir Version: 5.2.6 (21 February 2012)
>> arm-unknown-linux-gnueabihf debian 7.0
>> Daemon started 06-Oct-15 09:26. Jobs: run=32, running=0 mode=0,0
>>  Heap: heap=405,504 smbytes=92,543 max_bytes=9,929,949 bufs=288
>> max_bufs=1,227
>>
>> Scheduled Jobs:
>> Level  Type Pri  Scheduled  Name   Volume
>>
>> ===
>> IncrementalBackup10  20-Oct-15 23:05BackupLocalFiles
>> Local-0001
>> Full   Backup11  20-Oct-15 23:10BackupCatalog
>> Local-0001
>> 
>>
>> Running Jobs:
>> Console connected at 09-Oct-15 15:49
>> Console connected at 20-Oct-15 09:21
>> Console connected at 20-Oct-15 09:28
>> No 

Re: [Bacula-users] 350TB backup

2015-10-19 Thread Dimitri Maziuk
On 10/19/2015 03:53 PM, Thing wrote:
> Hi,
> 
> Is anyone backing total volumes of this order?  and if so, what sort of
> scaling, design, hardware?

I take it, that's the size of your filesystems? Not the estimated size
of the backup set (i.e. all cycles in retention period)?

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





Re: [Bacula-users] backup failure

2015-10-19 Thread John Drescher
> Does the catalogue failure matter? I assume it does
>

Having a current copy of the catalog can save you a lot of time if
your catalog database gets corrupt or totally lost in a disaster
situation.

John



[Bacula-users] 350TB backup

2015-10-19 Thread Thing
Hi,

Is anyone backing up total volumes of this order? And if so, what sort of
scaling, design, and hardware?


regards

Steven


[Bacula-users] backup failure

2015-10-19 Thread Thing
Hi,


I have a catalogue backup error.


=

Enter a period to cancel a command.
*status
Status available for:
 1: Director
 2: Storage
 3: Client
 4: All
Select daemon type for status (1-4): 1
warlocke-dir Version: 5.2.6 (21 February 2012) arm-unknown-linux-gnueabihf
debian 7.0
Daemon started 06-Oct-15 09:26. Jobs: run=32, running=0 mode=0,0
 Heap: heap=405,504 smbytes=92,543 max_bytes=9,929,949 bufs=288
max_bufs=1,227

Scheduled Jobs:
Level  Type Pri  Scheduled  Name   Volume
===
IncrementalBackup10  20-Oct-15 23:05BackupLocalFiles
Local-0001
Full   Backup11  20-Oct-15 23:10BackupCatalog
Local-0001


Running Jobs:
Console connected at 09-Oct-15 15:49
Console connected at 20-Oct-15 09:21
Console connected at 20-Oct-15 09:28
No Jobs running.


Terminated Jobs:
 JobId  Level     Files    Bytes    Status   Finished         Name
====
    54  Incr        866  48.24 M   OK       16-Oct-15 23:06  BackupLocalFiles
    55  Full          0  0         Error    16-Oct-15 23:10  BackupCatalog
    56  Incr        521  37.50 M   OK       17-Oct-15 23:06  BackupLocalFiles
    57  Full          0  0         Error    17-Oct-15 23:10  BackupCatalog
    58  Diff      6,082  185.8 M   OK       18-Oct-15 23:09  BackupLocalFiles
    59  Full          0  0         Error    18-Oct-15 23:10  BackupCatalog
    60           92,720  2.814 G   OK       19-Oct-15 16:00  RestoreLocalFiles
    61  Incr      5,176  62.35 M   OK       19-Oct-15 23:08  BackupLocalFiles
    62  Full          0  0         Error    19-Oct-15 23:10  BackupCatalog
    63                0  0         Error    20-Oct-15 09:23  RestoreLocalFiles


*




Does the catalogue failure matter? I assume it does.


Backups seem OK, just doing a test restore.




regards

Steven


Re: [Bacula-users] backup failure

2015-10-19 Thread Ana Emília M . Arruda
Hello Thing,

Could you post the log for any of your BackupCatalog job that runs into
error? From bconsole:

list joblog jobid=62

Best regards,
Ana

On Mon, Oct 19, 2015 at 5:52 PM, Thing  wrote:

> Hi,
>
>
> I have a catalogue backup error.
>
>
> =
>
> Enter a period to cancel a command.
> *status
> Status available for:
>  1: Director
>  2: Storage
>  3: Client
>  4: All
> Select daemon type for status (1-4): 1
> warlocke-dir Version: 5.2.6 (21 February 2012) arm-unknown-linux-gnueabihf
> debian 7.0
> Daemon started 06-Oct-15 09:26. Jobs: run=32, running=0 mode=0,0
>  Heap: heap=405,504 smbytes=92,543 max_bytes=9,929,949 bufs=288
> max_bufs=1,227
>
> Scheduled Jobs:
> Level  Type Pri  Scheduled  Name   Volume
>
> ===
> IncrementalBackup10  20-Oct-15 23:05BackupLocalFiles
> Local-0001
> Full   Backup11  20-Oct-15 23:10BackupCatalog
> Local-0001
> 
>
> Running Jobs:
> Console connected at 09-Oct-15 15:49
> Console connected at 20-Oct-15 09:21
> Console connected at 20-Oct-15 09:28
> No Jobs running.
> 
>
> Terminated Jobs:
>  JobId  LevelFiles  Bytes   Status   FinishedName
> 
> 54  Incr86648.24 M  OK   16-Oct-15 23:06
> BackupLocalFiles
> 55  Full  0 0   Error16-Oct-15 23:10 BackupCatalog
> 56  Incr52137.50 M  OK   17-Oct-15 23:06
> BackupLocalFiles
> 57  Full  0 0   Error17-Oct-15 23:10 BackupCatalog
> 58  Diff  6,082185.8 M  OK   18-Oct-15 23:09
> BackupLocalFiles
> 59  Full  0 0   Error18-Oct-15 23:10 BackupCatalog
> 60   92,7202.814 G  OK   19-Oct-15 16:00
> RestoreLocalFiles
> 61  Incr  5,17662.35 M  OK   19-Oct-15 23:08
> BackupLocalFiles
> 62  Full  0 0   Error19-Oct-15 23:10 BackupCatalog
> 630 0   Error20-Oct-15 09:23
> RestoreLocalFiles
>
> 
> *
>
> 
>
>
> Does the catalogue failure matter? I assume it does
>
>
> Backups seem OK, just doing a test restore.
>
>
>
>
> regards
>
> Steven
>
>
> --
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>


Re: [Bacula-users] Bacula schedule

2015-10-19 Thread More, Ankush
Thanks, Ana, for the quick response.
I will check.

Thank you,
Ankush More

From: Ana Emília M. Arruda [mailto:emiliaarr...@gmail.com]
Sent: Monday, October 19, 2015 10:50 PM
To: More, Ankush
Cc: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] Bacula schedule

Hello Ankush,

On Mon, Oct 19, 2015 at 1:22 PM, More, Ankush 
> wrote:
Team,

Below is the schedule configured in the Director config file (Bacula version 7.x).
Backup is not happening as per the schedule; I notice the full and differential backups 
happening on the same day.
Is there something wrong in the schedule?

Schedule {
  Name = "Billable4"

  Run = Full 1st-3rd sun at 15:00

this will run on the 1st, 2nd and 3rd Sundays

  Run = Incremental mon-thu at 22:00

  Run = Differential 2nd-4th sun at 15:00

this will run on the 2nd, 3rd and 4th Sundays

So they are coinciding on the 2nd and 3rd Sundays. I suppose you want to run 
them interchangeably:

Run = Full 1st,3rd sun at 15:00
Run = Differential 2nd,4th sun at 15:00

If you have a 5th Sunday, no job will run, OK? Maybe you want to run the full on 
your 5th Sunday:

Run = Full 1st,3rd,5th sun at 15:00

The values are comma-separated.
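Putting those corrections together, the whole resource might read like this (a sketch reusing the names from the original post):

```
Schedule {
  Name = "Billable4"
  Run = Full 1st,3rd,5th sun at 15:00
  Run = Differential 2nd,4th sun at 15:00
  Run = Incremental mon-thu at 22:00
}
```

With this, the full and differential runs no longer coincide on any Sunday.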



Thank you,
Ankush More

​Best regards,
Ana​





Re: [Bacula-users] 350TB backup

2015-10-19 Thread Robert M. Candey

  
  
We have 80TB of heliophysics data that we mirror with rsync daily to another
storage server for fast switchover. We use Bacula to make quarterly full
backups to LTO-5 tapes that we send to another building (and annually to an
Iron Mountain facility with 10-year retention), and incremental and
differential backups in between to another tape pool.  We split the full
backups into 5 jobs by parts of the directory hierarchy in order to keep the
backups under a week long, with the 5th job being everything not included in
the specific directories of the first 4 jobs, and the PostgreSQL catalog.  It
made a difference to run the backups from a separate server with a dedicated
spool RAID array, 48GB RAM, and Fibre Channel to the tape library, with the
servers connected through a 10GbE Ethernet switch.

We'll soon be getting 80TB more data a year and so are getting an LTO-7
library and putting the mirrored storage in separate buildings (GlusterFS on
top of ZFS).  I'm also thinking of using SSDs for the spool area; does anyone
have recommendations on that?

Robert Candey



 Original Message 
Subject: Re: [Bacula-users] 350TB backup
From: Thing
To: Bacula-users@lists.sourceforge.net
Date: Mon Oct 19 2015 17:49:23 GMT-0400 (EDT)
  Multiple NFS file systems on a NAS array.  500TB total, 350TB used.
Research data, much of it rarely accessed, after 1 year things like climate
data up to 30 years old, probably highly compressible.   Suspect multiple
bacula backup instances to distribute the load? Growth about 30tb a year.

On 20 October 2015 at 10:08, Dimitri Maziuk  wrote:


  
On 10/19/2015 03:53 PM, Thing wrote:


  Hi,

Is anyone backing total volumes of this order?  and if so, what sort of
scaling, design, hardware?



I take it, that's the size of your filesystems? Not the estimated size
of the backup set (i.e. all cycles in retention period)?

--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu



--

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users










Re: [Bacula-users] Bconsole Reload Sometimes Doesn't Load New Job Resource Definitions

2015-10-19 Thread Heitor Faria
> Hello,

> We make a backup product that uses Bacula as the backend. Our systems run 
> Ubuntu
> v12.04 (kernel v3.12.17) with Bacula v5.2.12. Our product sets up the job
> resource definitions based on the user's choices, it validates the
> configuration with `bacula-dir -t`, and then it issues the bconsole `reload`
> command to load the new job resource. On a particular customer system, we find
> that the new job resource has not been loaded after the `reload` command has
> completed. If we give the `run` command, the new job is not in the list of 
> jobs
> that can be executed. If we restart bacula-dir, then the job resource is 
> loaded
> and it can be run. We haven't been able to figure out why `reload` isn't doing
> what we expect, so I hope that you can offer some advice. I have attached the
> Director configuration file from the affected system (bacula-dir.conf) and the
> file where we put the job resource definitions (jobs.conf), which is imported
> by the main configuration file.

> I've found a few relevant points in researching this problem online. The
> bconsole documentation seems to discourage you from using `reload`:

> "While it is possible to reload the Director's configuration on the fly, even
> while jobs are executing, this is a complex operation and not without side
> effects. Accordingly, if you have to reload the Director's configuration while
> Bacula is running, it is advisable to restart the Director at the next
> convenient opportunity."

> http://www.bacula.org/5.2.x-manuals/en/console/console/Bacula_Console.html

> Indeed, I have found that restarting the Director causes the new job resource 
> to
> be loaded, but this is not a good solution for us because we often have many
> Bacula jobs running simultaneously; the "convenient" times when the Director
> could be restarted will be too infrequent.

> I also found a closed bug ticket from a user who describes practically the 
> same
> problem:

> http://bugs.bacula.org/view.php?id=1573

Hello Rich: is there any message after using the reload command? (the 
"messages" command) 
Can you post the output of a "show jobs" after the reload command? 
P.S.: Bacula is two major releases ahead, and I think this hypothetical bug 
would only be fixed if it can be reproduced in version 7.2.0, but I'm not a 
developer. 
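For context, a minimal bconsole sequence for verifying a reload (a generic sketch, not specific to the poster's configuration) looks like:

```
reload
messages        # parse errors from the new configuration, if any, show up here
show jobs       # the newly added job resource should now be listed
run             # the new job should appear in the selection list
```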

> Thank you in advance for your help.

> Regards,
> Rich Otero
> Director, Technical Support and Professional Services
> EditShare
> rot...@editshare.com
> 617-782-0479

Regards, 
=== 
Heitor Medrado de Faria - LPIC-III | ITIL-F | Bacula Systems Certified 
Administrator II 
Do you need Bacula training? http://bacula.us/video-classes/ 
I do Bacula training and deploy in any city of the world. More information: 
http://bacula.us/ 
+55 61 8268-4220 
Site: http://bacula.us FB: heitor.faria 
=== 
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] 350TB backup

2015-10-19 Thread Thing
Multiple NFS file systems on a NAS array. 500TB total, 350TB used.
Research data, much of it rarely accessed after one year (things like
climate data up to 30 years old), probably highly compressible. I suspect
multiple Bacula backup instances to distribute the load? Growth is about
30TB a year.

On 20 October 2015 at 10:08, Dimitri Maziuk  wrote:

> On 10/19/2015 03:53 PM, Thing wrote:
> > Hi,
> >
> > Is anyone backing total volumes of this order?  and if so, what sort of
> > scaling, design, hardware?
>
> I take it, that's the size of your filesystems? Not the estimated size
> of the backup set (i.e. all cycles in retention period)?
>
> --
> Dimitri Maziuk
> Programmer/sysadmin
> BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
>
>
>
>
>


Re: [Bacula-users] backup failure

2015-10-19 Thread Ana Emília M . Arruda
Hello Thing,

Please check the FileSet for your BackupCatalog job. The job cannot find
the MySQL dump (bacula.sql) in your working directory, /var/lib/bacula.

Your FileSet must include this file:

FileSet {
  Name = "CatalogFileSet"
  Include {
Options {
  signature = MD5
}
File = /var/lib/bacula/bacula.sql
  }
}

Also, the delete_catalog_backup script does not have the necessary
permissions to run. If you're running the script as the "bacula" user, for
example, you must set the necessary permissions for this user to run the
script.
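The symptom can be reproduced on a stand-in script; the path below is a temp file, not the real /etc/bacula/scripts/delete_catalog_backup, and the mode bits (750) are a common choice, not something Bacula mandates:

```shell
#!/bin/sh
# Demonstrate the permission failure and the fix on a throwaway script.
script=$(mktemp)
printf '#!/bin/sh\nexit 0\n' > "$script"

chmod 000 "$script"       # simulate the broken state (no read/execute)
broken=$(sh "$script" 2>/dev/null && echo ok || echo denied)

chmod 750 "$script"       # the fix: read+execute for owner and group
fixed=$(sh "$script" 2>/dev/null && echo ok || echo denied)

echo "before fix: $broken, after fix: $fixed"
rm -f "$script"
```

On the real system the equivalent would be a chmod on the script under /etc/bacula/scripts, owned so that the user running the Director can execute it.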

| warlocke-dir JobId 62: shell command: run AfterJob
"/etc/bacula/scripts/delete_catalog_backup"
  |
| warlocke-dir JobId 62: Error: Runscript: AfterJob returned non-zero
status=200. ERR=Permission denied

Best regards,
Ana


On Mon, Oct 19, 2015 at 6:42 PM, Thing  wrote:

> *list joblog jobid=62
>
> ++
> |
> LogText
> |
>
> ++
> | warlocke-dir JobId 62: shell command: run BeforeJob "/etc/bacula/scripts/
> make_catalog_backup.pl MyCatalog"
>  |
> | warlocke-dir JobId 62: Start Backup JobId 62,
> Job=BackupCatalog.2015-10-19_23.10.00_39
>   |
> | warlocke-dir JobId 62: Using Device "FileStorage"
>|
> | warlocke-sd JobId 62: Volume "Local-0001" previously written, moving to
> end of data.
> |
> | warlocke-sd JobId 62: Ready to append to end of Volume "Local-0001"
> size=5908984169
>  |
> | warlocke-fd JobId 62:  Could not stat "/var/lib/bacula/.sql": ERR=No
> such file or directory
>  |
> | warlocke-sd JobId 62: Job write elapsed time = 00:00:01, Transfer rate =
> 0  Bytes/second
> |
> | warlocke-dir JobId 62: Bacula warlocke-dir 5.2.6 (21Feb12):
>   Build OS:   arm-unknown-linux-gnueabihf debian 7.0
>   JobId:  62
>   Job:BackupCatalog.2015-10-19_23.10.00_39
>   Backup Level:   Full
>   Client: "warlocke-fd" 5.2.6 (21Feb12)
> arm-unknown-linux-gnueabihf,debian,7.0
>   FileSet:"Catalog" 2015-09-27 23:10:00
>   Pool:   "File" (From Job resource)
>   Catalog:"MyCatalog" (From Client resource)
>   Storage:"File" (From Job resource)
>   Scheduled time: 19-Oct-2015 23:10:00
>   Start time: 19-Oct-2015 23:10:11
>   End time:   19-Oct-2015 23:10:12
>   Elapsed time:   1 sec
>   Priority:   11
>   FD Files Written:   0
>   SD Files Written:   0
>   FD Bytes Written:   0 (0 B)
>   SD Bytes Written:   0 (0 B)
>   Rate:   0.0 KB/s
>   Software Compression:   None
>   VSS:no
>   Encryption: no
>   Accurate:   no
>   Volume name(s):
>   Volume Session Id:  31
>   Volume Session Time:1444076755
>   Last Volume Bytes:  5,908,984,571 (5.908 GB)
>   Non-fatal FD errors:1
>   SD Errors:  0
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:Backup OK -- with warnings
>
>  |
> | warlocke-dir JobId 62: Begin pruning Jobs older than 6 months .
>  |
> | warlocke-dir JobId 62: No Jobs found to prune.
>   |
> | warlocke-dir JobId 62: Begin pruning Files.
>  |
> | warlocke-dir JobId 62: No Files found to prune.
>  |
> | warlocke-dir JobId 62: End auto prune.
>
>  |
> | warlocke-dir JobId 62: shell command: run AfterJob
> "/etc/bacula/scripts/delete_catalog_backup"
>   |
> | warlocke-dir JobId 62: Error: Runscript: AfterJob returned non-zero
> status=200. ERR=Permission denied
>  |
>
> ++
>
> +---+---+-+--+---+--+--+---+
> | JobId | Name  | StartTime   | Type | Level | JobFiles |
> JobBytes | JobStatus |
>
> +---+---+-+--+---+--+--+---+
> |62 | BackupCatalog | 2015-10-19 23:10:11 | B| F |0
> |0 | T |
>
> +---+---+-+--+---+--+--+---+
> *
>
>
> On 20 October 2015 at 10:04, Ana Emília M. Arruda 
> wrote:
>
>> Hello Thing,
>>
>> Could you post the log for any of your BackupCatalog job that runs into
>> error? From bconsole:
>>
>> list joblog jobid=62
>>
>> Best regards,
>> Ana
>>
>> On Mon, Oct 19, 2015 at 

Re: [Bacula-users] backup failure

2015-10-19 Thread Thing
8><
62  Full  0        0        Error  19-Oct-15 23:10  BackupCatalog
63        0        0        Error  20-Oct-15 09:23  RestoreLocalFiles
64        108,785  3.198 G  OK     20-Oct-15 09:45  RestoreLocalFiles
65  Full  1        38.68 M  Error  20-Oct-15 11:12  BackupCatalog


*list joblog jobid=65
++
|
LogText
|
++
| warlocke-dir JobId 65: shell command: run BeforeJob "/etc/bacula/scripts/
make_catalog_backup.pl MyCatalog"
 |
| warlocke-dir JobId 65: Start Backup JobId 65,
Job=BackupCatalog.2015-10-20_11.11.52_04
  |
| warlocke-dir JobId 65: Using Device "FileStorage"
   |
| warlocke-sd JobId 65: Volume "Local-0001" previously written, moving to
end of data.
|
| warlocke-sd JobId 65: Ready to append to end of Volume "Local-0001"
size=5908984571
 |
| warlocke-sd JobId 65: Job write elapsed time = 00:00:01, Transfer rate =
38.68 M Bytes/second
   |
| warlocke-dir JobId 65: Bacula warlocke-dir 5.2.6 (21Feb12):
  Build OS:   arm-unknown-linux-gnueabihf debian 7.0
  JobId:  65
  Job:BackupCatalog.2015-10-20_11.11.52_04
  Backup Level:   Full
  Client: "warlocke-fd" 5.2.6 (21Feb12)
arm-unknown-linux-gnueabihf,debian,7.0
  FileSet:"Catalog" 2015-10-20 11:11:52
  Pool:   "File" (From Job resource)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"File" (From Job resource)
  Scheduled time: 20-Oct-2015 11:11:50
  Start time: 20-Oct-2015 11:12:04
  End time:   20-Oct-2015 11:12:05
  Elapsed time:   1 sec
  Priority:   11
  FD Files Written:   1
  SD Files Written:   1
  FD Bytes Written:   38,689,048 (38.68 MB)
  SD Bytes Written:   38,689,163 (38.68 MB)
  Rate:   38689.0 KB/s
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): Local-0001
  Volume Session Id:  33
  Volume Session Time:1444076755
  Last Volume Bytes:  5,947,702,852 (5.947 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

 |
| warlocke-dir JobId 65: Begin pruning Jobs older than 6 months .
 |
| warlocke-dir JobId 65: No Jobs found to prune.
  |
| warlocke-dir JobId 65: Begin pruning Files.
 |
| warlocke-dir JobId 65: No Files found to prune.
 |
| warlocke-dir JobId 65: End auto prune.

 |
| warlocke-dir JobId 65: shell command: run AfterJob
"/etc/bacula/scripts/delete_catalog_backup"
  |
| warlocke-dir JobId 65: Error: Runscript: AfterJob returned non-zero
status=200. ERR=Permission denied
 |
++
+---+---+-+--+---+--++---+
| JobId | Name  | StartTime   | Type | Level | JobFiles |
JobBytes   | JobStatus |
+---+---+-+--+---+--++---+
|65 | BackupCatalog | 2015-10-20 11:12:04 | B| F |1 |
38,689,048 | T |
+---+---+-+--+---+--++---+
*
8><-

This looks better?



On 20 October 2015 at 11:11, Thing  wrote:

> ah,
>
> 8><
> # This is the backup of the catalog
> FileSet {
>   Name = "Catalog"
>   Include {
> Options {
>   signature = MD5
> }
> File = "/var/lib/bacula/.sql"
>   }
> }
>
> # Client (File Services) to backup
> =
> 8><-
>
> so .sql should be bacula.sql I assume?
>
>
permissions... hmm, will take a look
>
> On 20 October 2015 at 10:59, Ana Emília M. Arruda 
> wrote:
>
>> ​Hello Thing,
>>
>> Please check the FileSet for your BackupCatalog job. ​The job cannot find
>> the MySQL dump (bacula.sql) at your working directory /var/lib/bacula.
>>
>> Your FileSet must include this file:
>>
>> FileSet {
>>   Name = "CatalogFileSet"
>>   Include {
>> Options {
>>   signature = MD5
>> }
>> File = /var/lib/bacula/bacula.sql
>>   }
>> }
>>
>> Also, the delete_catalog_backup script do not have the necessary
>> permissions to run. If you´re running the script as "bacula" user for
>> example, you 

[Bacula-users] verify error with LTO hardware encryption

2015-10-19 Thread Mark D. Strohm
Hello-

Is there a trick to using Bacula with LTO hardware encryption enabled?

With drive encryption turned on, verify jobs are hitting an I/O error reading 
the first record of a tape file.

On a test job that went to files 78, 79 and 80, the error looks like this:

19-Oct 14:03 ccnback-sd JobId 988: Ready to read from volume "CCNB12" on tape 
device "Magnum-224-LTO4" (/dev/tape/by-id/scsi-1IBM_ULTRIUM-TD4_1310025811-nst).
19-Oct 14:03 ccnback-sd JobId 988: Forward spacing Volume "CCNB12" to 
file:block 78:0.
19-Oct 14:19 ccnback-sd JobId 988: Error: block.c:429 Read error on fd=5 at 
file:blk 79:0 on device "Magnum-224-LTO4" 
(/dev/tape/by-id/scsi-1IBM_ULTRIUM-TD4_1310025811-nst). ERR=Input/output error.
19-Oct 14:19 ccnback-sd JobId 988: End of Volume at file 79 on device 
"Magnum-224-LTO4" (/dev/tape/by-id/scsi-1IBM_ULTRIUM-TD4_1310025811-nst), 
Volume "CCNB12"

dd can read the data from both files. The boundary is on the phrase "varius 
sed feugiat". The end of file 78 and the start of 79 are:

"varius sed "

"~\274\301\214^@^@\374^@^@^A.\300BB02^@^@^@^AV%X\226^@^@^@3\377\377\377\376^@^@\313\233feugiat"

I’m using Bacula 7.0.5 with an LTO-4 drive and stenc 1.0.7 to control 
encryption.

Any advice would be appreciated.

Thank you very much.
Mark
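A first diagnostic step may be to dump the drive's encryption status before running the verify job; a sketch, assuming the installed stenc build supports the --detail option:

```
# Device path taken from the log above; adjust to your system.
stenc -f /dev/tape/by-id/scsi-1IBM_ULTRIUM-TD4_1310025811-nst --detail
```

If the status shows a mode change (for example, encryption enabled mid-volume), that would be consistent with the read error landing exactly on a file boundary.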




Re: [Bacula-users] Bacula bug on status schedule?

2015-10-19 Thread Wanderlei Huttel
Hi Ana

That's what I thought!
I haven't done any backup tests, but I noticed that the schedule was OK
using the command "show schedule=TenMinutes".

Bug report opened
http://bugs.bacula.org/view.php?id=2178

Best Regards
Wanderlei
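For reference, the "hourly at 0:05 ... 0:55" lines in the schedule quoted below should expand to a run every ten minutes, all day; a small sketch of the expected run times:

```python
# Expand the six "hourly at" run lines into the times Bacula should
# schedule over one day: one run every ten minutes.
minutes = [5, 15, 25, 35, 45, 55]
runs = [f"{hour:02d}:{minute:02d}" for hour in range(24) for minute in minutes]
print(len(runs))            # 144 runs per day
print(runs[:3], runs[-1])   # ['00:05', '00:15', '00:25'] 23:55
```

The bug report is that "status schedule" only shows the hour-zero entries, while the jobs actually run at all 144 times.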

2015-10-19 14:11 GMT-02:00 Ana Emília M. Arruda :

> Hello Wanderlei,
>
> It seems that status schedule is not showing the hourly schedule
> correctly. But the schedule is working as expected, the jobs are being run
> every ten minutes as defined in the schedule resource.
>
> Also, a "show schedule=TenMinutes" shows the values as it is configured.
>
> So, could you open a bug report in mantis for this?
>
> Best regards,
> Ana
>
> On Mon, Oct 19, 2015 at 10:19 AM, Wanderlei Huttel <
> wanderleihut...@gmail.com> wrote:
>
>> Hi Guys
>>
>> In manual says:
>> Date-time-specification determines when the Job is to be run. The
>> specification is a repetition, and as a default Bacula is set to run a job
>> at the beginning of the hour of every hour of every day of every week of
>> every month of every year.
>>
>> I used the schedule example of manual:
>>
>> Schedule {
>>Name = "TenMinutes"
>>Run = Level=Full hourly at 0:05
>>Run = Level=Full hourly at 0:15
>>Run = Level=Full hourly at 0:25
>>Run = Level=Full hourly at 0:35
>>Run = Level=Full hourly at 0:45
>>Run = Level=Full hourly at 0:55
>> }
>>
>>
>> I expected schedule hour by hour, but It doesn't.
>>
>> *status schedule job=Backup_Machine_0001 days=1
>>
>> Scheduled Jobs:
>> Level  Type Pri  Scheduled  Job Name
>> Schedule
>>
>> =
>> Full   Backup10  Mon 19-Oct 00:05   Backup_Machine001
>>  Schedule_TenMinutes
>> Full   Backup10  Mon 19-Oct 00:15   Backup_Machine001
>> Schedule_TenMinutes
>> Full   Backup10  Mon 19-Oct 00:25   Backup_Machine001
>> Schedule_TenMinutes
>> Full   Backup10  Mon 19-Oct 00:35   Backup_Machine001
>> Schedule_TenMinutes
>> Full   Backup10  Mon 19-Oct 00:45   Backup_Machine001
>> Schedule_TenMinutes
>> Full   Backup10  Mon 19-Oct 00:55   Backup_Machine001
>> Schedule_TenMinutes
>> 
>>
>>
>> Is it a bug?
>>
>> Thanks Wanderlei
>>
>>
>>
>>
>


Re: [Bacula-users] backup failure

2015-10-19 Thread Thing
ah,

8><
# This is the backup of the catalog
FileSet {
  Name = "Catalog"
  Include {
Options {
  signature = MD5
}
File = "/var/lib/bacula/.sql"
  }
}

# Client (File Services) to backup
=
8><-

so .sql should be bacula.sql I assume?


permissions... hmm, will take a look

On 20 October 2015 at 10:59, Ana Emília M. Arruda 
wrote:

> Hello Thing,
>
> Please check the FileSet for your BackupCatalog job. The job cannot find
> the MySQL dump (bacula.sql) at your working directory /var/lib/bacula.
>
> Your FileSet must include this file:
>
> FileSet {
>   Name = "CatalogFileSet"
>   Include {
> Options {
>   signature = MD5
> }
> File = /var/lib/bacula/bacula.sql
>   }
> }
>
> Also, the delete_catalog_backup script do not have the necessary
> permissions to run. If you´re running the script as "bacula" user for
> example, you must set the necessary permissions for this user to run the
> script.
>
> | warlocke-dir JobId 62: shell command: run AfterJob
> "/etc/bacula/scripts/delete_catalog_backup"
>   |
> | warlocke-dir JobId 62: Error: Runscript: AfterJob returned non-zero
> status=200. ERR=Permission denied
>
> Best regards,
> Ana
>
>
> On Mon, Oct 19, 2015 at 6:42 PM, Thing  wrote:
>
>> *list joblog jobid=62
>>
>> ++
>> |
>> LogText
>> |
>>
>> ++
>> | warlocke-dir JobId 62: shell command: run BeforeJob
>> "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
>>  |
>> | warlocke-dir JobId 62: Start Backup JobId 62,
>> Job=BackupCatalog.2015-10-19_23.10.00_39
>>   |
>> | warlocke-dir JobId 62: Using Device "FileStorage"
>>|
>> | warlocke-sd JobId 62: Volume "Local-0001" previously written, moving to
>> end of data.
>> |
>> | warlocke-sd JobId 62: Ready to append to end of Volume "Local-0001"
>> size=5908984169
>>  |
>> | warlocke-fd JobId 62:  Could not stat "/var/lib/bacula/.sql":
>> ERR=No such file or directory
>>  |
>> | warlocke-sd JobId 62: Job write elapsed time = 00:00:01, Transfer rate
>> = 0  Bytes/second
>> |
>> | warlocke-dir JobId 62: Bacula warlocke-dir 5.2.6 (21Feb12):
>>   Build OS:   arm-unknown-linux-gnueabihf debian 7.0
>>   JobId:  62
>>   Job:BackupCatalog.2015-10-19_23.10.00_39
>>   Backup Level:   Full
>>   Client: "warlocke-fd" 5.2.6 (21Feb12)
>> arm-unknown-linux-gnueabihf,debian,7.0
>>   FileSet:"Catalog" 2015-09-27 23:10:00
>>   Pool:   "File" (From Job resource)
>>   Catalog:"MyCatalog" (From Client resource)
>>   Storage:"File" (From Job resource)
>>   Scheduled time: 19-Oct-2015 23:10:00
>>   Start time: 19-Oct-2015 23:10:11
>>   End time:   19-Oct-2015 23:10:12
>>   Elapsed time:   1 sec
>>   Priority:   11
>>   FD Files Written:   0
>>   SD Files Written:   0
>>   FD Bytes Written:   0 (0 B)
>>   SD Bytes Written:   0 (0 B)
>>   Rate:   0.0 KB/s
>>   Software Compression:   None
>>   VSS:no
>>   Encryption: no
>>   Accurate:   no
>>   Volume name(s):
>>   Volume Session Id:  31
>>   Volume Session Time:1444076755
>>   Last Volume Bytes:  5,908,984,571 (5.908 GB)
>>   Non-fatal FD errors:1
>>   SD Errors:  0
>>   FD termination status:  OK
>>   SD termination status:  OK
>>   Termination:Backup OK -- with warnings
>>
>>  |
>> | warlocke-dir JobId 62: Begin pruning Jobs older than 6 months .
>>  |
>> | warlocke-dir JobId 62: No Jobs found to prune.
>>   |
>> | warlocke-dir JobId 62: Begin pruning Files.
>>  |
>> | warlocke-dir JobId 62: No Files found to prune.
>>  |
>> | warlocke-dir JobId 62: End auto prune.
>>
>>  |
>> | warlocke-dir JobId 62: shell command: run AfterJob
>> "/etc/bacula/scripts/delete_catalog_backup"
>>   |
>> | warlocke-dir JobId 62: Error: Runscript: AfterJob returned non-zero
>> status=200. ERR=Permission denied
>>  |
>>
>> ++
>>
>> +---+---+-+--+---+--+--+---+
>> | JobId | Name  | StartTime   | Type | Level | JobFiles |
>> JobBytes | JobStatus |
>>
>> 

Re: [Bacula-users] 350TB backup

2015-10-19 Thread Dimitri Maziuk
On 10/19/2015 04:49 PM, Thing wrote:
> Multiple NFS file systems on a NAS array.  500TB total, 350TB used.
> Research data, much of it rarely accessed, after 1 year things like climate
> data up to 30 years old, probably highly compressible.   Suspect multiple
> bacula backup instances to distribute the load? Growth about 30tb a year.

Basically it's what Alan said. With multiple systems you get more
hardware failures and more time spent maintaining your fileset
definitions (they always remember that a new dataset is absolutely
critical and must be backed up after they accidentally delete it).

This is the point where a big NetApp or AmpliStor with a year's worth of
snapshots and an archival solution for those "after 1 year things"
starts to look very cheap at the price.

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu





Re: [Bacula-users] 350TB backup

2015-10-19 Thread Thing
thanks,



On 20 October 2015 at 11:02, Alan Brown  wrote:

> On 19/10/15 22:08, Dimitri Maziuk wrote:
>
> On 10/19/2015 03:53 PM, Thing wrote:
>
> Hi,
>
> Is anyone backing total volumes of this order?  and if so, what sort of
> scaling, design, hardware?
>
> I take it, that's the size of your filesystems? Not the estimated size
> of the backup set (i.e. all cycles in retention period)?
>
>
>
>
> Assuming it is,
>
> Yes. about 700TB and still growing.
>
> Keeping the individual filesets to 1Tb so that tape run isn't excessive.
>
> Largish changer - I'm about to retire a 500-slot Neo8000 with 7 LTO5
> drives in favour of a 120-slot Scalar i500 with 6 LTO6s.
>
> If you don't have enough slots you'll be feeding it multiple times during
> long weekends (we can easily peak at 20 tapes/day if multiple fulls get
> kicked off).
> If you don't have enough drives you won't keep up, let alone cope with the
> inevitable drive failures and 2 day turnaround for a replacement. You
> absolutely must have at least 1 more drive than you think you need to cope
> with the backup load. Apart from anything else it means you can run urgent
> restores without interrupting backups in progress.
>
> Large data safes. You'll need something like a Phoenix FS1903, probably a
> couple (these hold about 800 LTOs apiece) and a strong floor for them to
> sit on.
>
> The tapes, safes and changer should all sit in close proximity in a
> temperature-controlled _clean_ environment, preferably in their own room,
> which is accessed as infrequently as possible. Dust kills drives and human
> skin is one of the worst contaminants because it's greasy with most other
> dust types being abrasive. Consider an air scrubber and clean-room
> "flypaper" sticky sheets on the door threshold.
>
> Large (200Gb+), high performance SSD for spool. Consumer drives become a
> bottleneck.
>
> Something similar (raid1) for database, 500Gb or so.
>
> Postgresql - just works. Mysql doesn't scale this large very well - It
> will work but you'll be constantly fighting with it.
>
> LOTS of ram for the DB box. I have 48Gb in a 5year old machine. It's due
> for an upgrade, but just about anything newer than 5 years with a E5 CPU or
> better will do the job nicely.
>
> 10Gb/s connectivity. You can fudge it with LACP on 1Gb/s but it becomes a
> bottleneck. Ditto on the fileservers themselves.
>
> A decent network switch. Huawei 6800 series are nicely specced (1TB/s
> throughput) and run rings around equivalently priced Cisco/Juniper kit -
> which mostly all use the same Broadcom Trident2/2+/3 chipset anyway.
>
>
> We run 14 month retention on the backup cycle, with a full every 3 months,
> nightly incrementals and 4-weekly differentials. Rapidly changing data in
> smaller sets gets monthly full backups. Thankfully this is science data, as
> financial stuff may need to be retained up to 7 years.
>
> The most common restore is for accidental deletions but we've had to pull
> a few fileset restores over the years - usually because someone cheaped out
> and didn't RAID their box on the basis "its easy to rebuild".
> It never is unless it's a cookie cutter - which they never are after a
> week of operation - and it's less disruptive to change a dead drive in a
> raidset anyway (this can be done hot on Linux systems using mdraid).
>
> There's only ever been one major central store restore and that was a
> runaway rm -rf. Unfortunately one group has a 200TB system which is beyond
> warranty but not being replaced because of budgets. It's being driven hard
> and sooner or later it's going to drop its bundle. I'm not looking forward
> to that day.
>
> Regarding the data safes: People say "Iron Mountain", but backups are not
> archives. You're going to cycle the tapes and retrieving them is much
> easier if they're local. A good fire safe will survive an intense fire for
> 60 minutes and a 10 metre drop (simulating building collapse) with the
> insides not going above 50C, but it's best to site your safes where they're
> least likely to get that kind of experience and pipe the data to them and
> the tape library.
>
> Your single biggest hurdle is getting enough budget for the job.
> Management usually won't spend enough on decent storage systems and they'll
> heavily resist spending on backup systems. "Raid is not backup" usually
> doesn't sink in unless they've been burned a few times.
>
>
>
>
>


[Bacula-users] Bconsole Reload Sometimes Doesn't Load New Job Resource Definitions

2015-10-19 Thread Rich Otero
Hello,

We make a backup product that uses Bacula as the backend. Our systems run
Ubuntu v12.04 (kernel v3.12.17) with Bacula v5.2.12. Our product sets up
the job resource definitions based on the user's choices, it validates the
configuration with `bacula-dir -t`, and then it issues the bconsole
`reload` command to load the new job resource. On a particular customer
system, we find that the new job resource has not been loaded after the
`reload` command has completed. If we give the `run` command, the new job
is not in the list of jobs that can be executed. If we restart bacula-dir,
then the job resource is loaded and it can be run. We haven't been able to
figure out why `reload` isn't doing what we expect, so I hope that you can
offer some advice. I have attached the Director configuration file from the
affected system (bacula-dir.conf) and the file where we put the job
resource definitions (jobs.conf), which is imported by the main
configuration file.

I've found a few relevant points in researching this problem online. The
bconsole documentation seems to discourage you from using `reload`:

"While it is possible to reload the Director's configuration on the fly,
even while jobs are executing, this is a complex operation and not without
side effects. Accordingly, if you have to reload the Director's
configuration while Bacula is running, it is advisable to restart the
Director at the next convenient opportunity."

http://www.bacula.org/5.2.x-manuals/en/console/console/Bacula_Console.html

Indeed, I have found that restarting the Director causes the new job
resource to be loaded, but this is not a good solution for us because we
often have many Bacula jobs running simultaneously; the "convenient" times
when the Director could be restarted will be too infrequent.

I also found a closed bug ticket from a user who describes practically the
same problem:

http://bugs.bacula.org/view.php?id=1573

Thank you in advance for your help.

Regards,
Rich Otero
Director, Technical Support and Professional Services
EditShare
rot...@editshare.com
617-782-0479


jobs.conf
Description: Binary data


bacula-dir.conf
Description: Binary data


Re: [Bacula-users] 350TB backup

2015-10-19 Thread Alan Brown
On 19/10/15 22:08, Dimitri Maziuk wrote:
> On 10/19/2015 03:53 PM, Thing wrote:
>> Hi,
>>
>> Is anyone backing total volumes of this order?  and if so, what sort of
>> scaling, design, hardware?
> I take it, that's the size of your filesystems? Not the estimated size
> of the backup set (i.e. all cycles in retention period)?
>
>

Assuming it is,

Yes, about 700TB and still growing.

Keeping the individual filesets to 1TB so that a tape run isn't excessive.

Largish changer - I'm about to retire a 500-slot Neo8000 with 7 LTO5
drives in favour of a 120-slot Scalar i500 with 6 LTO6s.

If you don't have enough slots you'll be feeding it multiple times
during long weekends (we can easily peak at 20 tapes/day if multiple
fulls get kicked off).
If you don't have enough drives you won't keep up, let alone cope with
the inevitable drive failures and 2 day turnaround for a replacement.
You absolutely must have at least 1 more drive than you think you need
to cope with the backup load. Apart from anything else it means you can
run urgent restores without interrupting backups in progress.

Large data safes. You'll need something like a Phoenix FS1903, probably
a couple (these hold about 800 LTOs apiece) and a strong floor for them
to sit on.

The tapes, safes and changer should all sit in close proximity in a
temperature-controlled _clean_ environment, preferably in their own
room, which is accessed as infrequently as possible. Dust kills drives
and human skin is one of the worst contaminants because it's greasy with
most other dust types being abrasive. Consider an air scrubber and
clean-room "flypaper" sticky sheets on the door threshold.

Large (200GB+), high-performance SSD for spool. Consumer drives become a
bottleneck.

Something similar (RAID1) for the database, 500GB or so.

PostgreSQL - just works. MySQL doesn't scale this large very well; it
will work, but you'll be constantly fighting with it.
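Purely as an illustration (these parameter names are standard postgresql.conf settings, but the values are assumptions scaled to roughly a 48GB box, not figures from the post):

```
shared_buffers = 8GB            # dedicated buffer cache, ~1/6 of RAM
effective_cache_size = 32GB     # what the OS page cache can be expected to hold
work_mem = 64MB                 # per-sort/hash; catalog queries sort heavily
maintenance_work_mem = 1GB      # speeds up index rebuilds and vacuums
```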

LOTS of RAM for the DB box. I have 48GB in a 5-year-old machine. It's due
for an upgrade, but just about anything newer than 5 years with an E5 CPU
or better will do the job nicely.

10Gb/s connectivity. You can fudge it with LACP on 1Gb/s but it becomes
a bottleneck. Ditto on the fileservers themselves.

A decent network switch. Huawei 6800 series are nicely specced (1TB/s
throughput) and run rings around equivalently priced Cisco/Juniper kit -
which mostly all use the same Broadcom Trident2/2+/3 chipset anyway.


We run 14 month retention on the backup cycle, with a full every 3
months, nightly incrementals and 4-weekly differentials. Rapidly
changing data in smaller sets gets monthly full backups. Thankfully this
is science data, as financial stuff may need to be retained up to 7 years.

The most common restore is for accidental deletions but we've had to
pull a few fileset restores over the years - usually because someone
cheaped out and didn't RAID their box on the basis "it's easy to rebuild".
It never is unless it's a cookie cutter - which they never are after a
week of operation - and it's less disruptive to change a dead drive in a
raidset anyway (this can be done hot on Linux systems using mdraid).

There's only ever been one major central store restore and that was a
runaway rm -rf. Unfortunately one group has a 200TB system which is
beyond warranty but not being replaced because of budgets. It's being
driven hard and sooner or later it's going to drop its bundle. I'm not
looking forward to that day.

Regarding the data safes: People say "Iron Mountain", but backups are
not archives. You're going to cycle the tapes and retrieving them is
much easier if they're local. A good fire safe will survive an intense
fire for 60 minutes and a 10 metre drop (simulating building collapse)
with the insides not going above 50C, but it's best to site your safes
where they're least likely to get that kind of experience and pipe the
data to them and the tape library.

Your single biggest hurdle is getting enough budget for the job.
Management usually won't spend enough on decent storage systems and
they'll heavily resist spending on backup systems. "Raid is not backup"
usually doesn't sink in unless they've been burned a few times.





Re: [Bacula-users] backup failure

2015-10-19 Thread Ana Emília M . Arruda
Yes. You have backup of your bacula.sql.

You just need to delete bacula.sql from your host filesystem. The
following should solve this:

chmod +rx /etc/bacula/scripts/delete_catalog_backup

Best regards,
Ana

On Mon, Oct 19, 2015 at 7:24 PM, Thing  wrote:

> 8><
> 62  Full  0        0        Error  19-Oct-15 23:10  BackupCatalog
> 63        0        0        Error  20-Oct-15 09:23  RestoreLocalFiles
> 64        108,785  3.198 G  OK     20-Oct-15 09:45  RestoreLocalFiles
> 65  Full  1        38.68 M  Error  20-Oct-15 11:12  BackupCatalog
>
> 
> *list joblog jobid=65
> +--------------------------------------------------------------------------+
> | LogText                                                                  |
> +--------------------------------------------------------------------------+
> | warlocke-dir JobId 65: shell command: run BeforeJob "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
> | warlocke-dir JobId 65: Start Backup JobId 65, Job=BackupCatalog.2015-10-20_11.11.52_04
> | warlocke-dir JobId 65: Using Device "FileStorage"
> | warlocke-sd JobId 65: Volume "Local-0001" previously written, moving to end of data.
> | warlocke-sd JobId 65: Ready to append to end of Volume "Local-0001" size=5908984571
> | warlocke-sd JobId 65: Job write elapsed time = 00:00:01, Transfer rate = 38.68 M Bytes/second
> | warlocke-dir JobId 65: Bacula warlocke-dir 5.2.6 (21Feb12):
> |   Build OS:               arm-unknown-linux-gnueabihf debian 7.0
> |   JobId:                  65
> |   Job:                    BackupCatalog.2015-10-20_11.11.52_04
> |   Backup Level:           Full
> |   Client:                 "warlocke-fd" 5.2.6 (21Feb12) arm-unknown-linux-gnueabihf,debian,7.0
> |   FileSet:                "Catalog" 2015-10-20 11:11:52
> |   Pool:                   "File" (From Job resource)
> |   Catalog:                "MyCatalog" (From Client resource)
> |   Storage:                "File" (From Job resource)
> |   Scheduled time:         20-Oct-2015 11:11:50
> |   Start time:             20-Oct-2015 11:12:04
> |   End time:               20-Oct-2015 11:12:05
> |   Elapsed time:           1 sec
> |   Priority:               11
> |   FD Files Written:       1
> |   SD Files Written:       1
> |   FD Bytes Written:       38,689,048 (38.68 MB)
> |   SD Bytes Written:       38,689,163 (38.68 MB)
> |   Rate:                   38689.0 KB/s
> |   Software Compression:   None
> |   VSS:                    no
> |   Encryption:             no
> |   Accurate:               no
> |   Volume name(s):         Local-0001
> |   Volume Session Id:      33
> |   Volume Session Time:    1444076755
> |   Last Volume Bytes:      5,947,702,852 (5.947 GB)
> |   Non-fatal FD errors:    0
> |   SD Errors:              0
> |   FD termination status:  OK
> |   SD termination status:  OK
> |   Termination:            Backup OK
> | warlocke-dir JobId 65: Begin pruning Jobs older than 6 months.
> | warlocke-dir JobId 65: No Jobs found to prune.
> | warlocke-dir JobId 65: Begin pruning Files.
> | warlocke-dir JobId 65: No Files found to prune.
> | warlocke-dir JobId 65: End auto prune.
> | warlocke-dir JobId 65: shell command: run AfterJob "/etc/bacula/scripts/delete_catalog_backup"
> | warlocke-dir JobId 65: Error: Runscript: AfterJob returned non-zero status=200. ERR=Permission denied
> +--------------------------------------------------------------------------+
>
> +-------+---------------+---------------------+------+-------+----------+------------+-----------+
> | JobId | Name          | StartTime           | Type | Level | JobFiles | JobBytes   | JobStatus |
> +-------+---------------+---------------------+------+-------+----------+------------+-----------+
> |    65 | BackupCatalog | 2015-10-20 11:12:04 | B    | F     |        1 | 38,689,048 | T         |
> +-------+---------------+---------------------+------+-------+----------+------------+-----------+
> *
> 8><-
>
> This looks better?
>
>
>
> On 20 October 2015 at 11:11, Thing  wrote:
>
>> ah,
>>
>> 8><
>> # This is the backup of the catalog
>> FileSet {
>>   Name = "Catalog"
>>   Include {
>> Options {
>>   signature = MD5
>> }
>> File = "/var/lib/bacula/.sql"
>>   }
>> }
>>
>> # Client (File Services) to backup
>> =
>> 8><-
>>
>> so .sql should be bacula.sql I assume?
>>
>>
>> Permissions... hmm, will take a look.
>>
>> On 20 October 2015 at 10:59, Ana Emília M. Arruda > > wrote:
>>
>>> Hello Thing,
>>>
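The AfterJob error quoted above (Runscript returned status=200, ERR=Permission denied) usually means the user the Director runs as cannot execute /etc/bacula/scripts/delete_catalog_backup. A minimal sketch of the fix, demonstrated on a temporary file so it can be run safely anywhere; on the real server you would point SCRIPT at the actual script path (the `stat -c` flag assumes GNU coreutils):

```shell
# Demonstrate the permission fix on a scratch file; substitute
# /etc/bacula/scripts/delete_catalog_backup on a real Director host.
SCRIPT=$(mktemp)
chmod 644 "$SCRIPT"        # reproduce the broken state: readable but not executable
ls -l "$SCRIPT"            # mode shows -rw-r--r--
chmod 755 "$SCRIPT"        # the fix: restore the execute bit
stat -c '%a' "$SCRIPT"     # prints 755
rm -f "$SCRIPT"
```

On a real install, also check that the script is owned by (or executable by) the bacula user, since that is who the Director runs the Runscript as on most packaged installs.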

[Bacula-users] Bacula schedule

2015-10-19 Thread More, Ankush
Team,

Below is the schedule configured in the Director configuration file. Bacula version 7.x.
Backups are not running as per the schedule: I notice the full and differential backups
running on the same day.
Is there something wrong with the schedule?

Schedule {
  Name = "Billable4"
  Run = Full 1st-3rd sun at 15:00
  Run = Incremental mon-thu at 22:00
  Run = Differential 2nd-4th sun at 15:00
}


Thank you,
Ankush More

--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula schedule

2015-10-19 Thread Ana Emília M . Arruda
Hello Ankush,

On Mon, Oct 19, 2015 at 1:22 PM, More, Ankush 
wrote:

> Team,
>
>
>
> Below is schedule configure in dir file. Bacula version 7.x
>
> Backup is not happening as per  schedule , Notice full and differential
> backup happing same day.
>
> is there wrong in schedule?
>
>
>
> Schedule {
>
>   Name = "Billable4"
>
>
> Run = Full 1st-3rd sun at 15:00
>
>
This will run on the 1st, 2nd, and 3rd Sundays.


>   Run = Incremental mon-thu at 22:00
>
>
> Run = Differential 2nd-4th sun at 15:00
>
>
This will run on the 2nd, 3rd, and 4th Sundays.

So the two runs coincide on the 2nd and 3rd Sundays. I suppose you want them to
alternate instead:

Run = Full 1st,3rd sun at 15:00
Run = Differential 2nd,4th sun at 15:00

Note that in a month with a 5th Sunday, no job will run on that day. If you want
the full to run on a 5th Sunday as well:

Run = Full 1st,3rd,5th sun at 15:00

The values are comma-separated (a hyphen, as in 1st-3rd, specifies a range).
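To see the overlap concretely, here is a small Python sketch (not part of Bacula; the helper name `nth_sundays` is invented for illustration) that computes which days of October 2015, the month in this thread, the `1st-3rd sun` and `2nd-4th sun` expressions select:

```python
import calendar

def nth_sundays(year, month, ordinals):
    """Days of the month falling on the given ordinal Sundays
    (1 = 1st Sunday, ... 5 = 5th), mirroring Bacula's 1st..5th keywords."""
    sundays = [week[calendar.SUNDAY]
               for week in calendar.monthcalendar(year, month)
               if week[calendar.SUNDAY] != 0]
    return [sundays[i - 1] for i in ordinals if i <= len(sundays)]

# October 2015: Sundays fall on the 4th, 11th, 18th and 25th.
full_days = nth_sundays(2015, 10, [1, 2, 3])   # "1st-3rd sun" -> [4, 11, 18]
diff_days = nth_sundays(2015, 10, [2, 3, 4])   # "2nd-4th sun" -> [11, 18, 25]
print(sorted(set(full_days) & set(diff_days)))  # -> [11, 18]: both jobs fire on these days
```

With the corrected `1st,3rd` / `2nd,4th` split, the two selections become disjoint and the jobs no longer collide.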


>
>
>
>
> Thank you,
>
> Ankush More
>

Best regards,
Ana


>
>
>
> --
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
>
--
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users