Re: [Bacula-users] LTO tape performances, again...

2024-01-25 Thread Pierre Bernhardt

On 25.01.24 at 10:06, Marco Gaiarin wrote:



2) checked disk performance (data comes only from local disk); I currently have
  3 servers, some perform better, some worse, but the best one has pretty
decent disk read performance, at least 200 MB/s on random access (1500 MB/s
sequential).


Jim Pollard asked me in a private email about controllers: I hadn't specified,
sorry, but the LTO units are connected to a dedicated SAS controller, not the
one for the disks.


I have also registered lower than expected write performance.
My LTO-6 drive should handle 160 MB/s of uncompressible random data.
However, after writing a sequence, Bacula mostly reports a transfer speed
of roughly 80 MB/s.
I haven't investigated yet, but normally it should go faster. The job is spooled
to /tmp and swap is not in use, so the transfer should be much faster.

My suggestion now is:

Create a big random-data file, sized like a spool file, in /tmp.
"Spool" it with dd from /tmp to /dev/null.
Spool from /dev/random to tape.
Spool from /tmp to tape.

Any suggestions about bs usage or something else?
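A rough sketch of those four tests (untested; the device node /dev/nst0, the ~20 GB size, and the 512k block size are assumptions, and bs should match the Storage Daemon's maximum block size; I would use /dev/urandom rather than /dev/random, which may throttle):

  dd if=/dev/urandom of=/tmp/spooltest.dat bs=1M count=20480   # create a big random test file
  dd if=/tmp/spooltest.dat of=/dev/null bs=512k                # pure disk read speed
  dd if=/dev/urandom of=/dev/nst0 bs=512k count=40960          # random data straight to tape
  dd if=/tmp/spooltest.dat of=/dev/nst0 bs=512k                # disk-to-tape speed

dd prints the achieved throughput after each run, which can be compared with the drive's nominal 160 MB/s.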

Pierre







Re: [Bacula-users] Autochangers and unload timeout...

2024-01-25 Thread Pierre Bernhardt

On 24.01.24 at 18:13, Marco Gaiarin wrote:

  24-Jan 17:22 cnpve3-sd JobId 16234: [SI0202] End of Volume "AAJ661L9" at 333:49131 on device "LTO9Storage0" (/dev/nst0). Write of 524288 bytes got -1.
  24-Jan 17:22 cnpve3-sd JobId 16234: Re-read of last block succeeded.
  24-Jan 17:22 cnpve3-sd JobId 16234: End of medium on Volume "AAJ661L9" Bytes=17,846,022,566,912 Blocks=34,038,588 at 24-Jan-2024 17:22.
  24-Jan 17:22 cnpve3-sd JobId 16234: 3307 Issuing autochanger "unload Volume AAJ661L9, Slot 2, Drive 0" command.
  24-Jan 17:28 cnpve3-sd JobId 16234: 3995 Bad autochanger "unload Volume AAJ661L9, Slot 2, Drive 0": ERR=Child died from signal 15: Termination Results=Program killed by Bacula (timeout)
  24-Jan 17:28 cnpve3-sd JobId 16234: 3304 Issuing autochanger "load Volume AAJ660L9, Slot 1, Drive 0" command.
  24-Jan 17:29 cnpve3-sd JobId 16234: 3305 Autochanger "load Volume AAJ660L9, Slot 1, Drive 0", status is OK.

So, unload timeout, but subsequent load command works as expected (and
backup are continuing...).

In mtx-changer.conf you can set debug_log=1 to create an mtx.log in the bacula
home directory, which should be /var/lib/bacula.
I'd set debug_level=100 to log everything.

Maybe the offline time is too low. I simply give it 900 seconds to protect
myself from failures when the drive needs more time than expected, although it
almost always needs less than 60 seconds.

offline_sleep should be 1.

By the way, I have been using the mtx-changer script untouched for years, and
I found that in my copy these parameters are not used in the waiting loop:

# The purpose of this function to wait a maximum
#   time for the drive. It will
#   return as soon as the drive is ready, or after
#   waiting a maximum of 900 seconds.
# Note, this is very system dependent, so if you are
#   not running on Linux, you will probably need to
#   re-write it, or at least change the grep target.
#   We've attempted to get the appropriate OS grep targets
#   in the code at the top of this script.
#
wait_for_drive() {
  i=0
  while [ $i -le 900 ]; do  # Wait max 900 seconds
if mt -f $1 status 2>&1 | grep "${ready}" >/dev/null 2>&1; then
  stinit 2>/dev/null >/dev/null
  break
fi
debug $dbglvl "Device $1 - not ready, retrying..."
sleep 1
i=`expr $i + 1`
  done
}

By the way, I'm no longer sure this is still the state of the distributed
mtx-changer script. Normally I would expect something like the following in
the while statement:

   while [ ${offline_sleep} -eq 1 ] && [ $i -le ${offline_time} ]; do  # Wait max ${offline_time} seconds

(untested)
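Expanded into the full function, that might look like this (still untested; it assumes mtx-changer.conf has been sourced so that ${offline_time} and ${offline_sleep} are set, and ${ready} and the debug helper come from the top of the script, as in the version above):

wait_for_drive() {
  i=0
  # honor the configured values instead of the hard-coded 900 seconds
  while [ ${offline_sleep} -eq 1 ] && [ $i -le ${offline_time} ]; do
    if mt -f $1 status 2>&1 | grep "${ready}" >/dev/null 2>&1; then
      break
    fi
    debug $dbglvl "Device $1 - not ready, retrying..."
    sleep 1
    i=`expr $i + 1`
  done
}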

Cheers,
Pierre






Re: [Bacula-users] How to check a written job by re-reading it

2024-01-23 Thread Pierre Bernhardt

On 23.01.24 at 15:18, Bill Arlofski via Bacula-users wrote:

Yes, a Copy job will need to read the backup data and by doing so, Bacula will 
verify the signature (checksum) of each file read. You would be notified in the 
job of a failure to read back a file with the correct checksum.

OK, an additional signature check is good. By the way, I thought the data from
the job would simply be copied without further checking, and that smartctl
would be used before eject to show any recorded problems.


But, as the name implies, you will be copying the data to another storage
location, and hence using some additional space - even if it is only a
temporary scratch space for your copies to be written to.

That's the reason I would ideally like to write them to /dev/null, and
hopefully some configuration can be used to do that.


Alternately, you can run a Verify (level=data) job which reads the data from the
backup media, also verifying the checksum of every file read - without actually
writing the data to a second storage location.

Yes, if there is no difference compared to a copy to /dev/null, then this is
the goal for me.
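For reference, a minimal sketch of how I would start such a check from bconsole (assuming a Verify job, hypothetically named file_home_verify, is defined, and taking JobId 53121 as an example of the backup to re-read):

  run job=file_home_verify jobid=53121 level=data yes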


I have written a script which (just for testing purposes), when called from a
Backup Job's RunScript (RunsWhen = After), automatically restores the entire
job and also runs all three Verify levels against the job. You can pick the
parts of the script you need (maybe just the Verify level=data) and
remove/comment out the rest, or just pick and choose what you need.



I am attaching the script `AutoRestoreAndVerify.sh` which I use in my 
environment. Please edit the variables at the top and read through the 
instructions at the top of the script to understand how to implement this 
script.

Thank you, I will check carefully what I can use.

Pierre






Re: [Bacula-users] How to show the diff of two backups?

2024-01-23 Thread Pierre Bernhardt

On 23.01.24 at 10:46, Pedro Oliveira wrote:

Hi Pierre

Hello Pedro,

I also answered to the list. It looks like your answer didn't go there.


In Bacula, Verify Jobs perform three important tasks to ensure data integrity:

Verify Volume Data Integrity
Verify Volume Metadata Integrity
Verify File System Integrity

Verify Volume Data Integrity

Verifying the data written to a Volume is done using level=data for a Bacula
Verify Job. Such a Job will read all data records from a Volume and, for each
object encountered, will calculate the checksum and record the size. That
information is then compared against the metadata as stored on the Volume.
Doing so, the Storage Daemon is able to detect data corruption on the storage
media.

In this case maybe I can use it instead of my other suggestion, which was to
simply use a copy job to /dev/null. Maybe it is a little overkill for a
simple read test of whether the tape is still readable. But better this than
nothing ;-)


Verify Volume Metadata Integrity

Bacula allows comparing the metadata read from the media against what is
stored in the catalog. With Bacula, this comparison is done on a Job-by-Job
basis; other backup systems often verify complete volumes.

This could be used if it were possible to compare the metadata read from the
media against a different JobId. But I think this is not possible at the
moment.



In terms of Job to Job Verification, Bacula does not have that possibility 
without building some SQL queries.

Is there already a way to create something like a virtual full backup from the
JobIds of already created backups, one that is built only in the database from
the already stored data and metadata?


Nevertheless, it's a nice feature that can be very useful. Let me discuss this
possibility internally with the Bacula Support Team; I will update you soon
with more details.

Thanks for that.

Pierre





[Bacula-users] How to check a written job by re-reading it

2024-01-23 Thread Pierre Bernhardt

Hello,

after a RAID disaster that required a full restore from the last full backup
on tape, which has unreadable blocks that even block the whole tape drive, I
want to check the written jobs.

A good idea is to create a copy job, so that I have a copy of the
written data and the tape is also checked by being read.

However, I only want to test the tape job, so it should
be possible to write the copy data simply to /dev/null.

Is it possible to use a FIFO device? Is there another
way to read the tape regularly?
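Something like the following Device resource might do it (an untested sketch; the resource name and Media Type are made up, and I am only assuming that the SD accepts /dev/null behind a FIFO-type device):

  Device {
    Name = NullDevice            # hypothetical name
    Media Type = Null
    Device Type = Fifo
    Archive Device = /dev/null   # everything written is discarded
    LabelMedia = yes
    Random Access = no
    AutomaticMount = no
    RemovableMedia = no
    AlwaysOpen = no
  }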

Pierre





[Bacula-users] How to show the diff of two backups?

2024-01-23 Thread Pierre Bernhardt

Hello,

because of a recovery based on the last backup, I found that the full
backup tape is corrupt.
The last backup is based on this full and one diff job (bu1 = f1 + d11).
The day before, I had created a backup based on a different full job, a diff,
and several inc jobs (bu0 = f0 + d03 + i031 + i032 + i033).

It was possible to restore the data by using the one-day-older backup bu0,
made the day before the corrupt full tape was written, plus the newest
differences by restoring from diff job d11. So almost everything could be
recovered.

But there is a small gap of one day between bu0 and bu1 which could not be
recovered, because those files are only stored on f1.

Before I waste money sending the corrupt tape to a data recovery
company, I want to find the files which are only stored on the f1 tape.

Is there a way to use Bacula afterwards to give me a list of differences
based on JobIds? I'm able to use psql to do this, but maybe it is easier
with internal features, like what the Verify jobs can do.
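If it comes down to psql, a sketch of the query I have in mind (untested; JobIds 100 and 90 stand in for the f1 job and the other job, and the schema is the pre-11 one with a separate filename table):

  SELECT p.path, fn.name
    FROM file f
    JOIN path p ON p.pathid = f.pathid
    JOIN filename fn ON fn.filenameid = f.filenameid
   WHERE f.jobid = 100
     AND NOT EXISTS (SELECT 1
                       FROM file f2
                      WHERE f2.jobid = 90
                        AND f2.pathid = f.pathid
                        AND f2.filenameid = f.filenameid);

This lists the files recorded for the f1 job which have no entry in the other job.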

Pierre





Re: [Bacula-users] How to save /dev tree and base directories only?

2024-01-22 Thread Pierre Bernhardt

On 22.01.24 at 16:41, Martin Simmons wrote:

On Mon, 22 Jan 2024 10:44:49 +0100, Pierre Bernhardt said:

Are you using udev (see the output of "df /dev")?  If so, then I would expect
that to recreate the contents of /dev so no backup is wanted.

Debian Buster and Bullseye should have it. However, the system did not come
up until I recreated the inodes manually. Maybe only some of them are essential
for booting the system, so not all need to be recreated, but I think it is no
problem to restore them from a backup.

Here is a list of the files found on a fresh Bullseye installed with
debootstrap, before its first boot.

root@backup:/media/file# ls dev
console  fd  full  null  ptmx  pts  random  shm  stderr  stdin  stdout  tty  
urandom  zero
root@backup:/media/file# ls -lR dev
dev:
total 8
crw-rw-rw- 1 root root 5, 1 Jan 22 22:25 console
lrwxrwxrwx 1 root root   13 Jan 22 22:25 fd -> /proc/self/fd
crw-rw-rw- 1 root root 1, 7 Jan 22 22:25 full
crw-rw-rw- 1 root root 1, 3 Jan 22 22:25 null
crw-rw-rw- 1 root root 5, 2 Jan 22 22:25 ptmx
drwxr-xr-x 2 root root 4096 Jan 22 22:25 pts
crw-rw-rw- 1 root root 1, 8 Jan 22 22:25 random
drwxr-xr-x 2 root root 4096 Jan 22 22:25 shm
lrwxrwxrwx 1 root root   15 Jan 22 22:25 stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root   15 Jan 22 22:25 stdin -> /proc/self/fd/0
lrwxrwxrwx 1 root root   15 Jan 22 22:25 stdout -> /proc/self/fd/1
crw-rw-rw- 1 root root 5, 0 Jan 22 22:25 tty
crw-rw-rw- 1 root root 1, 9 Jan 22 22:25 urandom
crw-rw-rw- 1 root root 1, 5 Jan 22 22:25 zero

dev/pts:
total 0

dev/shm:
total 0

I think this is the minimum set of /dev inodes needed to start the system.
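For the record, an untested sketch of recreating exactly these nodes by hand, run from the root of a restored filesystem (the major/minor numbers are taken from the listing above):

  mknod -m 666 dev/console c 5 1
  mknod -m 666 dev/full    c 1 7
  mknod -m 666 dev/null    c 1 3
  mknod -m 666 dev/ptmx    c 5 2
  mknod -m 666 dev/random  c 1 8
  mknod -m 666 dev/tty     c 5 0
  mknod -m 666 dev/urandom c 1 9
  mknod -m 666 dev/zero    c 1 5
  mkdir -m 755 dev/pts dev/shm
  ln -s /proc/self/fd   dev/fd
  ln -s /proc/self/fd/0 dev/stdin
  ln -s /proc/self/fd/1 dev/stdout
  ln -s /proc/self/fd/2 dev/stderr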

Pierre






Re: [Bacula-users] How to save /dev tree and base directories only?

2024-01-22 Thread Pierre Bernhardt

On 22.01.24 at 16:54, Pedro Oliveira wrote:

Some directories are irrelevant in a backup. For example:

- /dev/*
- /proc/*
- /sys/*
- /tmp/*
- *lost+found

They are created during boot and are specific to your
devices and/or your running processes.

Whether or not it is relevant to back them up - your kernel will recreate those
directories when booting from a restored backup - you will probably save a
little time if you do not back them up.


No, it won't. Without those directories, the restored system will
not come up at first boot unless they are recreated manually, because mount
needs an existing directory as a mount point.
So it is better to back up the directories without their content, so that I do
not have to recreate them and look up which permissions they should have.
This will make my life easier (at the moment I must restore a couple of
systems because of a corrupt RAID array; each additional manual step
that could have been prevented is a step too many).

Pierre





[Bacula-users] How to save /dev tree and base directories only?

2024-01-22 Thread Pierre Bernhardt

Hello,

I had to do a full recovery of my nodes, so I had to restore the root
filesystem.
In the past I had excluded /dev /proc /sys /tmp, for reasons I no longer
remember, so they were not restored, but they are needed to boot up.
I could fix it by taking a base-installed system, making a tar copy
of /dev, and running mkdir for /proc, /sys, and /tmp with their permissions.
After that the systems came up successfully. But now I want to modify
my jobs so that the files and inodes in /dev are also saved in
the backups.

For /proc, /sys, and /tmp it should be easy to append /* to the paths in the
Exclude rule, as sketched below.
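An untested sketch of what I mean (the FileSet name is made up; with onefs = yes, virtual filesystems like /proc and /sys are not descended into anyway, so only the mount-point directories themselves are saved):

  FileSet {
    Name = "root Set"        # hypothetical name
    Include {
      Options {
        signature = SHA1
        onefs = yes          # do not cross filesystem boundaries
      }
      File = /
    }
    Exclude {
      File = /proc/*         # keep the directory, drop the contents
      File = /sys/*
      File = /tmp/*
    }
  }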

But I'm unsure what to do with /dev. It may be that Bacula will
not save the inodes; instead it might try to save their content, which is not a
good idea because their content would be endless. Before I run some tests I
want to simply ask someone here who knows. Maybe some special settings are
needed to avoid problems.

Cheers,
Pierre





[Bacula-users] Solved: TLS Problem after create new certificates with error ...OpenSSL 1.1, enforce basicConstraints = CA:true in the certificate...

2023-01-23 Thread Pierre Bernhardt



Hello,

the problem was really easy to fix, but the messages sent me down a
completely different, not really relevant or helpful path:

On the server I use an @tls.conf include line to load the certificate files.
On the client, I found, this is configured directly, and twice instead of
only once. I had replaced the first cafile declaration but
not the second one, which still pointed to the old, outdated CA file.

And this resulted in the misleading message.

After fixing the cafile reference in the second declaration as well,
the backup could run and finished successfully.

Thanks to all who warmed up their heads over this.

Cheers,





Re: [Bacula-users] TLS Problem after create new certificates with error ...OpenSSL 1.1, enforce basicConstraints = CA:true in the certificate...

2023-01-23 Thread Pierre Bernhardt

On 23.01.23 at 13:31, Pierre Bernhardt wrote:

My self-signed root CA and my certs had expired.

So I created a new CA key, a self-signed CA cert, and new
certs for the Bacula director and all clients.


...

I only replaced the TLS certs and installed a new CA cert.


I double-checked the installed CA crt file by comparing the
md5 sum, and also checked the client and backup certs
against the CA crt file without finding a problem, using a command
like:
openssl verify -verbose -CAfile RootCert.pem Intermediate.pem

All related files showed an OK result.
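Since the error message complains about basicConstraints, the new CA cert can also be inspected directly for that extension (a sketch, using RootCert.pem as above):

  openssl x509 -in RootCert.pem -noout -text | grep -A1 'Basic Constraints'

For a proper CA certificate this should print CA:TRUE.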

Is there a way I can check the communication itself with openssl,
like openssl s_client does?

Ah, and I forgot to say: I still use an older Debian release with
bacula 9.4.2 on the client and server side. An upgrade is planned
within the next two months.

Cheers,





[Bacula-users] TLS Problem after create new certificates with error ...OpenSSL 1.1, enforce basicConstraints = CA:true in the certificate...

2023-01-23 Thread Pierre Bernhardt

My self-signed root CA and my certs had expired.

So I created a new CA key, a self-signed CA cert, and new
certs for the Bacula director and all clients.

The issue is that the message below appeared, so I created a
new CA cert, so that
basicConstraints = CA:true
is also contained in the CA cert.

So I installed the new CA certs by copying them to the director
and clients.

The tests on the director server using
status dir
status file=backup-fd
status storage
status file=client-fd

run well. I can also access the director again
with bconsole and bat, without issues or error messages.

The backup jobs for the backup server itself also run
without a problem.
But the jobs for the client abort again with the message

...
23-Jan 12:35 client-fd JobId 65114: Error: tls.c:89 CA certificate is self signed. With OpenSSL 1.1, enforce basicConstraints = CA:true in the certificate creation to avoid this issue
23-Jan 12:34 backup-sd JobId 65114: Error: openssl.c:68 Connect failure: ERR=error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca
23-Jan 12:35 client-fd JobId 65114: Error: tls.c:96 Error with certificate at depth: 1, issuer = /C=DE/O=Me, subject = /C=DE/O=Me, ERR=19:self signed certificate in certificate chain
23-Jan 12:34 backup-sd JobId 65114: Fatal error: bnet.c:75 TLS Negotiation failed.
23-Jan 12:34 backup-sd JobId 65114: Fatal error: TLS negotiation failed with FD at "192.168.2.207:36572"
23-Jan 12:34 backup-sd JobId 65114: Fatal error: Incorrect authorization key from File daemon at client rejected.
For help, please see: http://www.bacula.org/rel-manual/en/problems/Bacula_Frequently_Asked_Que.html
23-Jan 12:34 backup-sd JobId 65114: Security Alert: Unable to authenticate File daemon
23-Jan 12:35 client-fd JobId 65114: Error: openssl.c:68 Connect failure: ERR=error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed

23-Jan 12:35 client-fd JobId 65114: Fatal error: TLS negotiation failed.
23-Jan 12:34 backup-dir JobId 65114: Fatal error: Bad response to Storage command: wanted 2000 OK storage
, got 2902 Bad storage
...

I think there is no problem between the director and the client FD, but
between the storage daemon and the client.


Any ideas what is happening?

I only replaced the TLS certs and installed a new CA cert.

Cheers,






[Bacula-users] Copy Job of a bunch of backups?

2022-05-27 Thread Pierre Bernhardt



Hello,

I create a full backup on each 1st Sunday, diff backups on the remaining
Sundays, and incremental backups every day.

I would like to create a copy after the first backup of a month has been made.
The data should be copied from the last backup, whatever level it was made at.

So for the moment I run a restore to a point before a given date/time and
create a full backup of that restore to a special tape.

Is there a "virtual copy" I can configure as a single copy job, based on the
backups before a specified time, which achieves the same without having to
restore them first?
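Something in this direction might be possible with a Copy job whose selection is an SQL query (an untested sketch; the job name, pool names, and query are made up, and the copy destination comes from the source pool's Next Pool):

  Job {
    Name = "CopyLastToTape"      # hypothetical
    Type = Copy
    Level = Full
    Pool = Daily                 # source pool; its Next Pool is the copy target
    Selection Type = SQLQuery
    Selection Pattern = "SELECT MAX(jobid) FROM job WHERE type='B' AND jobstatus='T' AND name='file_home'"
    # remaining required Job directives (Client, FileSet, Storage, Messages, ...) omitted
  }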


Thx,
Pierre





Re: [Bacula-users] Waiting for a volume already in the drive

2022-05-27 Thread Pierre Bernhardt

Hello,

it looks like an issue which I have had from time to time.

I tried to unload the media and reload it. However, it was a really rare
situation, and I could not trigger it with planned actions.

Maybe it is a good idea to also check the status of the tape drive itself, e.g. with
tapeinfo -f 

Cheers,
Pierre






[Bacula-users] Found: IO waiting bacula-sd process, Re: Migration from tape to disk is hanging

2021-07-18 Thread Pierre Bernhardt
On 16.07.21 at 13:42, Pierre Bernhardt wrote:
It was hard to find the reason, but it looks like, at one special position on
one tape, the bacula-sd process waits for non-interruptible IO from the drive.
The interesting point is that it is the penultimate job on the tape.

The situation was so bad that if bacula-sd was restarted, the process
was left behind as a zombie (state Z), so only a full reboot of the system
fixed the issue.

I migrated all other jobs from this tape to another one (via a disk migration
step), so only the problem job is still active on this tape. I hope I won't
need this weekly backup job, so in one year I will put the tape in the trash
bin.

So the reason is found, and I don't want to chase down exactly why the
bacula-sd process never comes back. But it is reproducible, so I can
investigate in more detail if needed.

PS: Normally, I think a problem should never put the system into such a state.
The process should die by itself in such a situation, e.g. via a timeout or
something similar, and leave the system intact.

Cheers,
Pierre





[Bacula-users] Migration from tape to disk is hanging

2021-07-18 Thread Pierre Bernhardt
Hello,

any idea how I can get more information about why my job is hanging? It looks
like it has started, but not all of the data is copied to the disk.
Both migration jobs have been running for hours.

I also tried data spooling to /tmp, but then the spool file gets
filled and the data is never sent to the disk (the disk file does
not grow and its mtime does not change).

For the latest test I used setdebug level=99 trace=1 to raise the
debug level for the SD and DIR, but I did not find any useful information.

Maybe there is another option which gives more details?

Cheers,
Pierre





[Bacula-users] Problem: schedules disabled but jobs still start

2021-07-18 Thread Pierre Bernhardt
Hello,

I have Bacula on Debian Buster. The configured jobs start at the expected
times although the schedules are manually disabled, and in bconsole "status
dir" shows no scheduled jobs.

Only if I manually disable the jobs one by one do they not start.

I don't know why they start after the schedules have been disabled.

Is this normal behavior?

Cheers,
Pierre





Re: [Bacula-users] Some files are not backed up because more than 1 filename exists in the filename table?

2021-06-21 Thread Pierre Bernhardt
By the way, as already seen in another post, Bacula 11 merges the
filename table into the file table, so for new backups the problem may
already be fixed in that version!?

On 14.06.21 at 11:43, Pierre Bernhardt wrote:
> On 14.06.21 at 09:15, Pierre Bernhardt wrote:
>> I have a bunch of other filenames which are duplicated in filename
>> table found. I found also some duplicate names in path table.
>> Did I have forgotten any thing?
>>
>> I fixed it by python script one step by next but this should not be
>> the best solution.

First of all, the problem may already have existed since I recovered my
database with a bscan of all my tapes a very long time ago. I did not
notice it because I had not found a problem before, and
I had never checked the restores in detail. Files which have
a duplicate row in the filename table, and hence in the file table, will be
restored, but with the wrong permissions, user:group, and timestamp.

Here are the steps to identify and fix the issue:

select count(name) as amount, name from filename group by name having count(name) > 1;

All names found here have exactly 2 rows in the filename table, and
I want to remove the one with the older filenameid. This means I must
check that, in the file table, for each job the newer filenameid exists and the
older one can be removed.
There are two solutions:

1. update all older filenameids in the file table to the new filenameid
2. remove all duplicate entries in the file table
3. remove the older filenameid entries from the filename table.

This is more complex, because file entries could exist which use the
same filenameid but different paths and/or fileindexes.

1. remove all older-filenameid rows from the file table where the
newer-filenameid row is a duplicate
2. update all older filenameids in the file table which are not duplicates
3. remove the older filenameid entries from the filename table

This is better because it does not produce duplicate entries which would have
to be identified before removing them.

By the way, I'm a novice at producing such SQL queries, so it is better to use
the select statement above and use Python to produce a delete clause
which removes all of them :-/ (a direct SQL sketch follows below)
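For completeness, an untested sketch of those three steps in plain PostgreSQL (dump the catalog first; it assumes exactly two rows per duplicated name, as found above):

  -- map each duplicated name to its older and newer filenameid
  CREATE TEMP TABLE dup AS
    SELECT MIN(filenameid) AS oldid, MAX(filenameid) AS newid
      FROM filename GROUP BY name HAVING COUNT(*) > 1;

  -- 1. remove old-id file rows that would collide with an existing new-id row
  DELETE FROM file f
   USING dup
   WHERE f.filenameid = dup.oldid
     AND EXISTS (SELECT 1 FROM file f2
                  WHERE f2.filenameid = dup.newid
                    AND f2.jobid = f.jobid
                    AND f2.pathid = f.pathid
                    AND f2.fileindex = f.fileindex);

  -- 2. repoint the remaining old-id file rows to the new id
  UPDATE file f SET filenameid = dup.newid
    FROM dup WHERE f.filenameid = dup.oldid;

  -- 3. drop the old filename rows
  DELETE FROM filename fn USING dup WHERE fn.filenameid = dup.oldid;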

Cheers,
Pierre





Re: [Bacula-users] Solved: Try to use DiskToCatalog and VolumeToCatalog checks but got error Unimplemented ...

2021-06-14 Thread Pierre Bernhardt
On 14.06.21 at 16:32, Martin Simmons wrote:
> Hi Pierre,
> 
> I CC'd the list in my reply, but maybe your email filters duplicate
> message-ids?  Also, check the options in
> https://lists.sourceforge.net/lists/options/bacula-users to see if you receive
> your own posts to the list.
Thanks. Found it; there is another option there which I have to switch.
> 
> The Job rule overwrites JobDefs rules from the verify job for me, so that is a
> mystery.  Which fileset is printed by "show job=file_home_verify"?
  --> FileSet: name=home Set IgnoreFileSetChanges=0

So it looks like that is ok.

> What are the dot files that it misses?  Your fileset definition looks like it
> will backup everything, including the dot files.
I've already found the issue: duplicate entries in the filename table for the
filename cause this problem.

I opened another thread for discussion.

Cheers,
Pierre





Re: [Bacula-users] Some files are not backed up because more than 1 filename exists in the filename table?

2021-06-14 Thread Pierre Bernhardt
On 14.06.21 at 09:15, Pierre Bernhardt wrote:
> I have a bunch of other filenames which are duplicated in filename
> table found. I found also some duplicate names in path table.
> Did I have forgotten any thing?
> 
> I fixed it by python script one step by next but this should not be
> the best solution.

I'm sorry for those garbled sentences. I fixed it only for two filenames.
But there are many more filename duplicates in the table, and also in the path
table, which are not fixed yet.
In my opinion this should not be a problem, but it looks like the Bacula
code does not make sure that only one line, maybe the latest, is used
to fill the file table at backup time, so two lines are created.
This looks like a bug. Where does it come from? No idea. I have used this
catalog for a long time (more than 10 years; my oldest job is from 2008).

Cheers,
Pierre





[Bacula-users] Some files are not backed up because more than 1 filename exists in the filename table?

2021-06-14 Thread Pierre Bernhardt
Hello,

I've found through a verify job that some files appear not to have been backed
up correctly.

For example, my non-empty /home/pierre/.screenrc is shown in the bconsole
restore menu as an empty file owned by root:root instead of pierre:pierre,
with a 0 unix epoch timestamp (1.1.70 1:00 am).

I also found a warning message:

 Warning: sql_get.c:186 More than one Filename!: 2 for file: .screenrc

So I checked the filename table and found two filenameids for the same filename.
I also checked the file table and found two sets of files, one related to the
one filenameid and the other to the other filenameid, but pathid, jobid,
fileindex and all other columns except fileid are the same (the fileids differ
as well).

It looks like each backup of such a file creates two new
lines in the file table.

As a test, I removed the older filenameid entry. Then I checked the file
table for rows where jobid, pathid, and fileindex matched, with one row
pointing to the removed filenameid and one to the newer one. All rows related
to the removed filenameid I deleted; for the remaining rows I updated the
filenameid to the newer one (only one such row was found).

After checking the catalog and doing a restore test, no error related to
this file is shown any more. The stats like permissions and timestamps
are now also correct.
are now also fixed.

I have found a bunch of other filenames which are duplicated in the filename
table. I also found some duplicate names in the path table.
Have I forgotten anything?

I fixed it with a Python script one step at a time, but this cannot be
the best solution.

Is there a tool which can fix such an issue?

Cheers,
Pierre





[Bacula-users] Solved: Try to use DiskToCatalog and VolumeToCatalog checks but got error Unimplemented ...

2021-06-13 Thread Pierre Bernhardt
Hello,

I am surprised that the list does not send my own mail back to me.
You answered directly to me instead of to the list, so I cannot answer
to the list either, and the information is lost for other people :-(

On 11.06.21 at 17:47, Martin Simmons wrote:
> You need to make a special job with Type = Verify and run that.
OK, that was the issue. After creating home_file_verify as a copy of an
already existing job, it still did not work immediately.

The second issue was that in the run command I had to add FileSet="home Set",
although it was already configured in the job definition:

# List of files from home
FileSet {
Name = "home Set"
Include {
Options {
compression = GZIP;
basejob = A;
accurate = ipnugsm;
verify = ipnsm1;
aclsupport = yes;
onefs = yes;
signature = SHA1;
xattrsupport = yes;
noatime=yes;
}
File = /home
}
Exclude {
File = /.journal
File = /.fsck
}
}

JobDefs {
  Name = "RotateJobData"
  Type = Backup
  Level = Incremental
  Schedule = "CycleData"
  Max Start Delay = 124
  Max Wait Time = 360
  Max Run Time = 360
  Spool Data = yes
  Spool Attributes = yes
  Spool Size = 8589934592
  Messages = Standard
  Pool = Daily
  Storage = "Disk2"
  Incremental Backup Pool = Daily
  Differential Backup Pool = Weekly
  Full Backup Pool = Monthly
  Rerun Failed Levels = yes
  Allow Duplicate Jobs = no
  Cancel lower level duplicates = yes
  Cancel Queued Duplicates = yes
  Cancel Running Duplicates = no
  Accurate = yes
  Priority = 9
  Allow Mixed Priority = yes
}

JobDefs {
  Name = "VerifyJob"
  Type = Verify
  Level = Catalog
  Pool = Default
  Spool Attributes = yes
  Spool Size = 9663676416
  Messages = Standard
  Allow Duplicate Jobs = yes
  Cancel Queued Duplicates = no
  Accurate = yes
  FileSet = "Verify Full Set"
  Schedule = "DailyVerify"
  Accurate = yes
  Priority = 15
}

Job {
  Name = "file_home"
  JobDefs = "RotateJobData"
  Client = file-fd
  FileSet = "home Set"
  Write Bootstrap = "/var/lib/bacula/file_home.bsr"
  Max Wait Time = 720
  Max Run Time = 720
  # Enabled = No
}

Job {
  Name = "file_home_verify"
  JobDefs = "VerifyJob"
  FileSet = "home Set"
  Client = file-fd
  Enabled = No
}

So I am surprised that the FileSet rule in the Job resource does not overwrite
the fileset configuration in the VerifyJob JobDefs. But I could "overwrite" this
setting by adding fileset="home Set" to the run command.
Without that, it looks like it uses the fileset from the JobDefs: it should
only compare the /home filesystem, but it compared the / filesystem on disk
with /home in the catalog of the full backup, and so I got many, many irrelevant
differences, like /etc not existing in the catalog.

So with the following command the job is now running:
run job=file_home_verify jobid=53129 level=DiskToCatalog fileset="home Set"

Some mysterious entries were found in the verify, which I want to ask about in
the following threads:

1. Job rule does not overwrite JobDefs rules from a verify job?
2. Backups of "some dot files" are not made?

Thank you.
Pierre





[Bacula-users] Try to use DiskToCatalog and VolumeToCatalog checks but got error Unimplemented ...

2021-06-11 Thread Pierre Bernhardt
Hello,

I use Debian Buster with Bacula 9.4.2, with the latest Debian packages, on the
related systems.

I want to try the DiskToCatalog and VolumeToCatalog features to check the
entries in the database against the files on my disks and on my backup volumes.

I have a 3-level job scheme running (monthly Full, weekly Diff, and
daily Incremental), where Monthly and Weekly are backed up to LTO-4 and Daily
to a disk-based backup file.

Backup and restore have mostly been OK in the past, including full restores
from all three levels.

Here is an example with the latest full backup:

Backup:
Log records for job 53121
2021-06-06 23:50:00
backup-dir
Start Backup JobId 53121, Job=backup_full.2021-06-06_23.50.00_25
2021-06-06 23:50:01
backup-sd
3307 Issuing autochanger "unload Volume LTO40077, Slot 19, Drive 1" command.
2021-06-06 23:53:56
backup-dir
Using Device "HPUltrium4-2" to write.
2021-06-06 23:53:57
backup-sd
3304 Issuing autochanger "load Volume LTO40073, Slot 4, Drive 1" command.
2021-06-06 23:55:30
backup-sd
3305 Autochanger "load Volume LTO40073, Slot 4, Drive 1", status is OK.
2021-06-06 23:55:38
backup-sd
Volume "LTO40073" previously written, moving to end of data.
2021-06-06 23:57:53
backup-sd
Spooling data ...


Ready to append to end of Volume "LTO40073" at file=470.
2021-06-06 23:59:40
backup-fd
 /var/lib/postgresql is a different filesystem. Will not descend from / into it.
2021-06-07 00:24:01
backup-fd
 /run is a different filesystem. Will not descend from / into it.
2021-06-07 00:25:56
backup-fd
 /var/lib/postgresql is a different filesystem. Will not descend from /var into it.
2021-06-07 00:35:20
backup-sd
Committing spooled data to Volume "LTO40073". Despooling 4,171,824,686 bytes ...
2021-06-07 00:37:18
backup-sd
Despooling elapsed time = 00:01:58, Transfer rate = 35.35 M Bytes/second


Elapsed time=00:39:25, Transfer rate=1.759 M Bytes/second


Sending spooled attrs to the Director. Despooling 20,844,697 bytes ...
2021-06-07 01:00:40
backup-dir
Bacula backup-dir 9.4.2 (04Feb19):
  Build OS:   x86_64-pc-linux-gnu debian 10.5
  JobId:  53121
  Job:backup_full.2021-06-06_23.50.00_25
  Backup Level:   Full
  Client: "backup-fd" 9.4.2 (04Feb19) 
x86_64-pc-linux-gnu,debian,10.5
  FileSet:"Full Set" 2017-10-09 08:53:50
  Pool:   "Monthly" (From Job FullPool override)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"FibreCAT TX48 S2" (From Pool resource)
  Scheduled time: 06-Jun-2021 23:50:00
  Start time: 06-Jun-2021 23:53:56
  End time:   07-Jun-2021 01:00:39
  Elapsed time:   1 hour 6 mins 43 secs
  Priority:   9
  FD Files Written:   86,324
  SD Files Written:   86,324
  FD Bytes Written:   4,125,513,811 (4.125 GB)
  SD Bytes Written:   4,160,381,028 (4.160 GB)
  Rate:   1030.6 KB/s
  Software Compression:   76.1% 4.2:1
  Comm Line Compression:  None
  Snapshot/VSS:   no
  Encryption: yes
  Accurate:   yes
  Volume name(s): LTO40073
  Volume Session Id:  688
  Volume Session Time:1617149650
  Last Volume Bytes:  482,907,608,064 (482.9 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK

Now to run a verify against the disk:

*run job=web_server jobid=53121 level=DiskToCatalog
Run Backup job
JobName:  web_server
Level:DiskToCatalog
Client:   web-fd
FileSet:  Server Set
Pool: Daily (From Job resource)
Storage:  Disk2 (From Pool resource)
When: 2021-06-11 10:32:01
Priority: 9
OK to run? (yes/mod/no):
Job queued. JobId=53191

backup-dir Fatal error: fd_cmds.c:377 Unimplemented backup level 100 d
 Using Device "DiskStorage2" to write.
 Start Backup JobId 53191, Job=web_server.2021-06-11_10.32.03_43
backup-dir Fatal error: Network error with FD during Backup: ERR=Interrupted system call
backup-dir
Error: Bacula backup-dir 9.4.2 (04Feb19):
  Build OS:   x86_64-pc-linux-gnu debian 10.5
  JobId:  53191
  Job:web_server.2021-06-11_10.32.03_43
  Backup Level:   DiskToCatalog
  Client: "web-fd" 9.4.2 (04Feb19) 
x86_64-pc-linux-gnu,debian,10.5
  FileSet:"Server Set" 2014-11-05 16:31:47
  Pool:   "Daily" (From Job resource)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"Disk2" (From Pool resource)
  Scheduled time: 11-Jun-2021 10:32:01
  Start time: 11-Jun-2021 10:32:05
  End time:   11-Jun-2021 10:36:06
  Elapsed time:   4 mins 1 sec
  Priority:   9
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software 

[Bacula-users] Segmentation fault with bat (bacula-console-qt) after searching for expired media and confirming the error message

2021-03-31 Thread Pierre Bernhardt
Hello,

the problem is reproducible at my workstation.

1. Start bat
2. Connect to director
3. Open Media view
4. Enable control switch "Expired"
5. Press

Result:
A window with:
"bat: ERROR in medialist/mediaview.cpp:203 Failed ASSERT: fieldlist.size() !=9"
and, after pressing OK, "Segmentation fault" is shown in the console where
bat was started.

I have already seen segmentation faults in other situations, so I think this
is not a problem caused directly by this error, but rather a problem after
confirming the error message. I have seen this segmentation fault message with
other errors for years, but with these steps it is reproducible.
bat used on the director server reproduced the same.

My system on workstation:
Linux nihilnihil 4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux
Debian 10
ii  bacula-common      9.4.2-2+deb10u1  amd64  network backup service - common support files
ii  bacula-console     9.4.2-2+deb10u1  amd64  network backup service - text console
ii  bacula-console-qt  9.4.2-2+deb10u1  amd64  network backup service - Bacula Administration Tool

My system on director:
Linux server 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux
Debian 10
ii  bacula-bscan            9.4.2-2+deb10u1  amd64  network backup service - bscan tool
ii  bacula-common           9.4.2-2+deb10u1  amd64  network backup service - common support files
ii  bacula-common-pgsql     9.4.2-2+deb10u1  amd64  network backup service - PostgreSQL common files
ii  bacula-console          9.4.2-2+deb10u1  amd64  network backup service - text console
ii  bacula-console-qt       9.4.2-2+deb10u1  amd64  network backup service - Bacula Administration Tool
ii  bacula-director         9.4.2-2+deb10u1  amd64  network backup service - Director daemon
ii  bacula-director-common  9.4.2-2+deb10u1  all    transitional package
ii  bacula-director-pgsql   9.4.2-2+deb10u1  all    network backup service - PostgreSQL storage for Director
ii  bacula-fd               9.4.2-2+deb10u1  amd64  network backup service - file daemon
ii  bacula-sd               9.4.2-2+deb10u1  amd64  network backup service - storage daemon
ii  bacula-server           9.4.2-2+deb10u1  all    network backup service - server metapackage

It would be nice to hear why this segmentation fault occurs.

Cheers.





Re: [Bacula-users] Unable to keep the job history for a longer time in the baculum history table

2020-10-20 Thread Pierre Bernhardt
On 20.10.20 19:09, Marcin Haba wrote:
Hi,

I want to purge only the file information, without removing the job and
volume history.

>>> Here you can find more information about the update command:
>>>
>>> https://www.bacula.org/9.6.x-manuals/en/console/Bacula_Console.html#432
But I cannot find information on what I must configure to remove only the file
information from the database.
In bat it is possible to remove the files of a single job, but I want to remove
the files in a more automatic way after a period.
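For reference, if I read the manual correctly, the Client resource's File Retention together with AutoPrune does exactly this (a sketch; the values are examples):

  Client {
    Name = backup-fd
    ...
    AutoPrune = yes
    File Retention = 60 days    # prunes only the file records
    Job Retention = 6 months    # job and volume history is kept longer
  }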

Thank you,
Pierre





Re: [Bacula-users] Parallel jobs not running with different priority but enabled allow mixed priority

2020-08-02 Thread Pierre Bernhardt
On 06.07.20 at 08:54, Pierre Bernhardt wrote:
Hello,

> Running 2 jobs with the same priority at the same time works well.
> So I used the priority to order the start of the jobs and tried to run
> jobs with different priorities by using "Allow Mixed Priority = yes" for
> all the relevant jobs:
Does nobody have an idea what is happening? Allow Mixed Priority does not
seem to do the job as expected.

Here I started the 4 jobs sequentially, beginning with nihilnihil_home, so
a second job should start at the same time; I would expect conny_home.

So why does it not start?

Scheduled Jobs:
Level  Type Pri  Scheduled  Job Name   Volume
===
Full   Backup12  02-Aug-20 23:50nihilnihil_homeDISK016
Full   Backup13  02-Aug-20 23:50file_home  DISK016
Full   Backup13  02-Aug-20 23:50conny_home DISK016
Full   Backup13  02-Aug-20 23:50mail_home  DISK016


Running Jobs:
Console connected using TLS at 02-Aug-20 12:51
 JobId  Type Level Files Bytes  Name  Status
==
 49250  Back Incr  0 0  nihilnihil_home   is running
 49251  Back Incr  0 0  conny_home        is waiting for higher priority jobs to finish
 49252  Back Incr  0 0  mail_home         is waiting on max Storage jobs
 49254  Back Incr  0 0  file_home         is waiting execution


Cheers,
Pierre





[Bacula-users] Parallel jobs not running with different priority but enabled allow mixed priority

2020-07-06 Thread Pierre Bernhardt
Hello,

Running 2 jobs with the same priority at the same time works well.
So I used the priority to order the start of the jobs, and tried to run
jobs with different priorities by using "Allow Mixed Priority = yes" for
all the relevant jobs:

# Job definition for rotate
JobDefs {
  Name = "RotateJob"
  Type = Backup
  Level = Incremental
  Schedule = "Cycle"
  Max Start Delay = 124
  Max Wait Time = 360
  Max Run Time = 360
  Spool Data = yes
  Spool Attributes = yes
  Messages = Standard
  Pool = Daily
  Storage = "Disk2"
  Incremental Backup Pool = Daily
  Differential Backup Pool = Weekly
  Full Backup Pool = Monthly
  Rerun Failed Levels = yes
  Allow Duplicate Jobs = no
  Cancel lower level duplicates = yes
  Cancel Queued Duplicates = yes
  Cancel Running Duplicates = no
  Accurate = yes
  Priority = 9
  Allow Mixed Priority = yes
}
…

Here, one job definition for each priority:

…
Job {
  Name = "conny_full"
  JobDefs = "RotateJob"
  Client = conny-fd
  FileSet = "Full Set"
  Write Bootstrap = "/var/lib/bacula/conny_full.bsr"
}

Job {
  Name = "conny_home"
  JobDefs = "RotateJob"
  Client = conny-fd
  FileSet = "home Set"
  Write Bootstrap = "/var/lib/bacula/conny_home.bsr"
  Priority = 13
}

Job {
  Name = "nihilnihil_home"
  JobDefs = "RotateJob"
  Client = nihilnihil-fd
  FileSet = "home wo WinWork Set"
  Write Bootstrap = "/var/lib/bacula/nihilnihil_home.bsr"
  Priority = 12
}
…

However, only nihilnihil_home is running at the moment, although:

*status dir
backup-dir Version: 9.4.2 (04 February 2019) x86_64-pc-linux-gnu debian 
buster/sid
Daemon started 21-May-20 22:27, conf reloaded 21-May-2020 22:27:40
 Jobs: run=560, running=6 mode=0,0
 Heap: heap=913,408 smbytes=378,789 max_bytes=951,271 bufs=1,227 max_bufs=1,521
 Res: njobs=40 nclients=17 nstores=6 npools=10 ncats=1 nfsets=8 nscheds=3

Scheduled Jobs:
Level  Type Pri  Scheduled  Job Name   Volume
===
IncrementalBackup 9  06-Jul-20 23:50backup_fullDISK005
IncrementalBackup 9  06-Jul-20 23:50nihilnihil_fullDISK005
IncrementalBackup 9  06-Jul-20 23:50conny_full DISK005
IncrementalBackup 9  06-Jul-20 23:50mail_full  DISK005
IncrementalBackup 9  06-Jul-20 23:50file_full  DISK005
IncrementalBackup 9  06-Jul-20 23:50web_full   DISK005
IncrementalBackup12  06-Jul-20 23:50nihilnihil_homeDISK005
IncrementalBackup13  06-Jul-20 23:50web_server DISK005
IncrementalBackup13  06-Jul-20 23:50file_home  DISK005
IncrementalBackup13  06-Jul-20 23:50conny_home DISK005
IncrementalBackup13  06-Jul-20 23:50mail_home  DISK005
Full   Backup19  06-Jul-20 23:51BackupCatalog  DISK002


Running Jobs:
Console connected using TLS at 26-Jun-20 06:37
 JobId  Type Level Files Bytes  Name  Status
==
 48916  Back Full  209,045  61.50 G  nihilnihil_home   is running
 48917  Back Full        0        0  web_server        is waiting for higher priority jobs to finish
 48918  Back Full        0        0  file_home         is waiting execution
 48919  Back Full        0        0  mail_home         is waiting execution
 48920  Back Full        0        0  conny_home        is waiting execution
 48921  Back Full        0        0  BackupCatalog     is waiting execution


The limit of max 2 jobs is set up in the drive's configuration area.
…
Director {# define myself
  Name = backup-dir
  Description = "Director on backup server backup."
  DIRport = 9101
  QueryFile = "/etc/bacula/scripts/query.sql"
  WorkingDirectory = "/var/lib/bacula"
  PidDirectory = "/run/bacula"
  Maximum Concurrent Jobs = 4
  Password = "password"
  Messages = Daemon
  DirAddress = backup
  # TLS configuration
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Allowed CN = "bacula@backup"
  TLS Allowed CN = "bacula@in94"
  TLS Allowed CN = "bacula@nihilnihil"
  TLS Allowed CN = "pierrei@"
  @/etc/bacula/tls_server.conf
}
…
Storage { # definition of myself
  Name = backup-sd
  SDPort = 9103  # Director's port
  WorkingDirectory = "/var/lib/bacula"
  Pid Directory = "/var/run/bacula"
  Maximum Concurrent Jobs = 20
  SDAddress = backup
  # Incoming connections from director
  TLS Enable = yes
  TLS Require = yes
  # No Verify because of sd connection cookie
  TLS Verify Peer = no
  # Server port
  @/etc/bacula/tls_server.conf
}
…
Storage {
  Name = "FibreCAT TX48 S2"
  address = backup
  SDPort = 9103
  Password = "password"
  Device = "FibreCAT TX48 S2"
  Media Type = "LTO-3"
  Autochanger = Yes
  Maximum Concurrent Jobs = 2
  # TLS Configuration
  TLS Enable = yes
  TLS Require = 

Re: [Bacula-users] Data spooling failing with Permission Denied

2020-06-18 Thread Pierre Bernhardt
On 18.06.20 at 05:35, Ryan Sizemore wrote:
> Device {
>   Name = LTO-4
>   Media Type = LTO-4
>   Archive Device = /dev/nst0
>   AutomaticMount = yes;
>   AlwaysOpen = yes;
>   RemovableMedia = yes;
>   RandomAccess = no;
>   Maximum File Size = 10GB
>   AutoChanger = yes
>   Maximum Spool Size = 1000GB
>   Spool Directory = "/scratch/spool"
> }
> 
> Here are the JobDefs and Job from the director:
> 
> JobDefs {
>   Name = "DefaultJob"
>   Type = Backup
>   Level = Incremental
>   Client = pacific-fd
>   FileSet = "Full Set"
>   Schedule = "WeeklyCycle"
>   Storage = File1
>   Messages = Standard
>   Pool = File
>   SpoolAttributes = yes
>   Priority = 10
>   Write Bootstrap = "/var/lib/bacula/%c.bsr"
> }
> 
> Job {
>   Name = "SynologyTest"
>   JobDefs = "DefaultJob"
>   FileSet = "SynologyTestFileSet"
>   Storage = LTO-4
>   Pool = TapePool
>   SpoolData = yes
> }
Be aware that /scratch should not be backed up while spooling is enabled;
it should be excluded in the SynologyTestFileSet configuration
if the backup server itself is backed up with this job.

Cheers,





Re: [Bacula-users] Data spooling failing with Permission Denied

2020-06-18 Thread Pierre Bernhardt
> root@pacific:/etc/bacula# ls -lsa /scratch/
> total 28
>  4 drw---  4 bacula bacula  4096 Jun 18 01:55 .
>  4 drwxr-xr-x 26 root   root4096 Jun 18 01:55 ..
> 16 drwx--  2 root   root   16384 Jun 17 19:45 lost+found
>  4 drwxrwxrwx  2 bacula bacula  4096 Jun 18 03:03 spool
> 
Maybe the bacula-sd process is running with tape-group rights.
It could also be helpful to change the group of the spool directory to tape
and set rwx for the group as well.
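For example (an untested sketch):

  chgrp tape /scratch /scratch/spool
  chmod g+rwx /scratch /scratch/spool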

Cheers,






Re: [Bacula-users] bscan cannot open my tapes - help!

2020-05-29 Thread Pierre Bernhardt
On 29.05.20 at 16:03, Nico De Ranter wrote:
> But I also tried without manually mounting the tape but that didn't work
> either.  So I need to mount the tape first and then umount it to unlock it,
> correct?
> I also tried without bacula-sd running (to prevent it from locking the
> drive), but that didn't work either, hence my confusion.
bacula-sd blocks the drive, so bscan cannot get access.
umount normally releases the tape, and the tape is then unloaded to
its slot.
Bacula must not be running at all while bscan works. You only need a running
database engine.
As I remember, if you run bscan over a couple of tapes, the tapes should be
read in the same order in which they were written; otherwise, as I remember,
jobs which overlap from one tape to the next will possibly not be written
correctly to the database (but I'm unsure). The first tape must be
loaded by the changer, so you can use mtx to load the first tape and
then use mt to check the status.
After loading, bscan should run. It is a good idea to run bscan in
very verbose mode and write the log to a compressed file, because
uncompressed it will become very large, like:

bscan . 2>&1 |gzip >logfile.gz

In another terminal session you can check the file with zless logfile.gz,
but I think following it will not work, so you can check the file from time to
time by restarting zless and jumping to the end with >.

I hope my 2 cents are not false information. It has been a while
since I last used bscan (a couple of months?).

Cheers,
Pierre





Re: [Bacula-users] Bacula database is corrupt (Postgres)

2020-05-23 Thread Pierre Bernhardt
Hi,

if you want to restore the database, the best solution from my point of view
is:

1. Clean the database by dropping it,
2. Initialise the database by creating a new one,
3. Restore the latest pg_dump file,
4. bscan -m -s all newer tapes, which fills the database with the newer
backups.

These are the steps I took in the past, and they gave me a working
solution.
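In commands, roughly (an untested sketch, assuming PostgreSQL on Debian; the script path, dump file name, volume name, and device are placeholders):

  sudo -u postgres dropdb bacula                       # 1. drop the corrupt catalog
  sudo -u postgres createdb -O bacula bacula           # 2. recreate it ...
  /usr/share/bacula-director/make_postgresql_tables    #    ... and its tables
  sudo -u postgres psql bacula < bacula-catalog.dump   # 3. restore the latest pg_dump
  bscan -m -s -c bacula-sd.conf -V LTO40073 /dev/nst0  # 4. scan the newer tapes back in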

Cheers,
Pierre





Re: [Bacula-users] Tape Moving Error after server restart

2020-05-22 Thread Pierre Bernhardt
On 22.05.20 at 17:24, Christian Lehmann wrote:
> Hi Pierre,
Hi,

please answer to the list, not to me.

> so you think it might be an issue with this specific tape? That is not the
> case, as I tested different tapes and it happened anyway. See my last
> response; I attached some config files as well as the output of the
> successful btape test.
> I am also not sure how to simply test a tape with bscan.
I tested a corrupt tape in the past with this command:

bscan -c bacula-sd.conf -v -d 99 -dt -S -r -h localhost -t 5432 -P secretdbpassword -V medianame /dev/nst0 2>&1 |gzip >|bscan.log.gz

This produces a very large logfile, which here is compressed inline on the
filesystem as a gz file.
You can follow it with zless bscan.log.gz in another terminal session.

If you also want to synchronize the database you can add -m -s to the list of
options, but I think that is not needed; you should first check for errors at
the end of the tape without these options.

More useful information can perhaps be found by checking your drive for errors
with some useful tools like tapeinfo or, as I have known for a few weeks,
smartctl, which can also check tape drives for errors (but both tools do not
use the nst0 device node; they use the sg device node, so you must find the
related one, e.g. /dev/sg0; with tapeinfo it is easiest to simply try each of
them).
Use both tools with a loaded tape. Maybe comparing the output at the start and
at the end is helpful for finding issues.
I hope my information is correct; it has been a while since I last found a
corrupt tape, so I have not used the tools since then.

Cheers,
Pierre





Re: [Bacula-users] Tape Moving Error after server restart

2020-05-13 Thread Pierre Bernhardt
Hello,

can you check the tape with a high-verbosity bscan run?

Cheers,





Re: [Bacula-users] waiting on max Job jobs - not cancelled?

2020-04-19 Thread Pierre Bernhardt
On 03.04.20 at 04:58, Pierre Bernhardt wrote:
> I cannot see an error. By the way, here is my JobDefs configuration for my
> daily jobs, which should work nearly as you expect:
> 
> JobDefs {
>   Name = "RotateJob"
>   Type = Backup
>   Level = Incremental
>   Schedule = "Cycle"
>   Max Start Delay = 124
>   Max Wait Time = 360
>   Max Run Time = 360
>   Spool Data = yes
>   Spool Attributes = yes
>   Messages = Standard
>   Pool = Daily
>   Storage = "Disk2"
>   Incremental Backup Pool = Daily
>   Differential Backup Pool = Weekly
>   Full Backup Pool = Monthly
>   Rerun Failed Levels = yes
>   Allow Duplicate Jobs = no
>   Cancel lower level duplicates = yes
>   Cancel Queued Duplicates = yes
>   Cancel Running Duplicates = no
>   Accurate = yes
>   Priority = 9
>   Allow Mixed Priority = yes
> }
> 
> By the way, contrary to what I intended, I remember it also cancels
> lower-level running jobs (an Inc if a Diff is started), although it should
> first finish the Inc and then start the Diff job. Only if the Inc is waiting
> in the queue should it be canceled. I must check it again later.

I tested something:

FULL job is running: starting an INC, DIFF, or FULL is aborted immediately.
FULL job is queued: starting an INC is aborted immediately. Starting a FULL
aborts the waiting FULL, and the new job is queued.

DIFF job is running: starting an INC or DIFF is aborted immediately. Starting a
FULL aborts the DIFF, and the new FULL job is queued.
DIFF job is queued: starting an INC is aborted immediately. Starting a DIFF or
FULL aborts the DIFF, and the new job is queued.

INC job is running: starting an INC is aborted immediately. Starting a DIFF or
FULL aborts the INC, and the new job is queued.
INC job is queued: starting an INC, DIFF, or FULL aborts the INC, and the new
job is queued.

So for running jobs, only a new job at the same level or higher aborts the
running one.
For queued jobs this is OK, but for running jobs a new job at the same level
should not be started: better to let it wait, or to cancel the new one.
If there were a way for a higher-level job to wait until every running job has
finished, that would also be welcome, so that I could start a new full after
an inc has been canceled.

But for queued jobs the behavior is exactly as expected and needed.

So, any idea what I must change to get this effect for running jobs without
changing the queued behavior?

Cheers,
Pierre





Re: [Bacula-users] Running Copy Job Immediately After Disk Backups

2020-04-07 Thread Pierre Bernhardt
On 06.04.20 at 21:43, Phil Stracchino wrote:
> On 2020-04-06 14:52, Pierre Bernhardt wrote:
>> Hi,
>>
>> schedule your jobs with prio 10 for parallel running.
>> schedule a job which starts the copy job with prio >= 11 so it runs after
>> all prio 10 jobs has been finished which start the copy job.
>>
>> Hope it will work as expected ;-)
> 
> 
> No, this will not work.  The execution parameters for a Copy job —
> including which jobs it is to copy — are evaluated not at the time the
> job actually begins to run, but when it is queued to run, and running
> jobs which have not yet completed will not be considered for copying.
> The way the selection code currently works, you can schedule and queue a
> Copy job only after all of the jobs you want it to copy have completed.
You haven't understood me:

The prio 11 job should be scheduled 1 min after the prio 10 jobs,
and when it starts it should launch *a new job* by executing the target
copy job. This should avoid the problem.

In short, the prio 11 job is only a helper job which starts the copy job, like here:
>> Maybe you can deploy an After Job script that can start the copy of the 
>> original backup job after a few seconds:
>>
>> run job=copy_job jobid=xx yes | bconsole
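
An untested sketch of such a helper job (the names are made up, and the usual
mandatory Job directives are omitted):

  Job {
    Name = "StartCopyJob"     # the prio 11 helper
    Type = Admin              # assumption: it only triggers the real copy job
    Priority = 11
    ...
    RunAfterJob = "/bin/sh -c 'echo \"run job=copy_job yes\" | bconsole'"
  }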

Cheers,
Pierre





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Running Copy Job Immediately After Disk Backups

2020-04-06 Thread Pierre Bernhardt
Hi,

Schedule your backup jobs with prio 10 so they run in parallel.
Schedule a helper job with prio >= 11 that starts the copy job; it runs
only after all the prio 10 jobs have finished.

Hope it will work as expected ;-)

Cheers,



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] waiting on max Job jobs - not cancelled?

2020-04-03 Thread Pierre Bernhardt
Am 03.04.20 um 14:19 schrieb Gary R. Schmidt:
> You need to run the Differential, Incremental, and Full jobs at different 
> priorities - that reflect how you want things done - and set "Cancel Lower 
> Level Duplicate = Yes" and "Allow Mixed Priority = Yes" in them all.
Hi,

that cannot really be true, because I do not use different priorities for
different levels, and yet it mostly works as expected: if a Diff job is
queued, a new Full cancels the Diff, and a new Inc is also cancelled.
For the moment I'm migrating some tapes, so in the next days I have no time
to test it again.

Schedule {
  Name = "Cycle"
  Run = Level=Full 1st sun at 23:50
  Run = Level=Differential 2nd-5th sun at 23:50
  Run = Level=Incremental mon-sat at 23:50
}

But I will recheck later.

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] waiting on max Job jobs - not cancelled?

2020-04-02 Thread Pierre Bernhardt





Am 01.04.20 um 17:41 schrieb Bernie Elbourn:
> Oops...
>
> On 01/04/2020 16:10, Bernie Elbourn wrote:
>> Hi,
>>
>> Oddly, jobs run sequentially as follows rather than duplicates being 
>> cancelled:
>>
>> Running Jobs:
>> Console connected at 29-Mar-20 12:36
>>  JobId  Type Level Files Bytes  Name  Status
>> ==
>>  70172  Back Diff12036.70 M Backup-pc is running
>>  70173  Back Incr  0 0  Backup--pc is waiting on max Job jobs
>> 
>>
>> Are there any pointers to trace why the duplicate job 70173  is not 
>> cancelled?
>
> Obfuscation error both above should read Backup-pc - they were same name.
Is it exactly the same configured job?

I cannot see an error. By the way, here is my JobDefs configuration for my
daily jobs, which should work nearly as you expect:

JobDefs {
  Name = "RotateJob"
  Type = Backup
  Level = Incremental
  Schedule = "Cycle"
  Max Start Delay = 124
  Max Wait Time = 360
  Max Run Time = 360
  Spool Data = yes
  Spool Attributes = yes
  Messages = Standard
  Pool = Daily
  Storage = "Disk2"
  Incremental Backup Pool = Daily
  Differential Backup Pool = Weekly
  Full Backup Pool = Monthly
  Rerun Failed Levels = yes
  Allow Duplicate Jobs = no
  Cancel lower level duplicates = yes
  Cancel Queued Duplicates = yes
  Cancel Running Duplicates = no
  Accurate = yes
  Priority = 9
  Allow Mixed Priority = yes
}

By the way, contrary to my intention, I remember it also cancels lower-level
running jobs (an Inc if a Diff is started), although it should first finish
the Inc and then start the Diff job. Only if the Inc is waiting in
the queue should it be cancelled. I must check it again later.

Cheers,
Pierre




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Simple disk storage devices migrations/backups

2020-04-01 Thread Pierre Bernhardt
Am 01.04.20 um 15:19 schrieb Martin Simmons:
> Yes, I think 1 is the best solution, but it will not fix existing backups.
> 
> Migration could also be changed to allow certain types of error, like restore
> does with the "Restore OK -- with errors" status.
I think that, to correct the problems on existing backups, a bscan -m -s
should also fix these issues permanently, so a patch of bscan is needed too.
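
The invocation would be roughly like this (config path and exact option set
are assumptions; the volume and device names are the ones from this thread):

  bscan -m -s -v -V "DISK016|DISK017" -c /etc/bacula/bacula-sd.conf DiskStorage2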

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Solved: Execute command on client

2020-04-01 Thread Pierre Bernhardt
Am 01.04.20 um 12:37 schrieb Wanderlei Huttel:
> ClientRunBeforeJob - Run In client before backup  (single line config)
> ClientRunAfterJob - Run In client after backup  (single line config)
> RunScript -  (multiples line config)
> 
> You can take a look at the Job Resource in the manual and look for the
> "RunScript {body-of-runscript}" parameter:
> https://www.bacula.org/9.6.x-manuals/en/main/Configuring_Director.html#SECTION00213
That is the information I had not found. Thank you.
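
For the record, a minimal sketch of how I understand it (job name and script
paths are made up):

  Job {
    Name = "SomeJob"
    ...
    ClientRunBeforeJob = "/usr/local/bin/lvm-snap.sh create"
    ClientRunAfterJob  = "/usr/local/bin/lvm-snap.sh remove"
  }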

Cheers,
Pierre




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Execute command on client

2020-04-01 Thread Pierre Bernhardt
Hello,

is it possible to execute scripts or commands on a client, e.g. to create
snapshots before backing them up? I did not find a description of this
feature. I need to execute a script/command before and after a backup runs,
to create and remove the LVM snapshot.
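
What I have in mind is roughly this, as an untested sketch (VG/LV names,
snapshot size and mount point are placeholders):

  #!/bin/sh
  # before the backup: create and mount a snapshot of the LV
  lvcreate --snapshot --size 5G --name home_snap /dev/vg0/home
  mount /dev/vg0/home_snap /mnt/snap
  # ... the backup then reads /mnt/snap ...
  # after the backup: unmount and remove the snapshot again
  umount /mnt/snap
  lvremove -f /dev/vg0/home_snap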

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Simple disk storage devices migrations/backups

2020-04-01 Thread Pierre Bernhardt
Am 01.04.20 um 00:25 schrieb Pierre Bernhardt:
> Am 01.04.20 um 00:11 schrieb Pierre Bernhardt:
> So now I restarted a migration to check the result.

Here is the output of the migration job.
The migration back to tape has now finished without problems:
01-Apr 00:19 backup-dir JobId 47775: The following 1 JobId was chosen to be 
migrated: 47704
01-Apr 00:19 backup-dir JobId 47775: Migration using JobId=47704 
Job=nihilnihil_home.2020-03-21_20.23.31_49
01-Apr 00:19 backup-dir JobId 47775: Start Migration JobId 47775, 
Job=MigrateFile2Drive.2020-04-01_00.19.07_04
01-Apr 00:19 backup-dir JobId 47775: Using Device "DiskStorage2" to read.
01-Apr 00:19 backup-sd JobId 47775: Ready to read from volume "DISK016" on File 
device "DiskStorage2" (/media/baculadisk2).
01-Apr 00:19 backup-sd JobId 47775: Forward spacing Volume "DISK016" to addr=217
01-Apr 05:27 backup-sd JobId 47775: block.c:682 [SE0208] Volume data has error 
at 0:0! Short block of 57010 bytes on device "DiskStorage2" 
(/media/baculadisk2) discarded.
01-Apr 05:27 backup-sd JobId 47775: read_records.c:160 block.c:682 [SE0208] 
Volume data has error at 0:0! Short block of 57010 bytes on device 
"DiskStorage2" (/media/baculadisk2) discarded.
01-Apr 05:27 backup-sd JobId 47775: End of Volume "DISK016" at 
addr=972406571008 on device "DiskStorage2" (/media/baculadisk2).
01-Apr 05:28 backup-sd JobId 47775: Ready to read from volume "DISK017" on File 
device "DiskStorage2" (/media/baculadisk2).
01-Apr 05:28 backup-sd JobId 47775: Forward spacing Volume "DISK017" to addr=213
01-Apr 06:27 backup-sd JobId 47775: End of Volume "DISK017" at 
addr=110838477984 on device "DiskStorage2" (/media/baculadisk2).
01-Apr 06:27 backup-sd JobId 47775: Elapsed time=06:07:56, Transfer rate=49.02 
M Bytes/second
01-Apr 08:30 backup-dir JobId 47775: Bacula backup-dir 9.4.2 (04Feb19):
  Build OS:   x86_64-pc-linux-gnu debian buster/sid
  Prev Backup JobId:  47704
  Prev Backup Job:nihilnihil_home.2020-03-21_20.23.31_49
  New Backup JobId:   47776
  Current JobId:  47775
  Current Job:MigrateFile2Drive.2020-04-01_00.19.07_04
  Backup Level:   Full
  Client: backup-fd
  FileSet:"Full Set" 2017-10-09 08:53:50
  Read Pool:  "Migrate" (From Job resource)
  Read Storage:   "Disk2" (From Pool resource)
  Write Pool: "Monthly" (From Job Pool's NextPool resource)
  Write Storage:  "FibreCAT TX48 S2" (From Job Pool's NextPool resource)
  Catalog:"MyCatalog" (From Client resource)
  Start time: 01-Apr-2020 00:19:11
  End time:   01-Apr-2020 08:25:40
  Elapsed time:   8 hours 6 mins 29 secs
  Priority:   21
  SD Files Written:   1,030,385
  SD Bytes Written:   1,082,331,572,757 (1.082 TB)
  Rate:   37080.1 KB/s
  Volume name(s): LTO40025|LTO40026
  Volume Session Id:  1
  Volume Session Time:1585692943
  Last Volume Bytes:  270,297,861,120 (270.2 GB)
  SD Errors:  0
  SD termination status:  OK
  Termination:Migration OK

Here from migration-backup job:
01-Apr 00:19 backup-dir JobId 47776: Recycled current volume "LTO40025"
01-Apr 00:19 backup-dir JobId 47776: Using Device "HPUltrium4-2" to write.
01-Apr 00:19 backup-sd JobId 47776: Recycled volume "LTO40025" on Tape device 
"HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst), all previous data lost.
01-Apr 04:30 backup-sd JobId 47776: [SI0202] End of Volume "LTO40025" at 
812:15489 on device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst). 
Write of 64512 bytes got -1.
01-Apr 04:30 backup-sd JobId 47776: Re-read of last block succeeded.
01-Apr 04:30 backup-sd JobId 47776: End of medium on Volume "LTO40025" 
Bytes=812,947,258,368 Blocks=12,601,488 at 01-Apr-2020 04:30.
01-Apr 04:30 backup-sd JobId 47776: 3307 Issuing autochanger "unload Volume 
LTO40025, Slot 17, Drive 1" command.
01-Apr 04:32 backup-dir JobId 47776: Recycled volume "LTO40026"
01-Apr 04:32 backup-sd JobId 47776: 3304 Issuing autochanger "load Volume 
LTO40026, Slot 9, Drive 1" command.
01-Apr 04:34 backup-sd JobId 47776: 3305 Autochanger "load Volume LTO40026, 
Slot 9, Drive 1", status is OK.
01-Apr 04:34 backup-sd JobId 47776: Recycled volume "LTO40026" on Tape device 
"HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst), all previous data lost.
01-Apr 04:34 backup-sd JobId 47776: New volume "LTO40026" mounted on device "HP 
Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst) at 01-Apr-2020 04:34.
01-Apr 06:30 backup-sd JobId 47776: Elapsed time=06:07:02,

Re: [Bacula-users] Simple disk storage devices migrations/backups

2020-03-31 Thread Pierre Bernhardt
Am 01.04.20 um 00:11 schrieb Pierre Bernhardt:
> In which installed files are block.c and read_records.c used?
> I compiled everything but replaced only the bacula-sd files. Maybe the
> modification is located in another file and not in that binary?
Ok, I found the file by using grep:

root@backup:/var/lib/bacula# strings /usr/lib/bacula/libbacsd-9.4.2.so |grep 
Short
[SE0208] Volume data has error at %u:%u! Short block of %d bytes on device %s 
discarded.
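
To sweep all installed Bacula binaries and libraries at once, something like
this should also work (install paths may differ):

  for f in /usr/sbin/bacula-* /usr/lib/bacula/*.so*; do
    strings "$f" 2>/dev/null | grep -q "Short block" && echo "$f"
  done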

The file is installed by the debian-common deb, so I have now installed the
newly created deb. As you can see, I also added a "has" to the message, so
we should now be able to see in the job output that the modified binary is
in use.

So now I restarted a migration to check the result.

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Simple disk storage devices migrations/backups

2020-03-31 Thread Pierre Bernhardt
Am 31.03.20 um 22:07 schrieb Pierre Bernhardt:
>> Your change to use M_INFO looks correct, but the block.c:682 message in the
>> log still says "Error:" so you are still running the original code.  Did you
>> run "make" and "make install" after changing the code?  Did they complete
>> without errors (you might need to run "make install" as root)?  Did you
>> restart the bacula-sd after that?
> I compiled everything on my workstation the Debian way, meaning I created a
> modified deb, transferred it to the backup server and installed it with
> dpkg -i. I also checked the md5sum of the binary before and after the
> installation, and the original binary differs from the newly compiled
> version.

In which installed files are block.c and read_records.c used?
I compiled everything but replaced only the bacula-sd files. Maybe the
modification is located in another file and not in that binary?

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Simple disk storage devices migrations/backups

2020-03-31 Thread Pierre Bernhardt
Am 31.03.20 um 17:05 schrieb Martin Simmons:
>>>>>> On Tue, 31 Mar 2020 12:40:04 +0200, Pierre Bernhardt said:
>>
>> Am 30.03.20 um 16:12 schrieb Martin Simmons:
>> Hello,
>>

>>
>> I think the return line should be corrected by changing the false? I'm a
>> C++ novice ;-)
>>
>>if (block->block_len > block->read_len) {
>>   dev->dev_errno = EIO;
>>   Mmsg4(dev->errmsg, _("[SE0208] Volume has data error at %u:%u! Short 
>> block of %d bytes on device %s discarded.\n"),
>>  dev->file, dev->block_num, block->read_len, dev->print_name());
>>   Jmsg(jcr, M_INFO, 0, "%s", dev->errmsg);
>>   dev->set_short_block();
>>   block->read_len = block->binbuf = 0;
>>   return true; /* return error */
>>}
> 
> No, returning true will not work correctly -- the calling function must get
> false for a short block.
OK. For the moment I am testing it without any return value, but I already
found another problem with the new binary; that is a separate point, though.

> Your change to use M_INFO looks correct, but the block.c:682 message in the
> log still says "Error:" so you are still running the original code.  Did you
> run "make" and "make install" after changing the code?  Did they complete
> without errors (you might need to run "make install" as root)?  Did you
> restart the bacula-sd after that?
I compiled everything on my workstation the Debian way, meaning I created a
modified deb, transferred it to the backup server and installed it with
dpkg -i. I also checked the md5sum of the binary before and after the
installation, and the original binary differs from the newly compiled version.

> I also notice that there is another use of M_ERROR at line 160 of
> read_records.c that causes a second error message:
> 
>Jmsg1(jcr, M_ERROR, 0, "%s", dev->errmsg);
> 
> This also needs to be changed to M_INFO.
I will do that as well and retest. I will also modify the message a little,
so I can verify that the message really comes from the modified binary.

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Simple disk storage devices migrations/backups

2020-03-31 Thread Pierre Bernhardt
Am 30.03.20 um 16:12 schrieb Martin Simmons:
Hello,

> You could try temporarily hacking bacula-sd to report the short block as an
> info message.  In src/stored/block.c, change M_ERROR to M_INFO in these lines:
> 
>   Mmsg4(dev->errmsg, _("[SE0208] Volume data error at %u:%u! Short block 
> of %d bytes on device %s discarded.\n"),
>  dev->file, dev->block_num, block->read_len, dev->print_name());
>   Jmsg(jcr, M_ERROR, 0, "%s", dev->errmsg);
It looks like it is not enough to change M_ERROR to M_INFO, because the job
still fails and so the metadata still won't migrate to the new job:

31-Mar 01:48 backup-dir JobId 47771: The following 1 JobId was chosen to be 
migrated: 47704
31-Mar 01:48 backup-dir JobId 47771: Migration using JobId=47704 
Job=nihilnihil_home.2020-03-21_20.23.31_49
31-Mar 01:48 backup-dir JobId 47771: Start Migration JobId 47771, 
Job=MigrateFile2Drive.2020-03-31_01.48.24_05
31-Mar 01:49 backup-dir JobId 47771: Using Device "DiskStorage2" to read.
31-Mar 01:53 backup-sd JobId 47771: Ready to read from volume "DISK016" on File 
device "DiskStorage2" (/media/baculadisk2).
31-Mar 01:53 backup-sd JobId 47771: Forward spacing Volume "DISK016" to addr=217
31-Mar 07:00 backup-sd JobId 47771: Error: block.c:682 [SE0208] Volume data 
error at 0:0! Short block of 57010 bytes on device "DiskStorage2" 
(/media/baculadisk2) discarded.
31-Mar 07:00 backup-sd JobId 47771: Error: read_records.c:160 block.c:682 
[SE0208] Volume data error at 0:0! Short block of 57010 bytes on device 
"DiskStorage2" (/media/baculadisk2) discarded.
31-Mar 07:00 backup-sd JobId 47771: End of Volume "DISK016" at 
addr=972406571008 on device "DiskStorage2" (/media/baculadisk2).
31-Mar 07:01 backup-sd JobId 47771: Ready to read from volume "DISK017" on File 
device "DiskStorage2" (/media/baculadisk2).
31-Mar 07:01 backup-sd JobId 47771: Forward spacing Volume "DISK017" to addr=213
31-Mar 08:00 backup-sd JobId 47771: End of Volume "DISK017" at 
addr=110838477984 on device "DiskStorage2" (/media/baculadisk2).
31-Mar 08:00 backup-sd JobId 47771: Elapsed time=06:07:58, Transfer rate=49.02 
M Bytes/second
31-Mar 10:01 backup-dir JobId 47771: Warning: Found errors during the migration 
process. The original job 47704 will be kept in the catalog and the Migration 
job will be marked in Error
31-Mar 10:01 backup-dir JobId 47771: Error: bsock.c:388 Wrote 4 bytes to 
Storage daemon:backup.localnet.cosmicstars.de:9103, but only 0 accepted.
31-Mar 10:01 backup-dir JobId 47771: Error: Bacula backup-dir 9.4.2 (04Feb19):
  Build OS:   x86_64-pc-linux-gnu debian buster/sid
  Prev Backup JobId:  47704
  Prev Backup Job:nihilnihil_home.2020-03-21_20.23.31_49
  New Backup JobId:   47772
  Current JobId:  47771
  Current Job:MigrateFile2Drive.2020-03-31_01.48.24_05
  Backup Level:   Full
  Client: backup-fd
  FileSet:"Full Set" 2017-10-09 08:53:50
  Read Pool:  "Migrate" (From Job resource)
  Read Storage:   "Disk2" (From Pool resource)
  Write Pool: "Monthly" (From Job Pool's NextPool resource)
  Write Storage:  "FibreCAT TX48 S2" (From Job Pool's NextPool resource)
  Catalog:"MyCatalog" (From Client resource)
  Start time: 31-Mar-2020 01:49:20
  End time:   31-Mar-2020 10:01:43
  Elapsed time:   8 hours 12 mins 23 secs
  Priority:   21
  SD Files Written:   1,030,385
  SD Bytes Written:   1,082,331,572,757 (1.082 TB)
  Rate:   36635.8 KB/s
  Volume name(s): LTO40025|LTO40026
  Volume Session Id:  1
  Volume Session Time:1585612030
  Last Volume Bytes:  270,297,861,120 (270.2 GB)
  SD Errors:  2
  SD termination status:  OK
  Termination:*** Migration Error ***



I think the return line should be corrected by changing the false? I'm a C++
novice ;-)

   if (block->block_len > block->read_len) {
  dev->dev_errno = EIO;
  Mmsg4(dev->errmsg, _("[SE0208] Volume has data error at %u:%u! Short 
block of %d bytes on device %s discarded.\n"),
 dev->file, dev->block_num, block->read_len, dev->print_name());
  Jmsg(jcr, M_INFO, 0, "%s", dev->errmsg);
  dev->set_short_block();
  block->read_len = block->binbuf = 0;
  return true; /* return error */
   }



> I suggest you also report this as a bug to https://bugs.bacula.org/.
Yes, I will do that later, once I have some more information.
On Debian buster I have another issue compiling the source, but that is a
separate point.

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Simple disk storage devices migrations/backups

2020-03-27 Thread Pierre Bernhardt
Am 27.03.20 um 21:26 schrieb Pierre Bernhardt:
> Am 27.03.20 um 18:26 schrieb Martin Simmons:

>>> Any idea how I can extract the single file for fileindex 932145 from the
>>> disks for comparison?
>>> If the short block is simply repeated as a whole block on the next disk,
>>> the problem could be fixed by modifying the database so that the short
>>> block is not read, or by truncating DISK016 at the short block (?)
>> You can find the filename for fileindex 932145 by running bscan -r -vv and
>> then do a restore for that filename.

> By the way, will the restore (with bconsole?) work without a problem? I
> will try it.

27-Mar 21:27 backup-sd JobId 47756: Ready to read from volume "DISK016" on File 
device "DiskStorage2" (/media/baculadisk2).
27-Mar 21:27 backup-sd JobId 47756: Forward spacing Volume "DISK016" to 
addr=971937834340
27-Mar 21:28 backup-sd JobId 47756: Error: block.c:682 [SE0208] Volume data 
error at 0:0! Short block of 57010 bytes on device "DiskStorage2" 
(/media/baculadisk2) discarded.
27-Mar 21:28 backup-sd JobId 47756: Error: read_records.c:160 block.c:682 
[SE0208] Volume data error at 0:0! Short block of 57010 bytes on device 
"DiskStorage2" (/media/baculadisk2) discarded.
27-Mar 21:28 backup-sd JobId 47756: End of Volume "DISK016" at 
addr=972406571008 on device "DiskStorage2" (/media/baculadisk2).
27-Mar 21:28 backup-sd JobId 47756: Ready to read from volume "DISK017" on File 
device "DiskStorage2" (/media/baculadisk2).
27-Mar 21:28 backup-sd JobId 47756: Forward spacing Volume "DISK017" to addr=213
27-Mar 21:28 backup-sd JobId 47756: End of Volume "DISK017" at addr=645332 on 
device "DiskStorage2" (/media/baculadisk2).
27-Mar 21:28 backup-sd JobId 47756: Elapsed time=00:00:15, Transfer rate=134.3 
K Bytes/second
27-Mar 21:28 backup-dir JobId 47756: Bacula backup-dir 9.4.2 (04Feb19):
  Build OS:   x86_64-pc-linux-gnu debian buster/sid
  JobId:  47756
  Job:RestoreFiles.2020-03-27_21.27.41_46
  Restore Client: nihilnihil-fd
  Where:  /tmp/restore
  Replace:Always
  Start time: 27-Mar-2020 21:27:44
  End time:   27-Mar-2020 21:28:07
  Elapsed time:   23 secs
  Files Expected: 1
  Files Restored: 1
  Bytes Restored: 2,122,031 (2.122 MB)
  Rate:   92.3 KB/s
  FD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Restore OK -- with errors
27-Mar 21:28 backup-dir JobId 47756: Begin pruning Jobs older than 99 years.
27-Mar 21:28 backup-dir JobId 47756: No Jobs found to prune.
27-Mar 21:28 backup-dir JobId 47756: Begin pruning Files.
27-Mar 21:28 backup-dir JobId 47756: No Files found to prune.
27-Mar 21:28 backup-dir JobId 47756: End auto prune.

Looks like the restore worked, albeit with the short-block error notice.

Next I used a newer backup from tape to restore the same file and compared
the two with sha256sum. Content and metadata look correct after the restore.

So my only problem is that, because the migration fails due to the
short-block notice, the migrated database entries are not transferred after
the migration back from disk to tape has completed.

Is there a way to finish the migration, maybe via a workaround?

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Simple disk storage devices migrations/backups

2020-03-27 Thread Pierre Bernhardt
Am 27.03.20 um 18:26 schrieb Martin Simmons:
Hi,

>> Any idea how I can extract the single file for fileindex 932145 from the
>> disks for comparison?
>> If the short block is simply repeated as a whole block on the next disk,
>> the problem could be fixed by modifying the database so that the short
>> block is not read, or by truncating DISK016 at the short block (?)
> You can find the filename for fileindex 932145 by running bscan -r -vv and
> then do a restore for that filename.
I found it in the database with:

select client.name, job.jobid, job.level, path.path, filename.name, file.lstat
  from file, filename, path, job, client
 where file.filenameid = filename.filenameid
   and file.pathid = path.pathid
   and file.jobid = job.jobid
   and job.clientid = client.clientid
   and job.jobid = 47704
   and file.fileindex = 932145
 order by job.endtime desc;

(Sorry for the long query ;-)

By the way, will the restore (with bconsole?) work without a problem? I will
try it.

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Simple disk storage devices migrations/backups

2020-03-26 Thread Pierre Bernhardt
Am 26.03.20 um 16:48 schrieb Martin Simmons:
> Looks like a bug to me, but a possible workaround is to limit the size of your
Me too. If the short block is correctly identified at write time, it should
be repeated on, or completed on, the new disk. In both cases the data on the
disks should be OK; only reading it back might then be a problem.
But if the tail of the short block is lost, the data on the disks would be
incomplete.

> disk volumes (see Maximum Volume Bytes) to avoid filling the disks during the
> backup.  This will avoid the "short block" when you migrate.
Yes, but I have disks of different sizes (320 GB to 2 TB) and cannot change
the maximum size every time.
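
For reference, the suggested cap would be a Pool directive, roughly like this
(the 300G value is only an example, chosen below my smallest 320 GB disk):

  Pool {
    Name = Migrate
    ...
    Maximum Volume Bytes = 300G
  }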

> BTW, can you post the log from jobid 47704 as well?

Migration job:

21-Mar 20:23 backup-dir JobId 47703: The following 1 JobId was chosen to be 
migrated: 46802
21-Mar 20:23 backup-dir JobId 47703: Migration using JobId=46802 
Job=nihilnihil_home.2020-01-05_23.50.01_29
21-Mar 20:23 backup-dir JobId 47703: Start Migration JobId 47703, 
Job=Migrate2FileTmpVol.2020-03-21_20.23.31_48
21-Mar 20:23 backup-dir JobId 47703: Using Device "HPUltrium4-2" to read.
21-Mar 20:23 backup-sd JobId 47703: 3307 Issuing autochanger "unload Volume 
LTO40027, Slot 5, Drive 1" command.
21-Mar 20:27 backup-sd JobId 47703: 3304 Issuing autochanger "load Volume 
LTO40026, Slot 9, Drive 1" command.
21-Mar 20:28 backup-sd JobId 47703: 3305 Autochanger "load Volume LTO40026, 
Slot 9, Drive 1", status is OK.
21-Mar 20:28 backup-sd JobId 47703: Ready to read from volume "LTO40026" on 
Tape device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst).
21-Mar 20:28 backup-sd JobId 47703: Forward spacing Volume "LTO40026" to 
addr=19:1457
22-Mar 00:36 backup-sd JobId 47703: End of Volume "LTO40026" at addr=628:5763 
on device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst).
22-Mar 00:37 backup-sd JobId 47703: 3307 Issuing autochanger "unload Volume 
LTO40026, Slot 9, Drive 1" command.
22-Mar 00:38 backup-sd JobId 47703: 3304 Issuing autochanger "load Volume 
LTO40025, Slot 17, Drive 1" command.
22-Mar 00:40 backup-sd JobId 47703: 3305 Autochanger "load Volume LTO40025, 
Slot 17, Drive 1", status is OK.
22-Mar 00:40 backup-sd JobId 47703: Ready to read from volume "LTO40025" on 
Tape device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst).
22-Mar 00:40 backup-sd JobId 47703: Forward spacing Volume "LTO40025" to 
addr=1:11779
22-Mar 05:33 backup-sd JobId 47703: End of Volume "LTO40025" at addr=0:0 on 
device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst).
22-Mar 05:33 backup-sd JobId 47703: 3307 Issuing autochanger "unload Volume 
LTO40025, Slot 17, Drive 1" command.
22-Mar 05:34 backup-sd JobId 47703: 3304 Issuing autochanger "load Volume 
LTO40027, Slot 5, Drive 1" command.
22-Mar 05:36 backup-sd JobId 47703: 3305 Autochanger "load Volume LTO40027, 
Slot 5, Drive 1", status is OK.
22-Mar 05:36 backup-sd JobId 47703: Ready to read from volume "LTO40027" on 
Tape device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst).
22-Mar 05:36 backup-sd JobId 47703: Forward spacing Volume "LTO40027" to 
addr=0:1
22-Mar 10:32 backup-sd JobId 47703: End of Volume "LTO40027" at addr=298:0 on 
device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst).
22-Mar 10:32 backup-sd JobId 47703: Elapsed time=14:03:10, Transfer rate=21.39 
M Bytes/second
22-Mar 12:21 backup-dir JobId 47703: Bacula backup-dir 9.4.2 (04Feb19):
  Build OS:   x86_64-pc-linux-gnu debian buster/sid
  Prev Backup JobId:  46802
  Prev Backup Job:nihilnihil_home.2020-01-05_23.50.01_29
  New Backup JobId:   47704
  Current JobId:  47703
  Current Job:Migrate2FileTmpVol.2020-03-21_20.23.31_48
  Backup Level:   Full
  Client: backup-fd
  FileSet:"Full Set" 2017-10-09 08:53:50
  Read Pool:  "Monthly" (From Job resource)
  Read Storage:   "FibreCAT TX48 S2" (From Pool resource)
  Write Pool: "Migrate" (From Job Pool's NextPool resource)
  Write Storage:  "Disk2" (From Job Pool's NextPool resource)
  Catalog:"MyCatalog" (From Client resource)
  Start time: 21-Mar-2020 20:23:34
  End time:   22-Mar-2020 12:16:45
  Elapsed time:   15 hours 53 mins 11 secs
  Priority:   21
  SD Files Written:   1,030,385
  SD Bytes Written:   1,082,331,572,757 (1.082 TB)
  Rate:   18924.9 KB/s
  Volume name(s): DISK016|DISK017
  Volume Session Id:  41
  Volume Session Time:1584646035
  Last Volume Bytes:  110,838,413,473 (110.8 GB)
  SD Errors:  0
  SD termination status:  OK
  Termination:Migration OK

Related Backup-Migration Job:


21-Mar 20:23 backup-dir JobId 47704: Using Volume "DISK016" from 'Scratch' pool.
21-Mar 20:23 backup-dir JobId 47704: Using Device "DiskStorage2" to write.
21-Mar 20:23 backup-sd JobId 47704: Wrote label to prelabeled Volume "DISK016" 

Re: [Bacula-users] Simple disk storage devices migrations/backups

2020-03-26 Thread Pierre Bernhardt
Am 26.03.20 um 14:06 schrieb Josh Fisher:
> On 3/25/2020 3:23 PM, Pierre Bernhardt wrote:
> And what are the Autochanger / Device configurations for the disk storage
> in bacula-sd.conf?

Device {
  Name = DiskStorage1
  Media Type = Disk
  Device Type = File
  Archive Device = /media/baculadisk1
  LabelMedia = yes;   # lets Bacula label unlabeled media
  AutoChanger = No;
  Random Access = Yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = yes;
  AlwaysOpen = Yes;
  Requires Mount = yes;
  Spool Directory = "/tmp"
  Maximum Spool Size = 17179869184
  Maximum Job Spool Size = 17179869184
  Mount Point = /media/baculadisk1
  Mount Command = "/usr/bin/pmount %m"
  Unmount Command = "/usr/bin/pumount %m"
  #Free Space Command = "echo `/bin/df %m | /usr/bin/tr -s \  | /usr/bin/cut -d 
\  -f 4 | /usr/bin/tail -1`*1024 | /usr/bin/bc"
}

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Simple disk storage devices migrations/backups

2020-03-25 Thread Pierre Bernhardt
Am 24.03.20 um 22:39 schrieb Pierre Bernhardt:
>
Today I tried again to migrate the job that uses the two disk files.
This time I put both files in one directory (I used my second
bay to mount the disk with the DISK017 file) and used a symbolic link:

-rw-rw-r-- 1 bacula tape 972406571008 Mar 24 17:10 /media/baculadisk1/DISK017
-rw-rw-r-- 1 bacula tape 972406571008 Mar 22 07:47 /media/baculadisk2/DISK016
lrwxrwxrwx 1 root   root   26 Mar 25 07:32 /media/baculadisk2/DISK017 
-> /media/baculadisk1/D
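
In shell terms the trick is roughly this (the device node is an assumption;
the link target is the DISK017 file shown above):

  mount /dev/sdc1 /media/baculadisk1     # second bay with the DISK017 disk
  ln -s /media/baculadisk1/DISK017 /media/baculadisk2/DISK017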

However, the migration situation has not changed:

Here are the messages from the backup job and from the migration job:


Backup (Migration) Job:
25-Mar 07:48 backup-dir JobId 47755: Recycled volume "LTO40026"
25-Mar 07:48 backup-dir JobId 47755: Using Device "HPUltrium4-2" to write.
25-Mar 07:48 backup-sd JobId 47755: 3307 Issuing autochanger "unload Volume 
LTO40030, Slot 8, Drive 1" command.
25-Mar 07:49 backup-sd JobId 47755: 3304 Issuing autochanger "load Volume 
LTO40026, Slot 9, Drive 1" command.
25-Mar 07:51 backup-sd JobId 47755: 3305 Autochanger "load Volume LTO40026, 
Slot 9, Drive 1", status is OK.
25-Mar 07:51 backup-sd JobId 47755: Recycled volume "LTO40026" on Tape device 
"HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst), all previous data lost.
25-Mar 12:02 backup-sd JobId 47755: [SI0202] End of Volume "LTO40026" at 813:1 
on device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst). Write of 
64512 bytes got -1.
25-Mar 12:02 backup-sd JobId 47755: Re-read of last block succeeded.
25-Mar 12:02 backup-sd JobId 47755: End of medium on Volume "LTO40026" 
Bytes=812,948,032,512 Blocks=12,601,500 at 25-Mar-2020 12:02.
25-Mar 12:02 backup-sd JobId 47755: 3307 Issuing autochanger "unload Volume 
LTO40026, Slot 9, Drive 1" command.
25-Mar 12:04 backup-dir JobId 47755: Recycled volume "LTO40025"
25-Mar 12:04 backup-sd JobId 47755: 3301 Issuing autochanger "loaded? drive 1" 
command.
25-Mar 12:04 backup-sd JobId 47755: 3302 Autochanger "loaded? drive 1", result: 
nothing loaded.
25-Mar 12:04 backup-sd JobId 47755: 3304 Issuing autochanger "load Volume 
LTO40025, Slot 17, Drive 1" command.
25-Mar 12:05 backup-sd JobId 47755: 3305 Autochanger "load Volume LTO40025, 
Slot 17, Drive 1", status is OK.
25-Mar 12:05 backup-sd JobId 47755: Recycled volume "LTO40025" on Tape device 
"HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst), all previous data lost.
25-Mar 12:05 backup-sd JobId 47755: New volume "LTO40025" mounted on device "HP 
Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst) at 25-Mar-2020 12:05.
25-Mar 14:01 backup-sd JobId 47755: Elapsed time=06:06:37, Transfer rate=49.20 
M Bytes/second
25-Mar 14:01 backup-sd JobId 47755: Sending spooled attrs to the Director. 
Despooling 300,838,146 bytes ...


Migration job:


25-Mar 07:48 backup-dir JobId 47754: The following 1 JobId was chosen to be 
migrated: 47704
25-Mar 07:48 backup-dir JobId 47754: Migration using JobId=47704 
Job=nihilnihil_home.2020-03-21_20.23.31_49
25-Mar 07:48 backup-dir JobId 47754: Start Migration JobId 47754, 
Job=MigrateFile2Drive.2020-03-25_07.48.01_43
25-Mar 07:48 backup-dir JobId 47754: Using Device "DiskStorage2" to read.
25-Mar 07:51 backup-sd JobId 47754: Ready to read from volume "DISK016" on File 
device "DiskStorage2" (/media/baculadisk2).
25-Mar 07:51 backup-sd JobId 47754: Forward spacing Volume "DISK016" to addr=217
25-Mar 12:59 backup-sd JobId 47754: Error: block.c:682 [SE0208] Volume data 
error at 0:0! Short block of 57010 bytes on device "DiskStorage2" 
(/media/baculadisk2) discarded.
25-Mar 12:59 backup-sd JobId 47754: Error: read_records.c:160 block.c:682 
[SE0208] Volume data error at 0:0! Short block of 57010 bytes on device 
"DiskStorage2" (/media/baculadisk2) discarded.
25-Mar 12:59 backup-sd JobId 47754: End of Volume "DISK016" at 
addr=972406571008 on device "DiskStorage2" (/media/baculadisk2).
25-Mar 13:00 backup-sd JobId 47754: Ready to read from volume "DISK017" on File 
device "DiskStorage2" (/media/baculadisk2).
25-Mar 13:00 backup-sd JobId 47754: Forward spacing Volume "DISK017" to addr=213
25-Mar 13:59 backup-sd JobId 47754: End of Volume "DISK017" at 
addr=110838477984 on device "DiskStorage2" (/media/baculadisk2).
25-Mar 13:59 backup-sd JobId 47754: Elapsed time=06:08:08, Transfer rate=49.00 
M Bytes/second
25-Mar 15:51 backup-dir JobId 47754: Warning: Found errors during the migration 
process. The original job 47704 will be kept in the catalog and the Migration 
job will be marked in Error
25-Mar 15:51 backup-dir JobId 47754: Error: bsock.c:388 Wrote 4 bytes to 
Storage daemon:backup.localnet.cosmicstars.de:9103, but only 0 accepted.
25-Mar 15:51 bac

[Bacula-users] Simple disk storage devices migrations/backups

2020-03-24 Thread Pierre Bernhardt
Hello,

I am trying to migrate a bigger job from three tapes to two disks.
I use a USB3-SATA disk swapping station in which every disk gets the same
SCSI ID, so I can use the same /dev node.
I use the mount point /media/baculadisk2.
Each disk is writable only by root, so the bacula user can only
write to files which already exist.
The labelled file, however, has read/write rights for bacula:tape, so
bacula can write, clear and read the file, but cannot create new
files in this mount point.
To label a new disk I create a filesystem, mount it at /media/baculadisk2,
create an empty file such as DISK018, and set the permissions
to -rw-rw for bacula:tape.
A label command for the DiskStorage2 device with the name DISK018
then finishes successfully, and the file can be filled
as long as enough space is available on the disk.
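
In shell terms, the whole labelling procedure is roughly this (the device
node is an assumption; storage and pool names as I use them here):

  mkfs.ext4 /dev/sdb1
  mount /dev/sdb1 /media/baculadisk2
  touch /media/baculadisk2/DISK018
  chown bacula:tape /media/baculadisk2/DISK018
  chmod 660 /media/baculadisk2/DISK018
  echo "label storage=Disk2 volume=DISK018 pool=Migrate" | bconsole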

If a job needs more space than is available, it fills up the disk and then
requests a new volume. So I unmount the volume in Bacula by unmounting the
configured storage device. Then I pull the USB 3 plug, switch off the bay,
replace the disk with another one (e.g. an already labelled disk), switch
the bay back on and plug in the USB connector.
The device node is then created, so a Bacula mount command mounts the
filesystem and the labelled file is available in
/media/baculadisk2. The backup/migration job then continues on
this volume (file) until the backup/migration is finished.

Problem:
I now want to migrate the job back from the two disks to tapes,
so I configured a job which should do that by selecting the jobid.

At first everything looks fine: the job starts and Bacula asks for
the first of the two disks. I mount the requested disk and
the migration starts.
After a couple of hours the job reaches the end of the file
(volume) and asks for the next disk.
I unmount the first and mount the second disk, and the job continues.

However, the migration job ends with an error which looks like it is
caused by a problem produced at the end of
the first disk, because the last block was truncated?
If so, I am surprised that the migration to the disks
worked without problems.

Here is the job output which I received via mail, first the migration
from tape to disk, then back from disk to tape.
Maybe you can explain what the problem is.
Hopefully it is only a one-off random problem and not
a consequence of the truncated last block of the first disk.
In my opinion, as with tapes, the short block should be re-read and then
rewritten to the second disk.




21-Mar 20:23 backup-dir JobId 47703: The following 1 JobId was chosen to be 
migrated: 46802
21-Mar 20:23 backup-dir JobId 47703: Migration using JobId=46802 
Job=nihilnihil_home.2020-01-05_23.50.01_29
21-Mar 20:23 backup-dir JobId 47703: Start Migration JobId 47703, 
Job=Migrate2FileTmpVol.2020-03-21_20.23.31_48
21-Mar 20:23 backup-dir JobId 47703: Using Device "HPUltrium4-2" to read.
21-Mar 20:23 backup-sd JobId 47703: 3307 Issuing autochanger "unload Volume 
LTO40027, Slot 5, Drive 1" command.
21-Mar 20:27 backup-sd JobId 47703: 3304 Issuing autochanger "load Volume 
LTO40026, Slot 9, Drive 1" command.
21-Mar 20:28 backup-sd JobId 47703: 3305 Autochanger "load Volume LTO40026, 
Slot 9, Drive 1", status is OK.
21-Mar 20:28 backup-sd JobId 47703: Ready to read from volume "LTO40026" on 
Tape device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst).
21-Mar 20:28 backup-sd JobId 47703: Forward spacing Volume "LTO40026" to 
addr=19:1457
22-Mar 00:36 backup-sd JobId 47703: End of Volume "LTO40026" at addr=628:5763 
on device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst).
22-Mar 00:37 backup-sd JobId 47703: 3307 Issuing autochanger "unload Volume 
LTO40026, Slot 9, Drive 1" command.
22-Mar 00:38 backup-sd JobId 47703: 3304 Issuing autochanger "load Volume 
LTO40025, Slot 17, Drive 1" command.
22-Mar 00:40 backup-sd JobId 47703: 3305 Autochanger "load Volume LTO40025, 
Slot 17, Drive 1", status is OK.
22-Mar 00:40 backup-sd JobId 47703: Ready to read from volume "LTO40025" on 
Tape device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst).
22-Mar 00:40 backup-sd JobId 47703: Forward spacing Volume "LTO40025" to 
addr=1:11779
22-Mar 05:33 backup-sd JobId 47703: End of Volume "LTO40025" at addr=0:0 on 
device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst).
22-Mar 05:33 backup-sd JobId 47703: 3307 Issuing autochanger "unload Volume 
LTO40025, Slot 17, Drive 1" command.
22-Mar 05:34 backup-sd JobId 47703: 3304 Issuing autochanger "load Volume 
LTO40027, Slot 5, Drive 1" command.
22-Mar 05:36 backup-sd JobId 47703: 3305 Autochanger "load Volume LTO40027, 
Slot 5, Drive 1", status is OK.
22-Mar 05:36 backup-sd JobId 47703: Ready to read from volume "LTO40027" on 
Tape device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst).
22-Mar 05:36 backup-sd JobId 47703: Forward spacing Volume "LTO40027" to 
addr=0:1
22-Mar 10:32 backup-sd JobId 47703: End of Volume "LTO40027" at addr=298:0 on 
device "HP Ultrium 4-2" (/dev/tape/by-id/scsi-HU19145705-nst).
22-Mar 10:32 backup-sd JobId 47703: 

[Bacula-users] Solved: Set tape drive to readonly in bacula?

2020-03-16 Thread Pierre Bernhardt
Am 15.03.20 um 15:53 schrieb Radosław Korzeniewski:
> niedz., 15 mar 2020 o 13:01 Pierre Bernhardt 
>> I did not find a setting, so I want to ask: is it possible to
>> set up a drive so that it is only used for read access and never
>> for write access?
>>
> 
> Yes, absolutely! I think it is available since 7.2 -
> https://www.bacula.org/9.6.x-manuals/en/main/New_Features_in_7_2_0.html#SECTION00824000
Found it, read it and configured it. Thank you.
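
For the record, it is a directive in the Device resource of bacula-sd.conf,
roughly like this (device name is only an example):

  Device {
    Name = HPUltrium4-1
    ...
    Read Only = yes     # drive is then used for reading/restores only
  }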

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Set tape drive to readonly in bacula?

2020-03-15 Thread Pierre Bernhardt
Hello,

I did not find a setting, so I want to ask: is it possible to
set up a drive so that it is only used for read access and never
for write access?

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tape issue: less size than before for LTO-4 tapes

2020-03-15 Thread Pierre Bernhardt
Am 14.03.20 um 04:46 schrieb Pierre Bernhardt:
> Am 13.03.20 um 08:28 schrieb Pierre Bernhardt:
>> Am 10.03.20 um 20:27 schrieb Pierre Bernhardt:
>>> Write failed at block 8758373. stat=-1 ERR=Auf dem Gerät ist kein 
>>> Speicherplatz mehr verfügbar
>>> btape: btape.c:411-0 Volume bytes=565.0 GB. Write rate = 25.08 MB/s
>>> btape: btape.c:612-0 Wrote 1 EOF to "HP Ultrium 4-1" 
>>> (/dev/tape/by-id/scsi-HU1914570A-nst)
>>
>> That's very interesting. On the other drive, which can store ~750-800
>> GiByte (drive 1), pcp shows me ~80 MByte/s for the btape test:
>>
>> # Device r/s w/s  kb_r/s  kb_w/s   r_pct   w_pct   o_pctRs/s 
>>   o_cnt
>> st0 0.00 1259.59   0   812580.00   98.23   98.230.00 
>>0.00
>>
>> So it really looks like there is an issue with drive 0 or its connection
>> path.
>> I will wait for both tests to finish
>> (btape on drive 1 and the backup test on drive 0)
> It really depends on the tape drive. With the same tape in the other
> drive, the btape test writes much more data to the tape:
> 
> Write failed at block 12611722. stat=-1 ERR=Auf dem Gerät ist kein 
> Speicherplatz mehr verfügbar
> btape: btape.c:411-0 Volume bytes=813.6 GB. Write rate = 79.21 MB/s
> btape: btape.c:612-0 Wrote 1 EOF to "HP Ultrium 4-2" 
> (/dev/tape/by-id/scsi-HU19145705-nst)
> 
> The backup test does not look as good:
> 
> # Device r/s w/s  kb_r/s  kb_w/s   r_pct   w_pct   o_pctRs/s  
>  o_cnt
> st1 0.00  259.29   0   167270.00   79.75   79.750.00  
>   0.00
> st1 0.00  259.19   0   167210.00   79.75   79.750.00  
>   0.00
> …
> 
> 16 MByte/s is too little.
> 
> And the backup test on drive 0 shows a stored size of only 475 GByte.
> 
> I would like to check the FC connection for LIP resets, error stats and so
> on. I checked the "files" in /sys/class/fc_host/host?/statistics, but
> everything that could hold a problem value shows only 0x0.
> So my first guess is that there is no real problem on the HBA-drive
> connections.
> 
> I will now try to clean the drive with a fresh cleaning tape and run btape
> again for a test. I do not expect a very different result, but it is worth
> a try.
This did not help. So I will disable the drive for writing, so that only the
other one is used. To check the tapes I will migrate the tapes that were
written with the reduced size to empty tapes, so I get the full capacity
back.

Maybe I can replace the drive later; at the time I bought the library the
second drive was not really needed. Now it is a good spare part for my
backup jobs ;-)

Thank you for finding the issue.

Cheers,
Pierre




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tape issue: less size than before for LTO-4 tapes

2020-03-13 Thread Pierre Bernhardt
Am 13.03.20 um 08:28 schrieb Pierre Bernhardt:
> Am 10.03.20 um 20:27 schrieb Pierre Bernhardt:
>> Write failed at block 8758373. stat=-1 ERR=Auf dem Gerät ist kein 
>> Speicherplatz mehr verfügbar
>> btape: btape.c:411-0 Volume bytes=565.0 GB. Write rate = 25.08 MB/s
>> btape: btape.c:612-0 Wrote 1 EOF to "HP Ultrium 4-1" 
>> (/dev/tape/by-id/scsi-HU1914570A-nst)
> 
> That's very interesting. On the other drive, which can store ~750-800
> GiByte (drive 1), pcp shows me ~80 MByte/s for the btape test:
> 
> # Device r/s w/s  kb_r/s  kb_w/s   r_pct   w_pct   o_pctRs/s  
>  o_cnt
> st0 0.00 1259.59   0   812580.00   98.23   98.230.00  
>   0.00
> 
> So it really looks like there is an issue with drive 0 or its connection
> path.
> I will wait for both tests to finish
> (btape on drive 1 and the backup test on drive 0)
It really depends on the tape drive. With the same tape in the other drive,
the btape test writes much more data to the tape:

Write failed at block 12611722. stat=-1 ERR=Auf dem Gerät ist kein 
Speicherplatz mehr verfügbar
btape: btape.c:411-0 Volume bytes=813.6 GB. Write rate = 79.21 MB/s
btape: btape.c:612-0 Wrote 1 EOF to "HP Ultrium 4-2" 
(/dev/tape/by-id/scsi-HU19145705-nst)

The backup test does not look as good:

# Device r/s w/s  kb_r/s  kb_w/s   r_pct   w_pct   o_pctRs/s   
o_cnt
st1 0.00  259.29   0   167270.00   79.75   79.750.00
0.00
st1 0.00  259.19   0   167210.00   79.75   79.750.00
0.00
…

16 MByte/s is too little.

And the backup test on drive 0 shows a stored size of only 475 GByte.

I would like to check the FC connection for LIP resets, error stats and so
on. I checked the "files" in /sys/class/fc_host/host?/statistics, but
everything that could hold a problem value shows only 0x0.
So my first guess is that there is no real problem on the HBA-drive
connections.

I will now try to clean the drive with a fresh cleaning tape and run btape
again for a test. I do not expect a very different result, but it is worth
a try.

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tape issue: less size than before for LTO-4 tapes

2020-03-13 Thread Pierre Bernhardt
Am 13.03.20 um 15:18 schrieb Sebastian Suchanek:
> Am 10.03.2020 um 09:26 schrieb Pierre Bernhardt:
> 
>> since the beginning of this year a tape issue has hit my backups.
>> My LTO-4 tapes no longer fill to the ~760 GiByte they reached
>> before; only ~510-580 GiByte is stored on the tapes, as is
>> shown by the list volume command.
> 
> Looks like a worn-out drive to me.
Maybe, yes. I think I will disable the drive and use only the second
one.

The good news is that restores are still working without showing errors.
All data is software-encrypted and compressed; if there were any
problem, the restore should fail.

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tape issue: less size than before for LTO-4 tapes

2020-03-13 Thread Pierre Bernhardt
Am 10.03.20 um 20:27 schrieb Pierre Bernhardt:
> Write failed at block 8758373. stat=-1 ERR=Auf dem Gerät ist kein 
> Speicherplatz mehr verfügbar
> btape: btape.c:411-0 Volume bytes=565.0 GB. Write rate = 25.08 MB/s
> btape: btape.c:612-0 Wrote 1 EOF to "HP Ultrium 4-1" 
> (/dev/tape/by-id/scsi-HU1914570A-nst)

That's very interesting. On the other drive, which can store ~750-800
GiByte (drive 1), pcp shows me ~80 MByte/s for the btape test:

# Device r/s w/s  kb_r/s  kb_w/s   r_pct   w_pct   o_pctRs/s   
o_cnt
st0 0.00 1259.59   0   812580.00   98.23   98.230.00
0.00

So it really looks like there is an issue with drive 0 or its connection
path.
I will wait for both tests to finish
(btape on drive 1 and the backup test on drive 0)

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tape issue: less size than before for LTO-4 tapes

2020-03-13 Thread Pierre Bernhardt
Am 10.03.20 um 20:50 schrieb Pierre Bernhardt:
> Am 10.03.20 um 15:13 schrieb Martin Simmons:
>> On Tue, 10 Mar 2020 09:26:20 +0100, Pierre Bernhardt said:
>> Also, you might try using smartctl to get information about error rates from
>> the drive.  Something like:
>>
>> smartctl -a -d scsi -T permissive /dev/nst0
> Oh, that's new to me ;-)
> 
> root@backup:~# smartctl -a -d scsi -T permissive 
> /dev/tape/by-id/scsi-HU1914570A-nst
> smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-8-amd64] (local build)
> Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org
> 
> === START OF INFORMATION SECTION ===
> Vendor:   HP
> Product:  Ultrium 4-SCSI
> Revision: V67B
> Logical Unit id:  0x2001000e11129ae5
> Serial number:HU1914570A
> Device type:  tape
> Transport protocol:   Fibre channel (FCP-2)
> Local Time is:Tue Mar 10 20:33:02 2020 CET
> Temperature Warning:  Disabled or Not Supported
> 
> === START OF READ SMART DATA SECTION ===
> TapeAlert Supported
> TapeAlert: OK
> Current Drive Temperature: 31 C
> Drive Trip Temperature:
> 
> Error counter log:
>Errors Corrected by   Total   Correction Gigabytes
> Total
>ECC  rereads/errors   algorithm  processed
> uncorrected
>fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  
> errors
> read:  00 0 0  0  0.000   
> 0
> write:1552171 1 1   21869758  0.000   
> 0
> 
> Device does not support Self Test logging
> 
The backup on the other drive filled the tape up to 750 GiByte. That's
better than before. So maybe it really is a drive problem.


Here is the smartctl output of drive 1, which looks better:

*list volume=LTO40035
+-++---+-+-+--+--+-+--+---+---+-+--+-+
| mediaid | volumename | volstatus | enabled | volbytes| volfiles | 
volretention | recycle | slot | inchanger | mediatype | voltype | volparts | 
expiresin   |
+-++---+-+-+--+--+-+--+---+---+-+--+-+
| 470 | LTO40035   | Full  |   1 | 812,948,161,536 |  813 |  
473,040,000 |   1 |   10 | 1 | LTO-3 |   2 |0 | 
473,000,399 |
+-++---+-+-+--+--+-+--+---+---+-+--+-+

That is the stored volume size I want on my tapes.

root@backup:~# smartctl -a -d scsi -T permissive 
/dev/tape/by-id/scsi-HU19145705-nst
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-8-amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:   HP
Product:  Ultrium 4-SCSI
Revision: V67B
Logical Unit id:  0x2004000e11129ae5
Serial number:HU19145705
Device type:  tape
Transport protocol:   Fibre channel (FCP-2)
Local Time is:Fri Mar 13 06:37:45 2020 CET
NO tape present in drive
Temperature Warning:  Disabled or Not Supported

=== START OF READ SMART DATA SECTION ===
TapeAlert Supported
TapeAlert: OK
Current Drive Temperature: 32 C
Drive Trip Temperature:

Error counter log:
   Errors Corrected by   Total   Correction Gigabytes
Total
   ECC  rereads/errors   algorithm  processed
uncorrected
   fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  
errors
read: 120 0 0 12  0.000 
  0
write: 582680 0 0  80927  0.000 
  0

Device does not support Self Test logging

I will repeat the btape test on drive 1 and the backup test on drive 0 to
rule out false conclusions.
If everything is as before, the btape test should fill the tape on drive 1
up to 750-800 GiByte, and the backup test should fill the tape on drive 0
only to 400-500 GiByte.

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tape issue: less size than before for LTO-4 tapes

2020-03-10 Thread Pierre Bernhardt
Am 10.03.20 um 15:13 schrieb Martin Simmons:
>>>>>> On Tue, 10 Mar 2020 09:26:20 +0100, Pierre Bernhardt said:
> I suggest checking the bacula log and the syslog to see what it reports when
> the tape is marked as full.
That's a problem. In the past I only sent mails; only after the upgrade in
Feb 2020 did I also add local log files :-(
The mails do not show a relevant message, only that the tape is full and
another one from the Scratch pool was used to continue the backups.
From one of the last tape-full exchanges:

02-Mar 09:06 backup-sd JobId 47481: Writing spooled data to Volume. Despooling 
2,147,487,764 bytes ...
02-Mar 09:06 backup-sd JobId 47481: [SI0202] End of Volume "LTO40029" at 
551:5309 on device "HP Ultrium 4-1" 
(/dev/tape/by-id/scsi-32001000e11129ae5-nst). Write of 64512 bytes got -1.
02-Mar 09:06 backup-sd JobId 47481: Re-read of last block succeeded.
02-Mar 09:06 backup-sd JobId 47481: End of medium on Volume "LTO40029" 
Bytes=548,973,702,144 Blocks=8,509,636 at 02-Mar-2020 09:06.
02-Mar 09:06 backup-sd JobId 47481: 3307 Issuing autochanger "unload Volume 
LTO40029, Slot 20, Drive 0" command.
02-Mar 09:08 backup-dir JobId 47481: Using Volume "LTO40032" from 'Scratch' 
pool.
02-Mar 09:08 backup-sd JobId 47481: 3304 Issuing autochanger "load Volume 
LTO40032, Slot 18, Drive 0" command.
02-Mar 09:09 backup-sd JobId 47481: 3305 Autochanger "load Volume LTO40032, 
Slot 18, Drive 0", status is OK.
02-Mar 09:10 backup-sd JobId 47481: Wrote label to prelabeled Volume "LTO40032" 
on Tape device "HP Ultrium 4-1" (/dev/tape/by-id/scsi-32001000e11129ae5-nst)
02-Mar 09:10 backup-sd JobId 47481: New volume "LTO40032" mounted on device "HP 
Ultrium 4-1" (/dev/tape/by-id/scsi-32001000e11129ae5-nst) at 02-Mar-2020 09:10.
02-Mar 09:14 backup-sd JobId 47481: Despooling elapsed time = 00:03:59, 
Transfer rate = 8.985 M Bytes/second


> It might be a problem with the tape drive.  Can you run the manufacturer's
> diagnostics?
No idea how. It's a Linux box, and so far tapeinfo does not show an error,
nor is an error reported in syslog/dmesg/logs ...


> Also, you might try using smartctl to get information about error rates from
> the drive.  Something like:
> 
> smartctl -a -d scsi -T permissive /dev/nst0
Oh, that's new to me ;-)

root@backup:~# smartctl -a -d scsi -T permissive 
/dev/tape/by-id/scsi-HU1914570A-nst
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-8-amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:   HP
Product:  Ultrium 4-SCSI
Revision: V67B
Logical Unit id:  0x2001000e11129ae5
Serial number:HU1914570A
Device type:  tape
Transport protocol:   Fibre channel (FCP-2)
Local Time is:Tue Mar 10 20:33:02 2020 CET
Temperature Warning:  Disabled or Not Supported

=== START OF READ SMART DATA SECTION ===
TapeAlert Supported
TapeAlert: OK
Current Drive Temperature: 31 C
Drive Trip Temperature:

Error counter log:
   Errors Corrected by   Total   Correction Gigabytes
Total
   ECC  rereads/errors   algorithm  processed
uncorrected
   fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  
errors
read:  00 0 0  0  0.000 
  0
write:1552171 1 1   21869758  0.000 
  0

Device does not support Self Test logging


> 
>> (How can I check which drive index was used before and since
>> 2020? Is there a field which shows me the drive index in the DB?)
> 
> This information should be in the bacula log.
It looks like it was always drive 0.

Cheers,
Pierre






___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tape issue: less size than before for LTO-4 tapes

2020-03-10 Thread Pierre Bernhardt
Am 10.03.20 um 09:26 schrieb Pierre Bernhardt:
> Todo: check a backup where drive 2 is used instead of drive 1.
> Maybe since 2020 the other drive has been used.
> (How can I check which drive index was used before and since
> 2020? Is there a field which shows me the drive index in the DB?)
I first ran a btape rawfill test. But it looks like it only reproduced the
current behaviour, meaning the new tape stored only 565 GiByte. It was
written at a rate of 25 MiByte/s, which I think is too slow for the drive,
so maybe empty areas were inserted.

Write failed at block 8758373. stat=-1 ERR=Auf dem Gerät ist kein Speicherplatz 
mehr verfügbar
btape: btape.c:411-0 Volume bytes=565.0 GB. Write rate = 25.08 MB/s
btape: btape.c:612-0 Wrote 1 EOF to "HP Ultrium 4-1" 
(/dev/tape/by-id/scsi-HU1914570A-nst)

I changed the device node to use the serial number of the tape drive instead
of the LUN id. I ran the btape test on drive 0, with which all backups were
made.
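
For reference, the test was started roughly like this (the config path is an
assumption); rawfill is then entered at the btape prompt:

  btape -c /etc/bacula/bacula-sd.conf /dev/tape/by-id/scsi-HU1914570A-nst
  *rawfill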

I will repeat it with the other drive. At the moment a backup test with a
new tape is running on tape drive 1. pcp shows me a write rate between 30
and 50 MiByte/s, which is more than in the btape test above.

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Tape issue: less size than before for LTO-4 tapes

2020-03-10 Thread Pierre Bernhardt
Am 10.03.20 um 09:26 schrieb Pierre Bernhardt:
> (How can I check which drive index was used before and since
> 2020? Is there a field which shows me the drive index in the DB?)
Ok, I found a field, but it is always filled with 0:

bacula=> select volumename,deviceid,volbytes from media where volumename like 'LTO4%';
 volumename | deviceid |   volbytes
+--+--
 LTO40002   |0 | 812678501376
 LTO40004   |0 | 812714886144
 LTO40009   |0 | 812782107648
 LTO40012   |0 | 812686500864
 LTO40007   |0 | 812761721856
 LTO40016   |0 | 812686758912
 LTO40013   |0 | 812738562048
 LTO40015   |0 | 812673921024
 LTO40006   |0 | 812596248576
 LTO40008   |0 | 812671534080
 LTO40010   |0 | 812681017344
 LTO40014   |0 | 812726369280
 LTO40018   |0 | 812729723904
 LTO40020   |0 | 812725788672
 LTO40019   |0 | 812656180224
 LTO40025   |0 | 628959808512
 LTO40031   |0 | 552112210944
 LTO40003   |0 | 812701596672
 LTO40027   |0 | 629233274880
 LTO40022   |0 | 812717724672
 LTO40023   |0 | 782473273344
 LTO40026   |0 | 628718920704
 LTO40028   |0 | 579429107712
 LTO40011   |0 | 812760173568
 LTO40005   |0 | 812693016576
 LTO40001   |0 | 812703725568
 LTO40017   |0 | 812674050048
 LTO40034   |0 | 475099978752
 LTO40024   |0 | 336473819136
 LTO40032   |0 | 554964544512
 LTO40029   |0 | 548973702144
 LTO40035   |0 |64512
 LTO40033   |0 | 558934032384
(33 rows)

bacula=> select volumename,deviceid from media where deviceid != 0;
 volumename | deviceid
+--
(0 rows)
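
If bacula recorded the drive anywhere, I would expect it in the catalog's Device table (a sketch, assuming the standard Bacula schema; the table may simply be empty here):

bacula=> select deviceid, name from device;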

Cheers,
Pierre



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Tape issue: less size than before for LTO-4 tapes

2020-03-10 Thread Pierre Bernhardt
Hello,

since the beginning of this year a tape issue has hit my backups.
My LTO-4 tapes no longer fill to the ~760 GiByte they reached
before. Only ~510-580 GiByte are stored on the tapes, as shown
by the list volumes command.
The latest volume with >720 GiByte was written during the full
backups on 06.01.2020. All tapes after that date are only filled
with 500-600 GiByte.

| 422 | LTO30097 | Full | 1 | 406,660,746,240 | 406 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-08-08 05:30:51 | 454,457,559 |
| 424 | LTO30099 | Full | 1 | 406,697,131,008 | 409 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-08-05 10:38:22 | 454,216,810 |
| 425 | LTO30100 | Full | 1 | 406,700,292,096 | 406 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-08-07 10:33:05 | 454,389,293 |
| 426 | LTO30101 | Full | 1 | 406,729,451,520 | 406 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-08-05 17:08:29 | 454,240,217 |
| 427 | LTO40001 | Full | 1 | 812,703,725,568 | 813 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2018-11-05 10:51:08 | 430,630,376 |
| 428 | LTO40002 | Full | 1 | 812,678,501,376 | 812 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2018-11-06 10:04:59 | 430,714,007 |
| 429 | LTO40003 | Full | 1 | 812,701,596,672 | 813 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2018-10-08 20:22:19 | 428,245,447 |
| 431 | LTO40004 | Full | 1 | 812,714,886,144 | 817 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2018-12-03 12:22:02 | 433,055,030 |
| 432 | LTO40005 | Full | 1 | 812,693,016,576 | 812 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2018-12-04 10:28:39 | 433,134,627 |
| 433 | LTO40006 | Full | 1 | 812,596,248,576 | 815 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-01-07 04:01:50 | 436,049,018 |
| 434 | LTO40007 | Full | 1 | 812,761,721,856 | 812 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-01-07 18:38:57 | 436,101,645 |
| 435 | LTO40016 | Full | 1 | 812,686,758,912 | 813 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-02-04 04:23:58 | 438,469,546 |
| 436 | LTO40008 | Full | 1 | 812,671,534,080 | 812 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-03-04 04:18:59 | 440,888,447 |
| 437 | LTO40009 | Full | 1 | 812,782,107,648 | 812 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-03-04 19:16:49 | 440,942,317 |
| 438 | LTO40010 | Full | 1 | 812,681,017,344 | 813 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-04-08 04:15:13 | 443,912,221 |
| 439 | LTO40011 | Full | 1 | 812,760,173,568 | 813 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-02-04 18:40:48 | 438,520,956 |
| 441 | LTO40014 | Full | 1 | 812,726,369,280 | 812 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-04-08 18:14:51 | 443,962,599 |
| 442 | LTO40012 | Full | 1 | 812,686,500,864 | 813 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-05-06 04:26:15 | 446,332,083 |
| 443 | LTO40013 | Full | 1 | 812,738,562,048 | 812 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-05-06 18:28:43 | 446,382,631 |
| 444 | LTO40015 | Full | 1 | 812,673,921,024 | 812 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-06-03 19:18:29 | 448,804,817 |
| 445 | LTO40017 | Full | 1 | 812,674,050,048 | 812 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-06-04 22:02:59 | 448,901,087 |
| 446 | LTO40018 | Full | 1 | 812,729,723,904 | 813 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-06-03 18:26:33 | 448,801,701 |
| 447 | LTO40019 | Full | 1 | 812,656,180,224 | 813 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-07-09 19:34:41 | 451,916,189 |
| 448 | LTO40020 | Full | 1 | 812,725,788,672 | 813 | 473,040,000 | 1 | 0 | 0 | LTO-3 | 2 | 0 | 2019-07-08 17:53:41 | 451,823,729 |
| 449

[Bacula-users] remote bacula-tray-monitor vs. bconsole or bat

2020-02-25 Thread Pierre Bernhardt
Hello,

I use on client and server the debian buster packages:

root@server:/etc/bacula# dpkg -l bacula\* |grep ^ii
ii  bacula-bscan 9.4.2-2  amd64network 
backup service - bscan tool
ii  bacula-common9.4.2-2  amd64network 
backup service - common support files
ii  bacula-common-pgsql  9.4.2-2  amd64network 
backup service - PostgreSQL common files
ii  bacula-console   9.4.2-2  amd64network 
backup service - text console
ii  bacula-console-qt9.4.2-2  amd64network 
backup service - Bacula Administration Tool
ii  bacula-director  9.4.2-2  amd64network 
backup service - Director daemon
ii  bacula-director-pgsql9.4.2-2  all  network 
backup service - PostgreSQL storage for Director
ii  bacula-fd9.4.2-2  amd64network 
backup service - file daemon
ii  bacula-sd9.4.2-2  amd64network 
backup service - storage daemon
ii  bacula-server9.4.2-2  all  network 
backup service - server metapackage

user@workstation:~$ dpkg -l bacula\* |grep ^ii
ii  bacula-common 9.4.2-2  amd64network backup service - 
common support files
ii  bacula-console9.4.2-2  amd64network backup service - 
text console
ii  bacula-console-qt 9.4.2-2  amd64network backup service - 
Bacula Administration Tool
ii  bacula-fd 9.4.2-2  amd64network backup service - 
file daemon
ii  bacula-tray-monitor   9.4.2-2  amd64network backup service - 
Bacula Tray Monitor


I have a working connection via bat + bconsole from my workstation to my backup server,
which runs well with this configuration (identical for bat and bconsole).
(Only running as root for the moment, but working.)

By the way, it looks like bacula-tray-monitor ignores the default setup in /etc/bacula
and starts with an empty configuration scheme. So each time I also made my changes
in ~/.bacula-tray-monitor.

To avoid access issues I first tried to run bacula-tray-monitor as root.
(Ugly and bad, but this will be changed after I get a running configuration.)

In /etc/bacula
bat.conf + bconsole.conf
#
# Bacula Administration Tool (bat) configuration file
#

Director {
  Name = backup-dir
  DIRport = 9101
  address = FQDN-of-Backupserver
  Password = "mysecretdirectorpassword"

  # For client connection to server port
  # TLS configuration
  TLS Enable = yes
  TLS Require = yes
  @/etc/bacula/tls_client.conf
}

tls_client.conf
  TLS CA Certificate File = /etc/bacula/ssl/certs/ca2.crt.pem
  TLS Certificate = /etc/bacula/ssl/certs/bac...@fqdn-of-workstation.crt.pem
  TLS Key = /etc/bacula/ssl/private/HN-of-Workstation.key.pem

I tried to use the same configuration file for bacula-tray-monitor by
simply copying bat.conf to bacula-tray-monitor.conf in /etc/bacula, but had some issues:

DIRport is not allowed, so I renamed DIRport to Port; later I commented the
line out entirely because it is not needed. (Port instead of DIRport deviates
from the documented directive name.)

A Monitor resource is needed, so I re-added the Monitor section by copying
it from the server, but then a "password is not allowed" message appeared. So I used
the lines from the original configuration file:

Monitor {
  Name = backup-mon
  RefreshInterval = 30 seconds
}


I also got an error message because TLS Require is not allowed, so I commented that
line out (is this less secure?).

It looks like @/etc/bacula/tls_client.conf is not used, so I added its lines to the
Director section and commented out the @-line.
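
The Director resource I ended up with in ~/.bacula-tray-monitor looks roughly like this (a sketch; the TLS lines are the ones from tls_client.conf above, and the names are the placeholders used in this mail):

Director {
  Name = backup-dir
  Address = FQDN-of-Backupserver
  Password = "mysecretdirectorpassword"
  TLS Enable = yes
  TLS CA Certificate File = /etc/bacula/ssl/certs/ca2.crt.pem
  TLS Certificate = /etc/bacula/ssl/certs/bac...@fqdn-of-workstation.crt.pem
  TLS Key = /etc/bacula/ssl/private/HN-of-Workstation.key.pem
}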

Now everything starts without errors, but in debug mode bacula-tray-monitor
now gives me an "Authentication error":

...
tray-monitor: parse_conf.c:1032-0 parse state=0 pass=2 got token=T_EOL
Monitor: name=backup-mon
Director: name=backup-dir address=FQDN-of-Backupserver port=9101
tray-monitor: parse_conf.c:1149-0 Leave parse_config()
tray-monitor: tray-monitor.cpp:179-0 Do not start the scheduler
tray-monitor: dirstatus.cpp:34-0 doUpdate(5568167106a8)
tray-monitor: task.cpp:232-0 Trying to connect to DIR
tray-monitor: bsockcore.c:299-0 Current My.BUSRV.IP.Adress:9101 All 
My.BUSRV.IP.Adress:9101
tray-monitor: bsockcore.c:228-0 who=Director daemon host=FQDN-of-Backupserver 
port=9101
tray-monitor: bsockcore.c:411-0 OK connected to server  Director daemon 
FQDN-of-Backupserver:9101.
tray-monitor: task.cpp:236-0 Connect done!
tray-monitor: watchdog.c:197-0 Registered watchdog 7fe7d40041c8, interval 300 
one shot
tray-monitor: btimers.c:177-0 Start bsock timer 7fe7d4004298 tid=7fe836df7700 
for 300 secs at 1582621658
tray-monitor: cram-md5.c:133-0 cram-get received: auth cram-md5 
<10821x.x21608@backup-dir> ssl=0
tray-monitor: cram-md5.c:157-0 sending resp to challenge: f//6jW/xxx/xxx/x/A
tray-monitor: 

Re: [Bacula-users] bsockcore issue.

2019-12-15 Thread Pierre Bernhardt
Hello,

Am 15.12.19 um 12:50 schrieb Erik P. Olsen:
> Well, I got nothing out of tcpdump:
> 
> dropped privs to tcpdump
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on wlp4s0, link-type EN10MB (Ethernet), capture size 262144 bytes
> ^C
> 0 packets captured
> 0 packets received by filter
> 0 packets dropped by kernel

OK, tcpdump on wlp4s0 will not see localhost traffic; that traffic goes over
the lo interface. I suggest you try to use bconsole from a different system.
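
If you want to watch the local connection anyway, capture on the loopback interface instead, e.g. (a sketch):

sudo tcpdump -ni lo port 9101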

> I thought it would be easy because bacula-dir and bconsole is on the same 
> system. I have
> also added port 3306 to mysql service to no avail. Status of mariadb is as 
> follows, but I
> don't know if it's all right or not:
> 
> [erik@Erik-PC ~]$ systemctl status mariadb.service
> ● mariadb.service - MariaDB 10.3 database server
>Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor 
> preset: disabled)
>   Drop-In: /usr/lib/systemd/system/mariadb.service.d
>└─tokudb.conf
>Active: active (running) since Sun 2019-12-15 12:41:14 CET; 50s ago
>  Docs: man:mysqld(8)
>https://mariadb.com/kb/en/library/systemd/
>   Process: 1184 ExecStartPre=/usr/libexec/mysql-check-socket (code=exited, 
> status=0/SUCCESS)
>   Process: 1240 ExecStartPre=/usr/libexec/mysql-prepare-db-dir 
> mariadb.service (code=exited, status=0/SUCCESS)
>   Process: 1672 ExecStartPost=/usr/libexec/mysql-check-upgrade (code=exited, 
> status=0/SUCCESS)
>  Main PID: 1512 (mysqld)
>Status: "Taking your SQL requests now..."
> Tasks: 71 (limit: 4915)
>Memory: 186.1M
>CGroup: /system.slice/mariadb.service
>└─1512 /usr/libexec/mysqld --basedir=/usr
> 
> dec 15 12:41:13 Erik-PC.epolan.dk systemd[1]: Starting MariaDB 10.3 database 
> server...
> dec 15 12:41:13 Erik-PC.epolan.dk mysql-check-socket[1184]: Socket file 
> /var/lib/mysql/mysql.sock exists.
> dec 15 12:41:13 Erik-PC.epolan.dk mysql-check-socket[1184]: No process is 
> using /var/lib/mysql/mysql.sock, which means it is a garbage, so it will be 
> removed automatically.
> dec 15 12:41:13 Erik-PC.epolan.dk mysql-prepare-db-dir[1240]: Database 
> MariaDB is probably initialized in /var/lib/mysql already, nothing is done.
> dec 15 12:41:13 Erik-PC.epolan.dk mysql-prepare-db-dir[1240]: If this is not 
> the case, make sure the /var/lib/mysql is empty before running 
> mysql-prepare-db-dir.
> dec 15 12:41:13 Erik-PC.epolan.dk mysqld[1512]: 2019-12-15 12:41:13 0 [Note] 
> /usr/libexec/mysqld (mysqld 10.3.20-MariaDB) starting as process 1512 ...
> dec 15 12:41:13 Erik-PC.epolan.dk mysqld[1512]: 2019-12-15 12:41:13 0 [ERROR] 
> WSREP: rsync SST method requires wsrep_cluster_address to be configured on 
> startup.
> dec 15 12:41:14 Erik-PC.epolan.dk systemd[1]: Started MariaDB 10.3 database 
> server.
> [erik@Erik-PC ~]$ 
> 

Is a connection to MariaDB with the bacula user and password possible, and can
you directly use the tables of the bacula database, e.g.

select * from pool;

?
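
For example from a shell (a sketch; it assumes the database is named bacula and uses the local socket):

mysql -u bacula -p bacula -e 'select * from pool;'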

Do you use security such as TLS for the communication?

Cheers,



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Question about bacula

2019-12-13 Thread Pierre Bernhardt
Hello,

I think what you mean is a bare metal recovery procedure.
This is generally possible, but it needs some special preparations and instructions.
It is not a fully out-of-the-box recovery procedure.

It depends on how thoroughly you back up your servers.

I've already written a longer draft article, but I think
this is not the time to publish it here. I don't want to make
people read boring and unfinished alpha-release stuff :-)

Cheers,
Pierre

Am 12.12.19 um 16:49 schrieb Gregor Burck:
> Hi,
> 
> I've already a running system with bacula 9.4.4 and baculum.
> My main question is: could I do a disaster recovery of my Windows and Linux
> servers?
> 
> It seems to me that bacula only does file backups?
> 
> I suspect I'd need more than a veeam-like thing, but then I'd have to use
> Bacula Enterprise, which supports things like hypervisors, SQL and Exchange and other features?
> 
> At the moment I have a proxmox cluster and use a mix of BackupAssist and the
> proxmox own backup; I want to replace this solution with a centralized
> backup server.
> 
> Bye
> 
> Gregor
> 
> 
> 
> 
> 
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
> 




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bsockcore issue.

2019-12-05 Thread Pierre Bernhardt
Am 05.12.19 um 13:59 schrieb Erik P. Olsen:
> Yes, ports 9101-9103 are all open.
> Do you have root access, e.g. with sudo?
Check on director server:

Check that director is really running:

pgrep -lf bacula-dir
8473 bacula-dir

If it is running, port 9101 should be open,
which you can check with netstat:

netstat -an |grep LISTEN|grep 910
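
On a working director this prints something like the following (a sketch; the address depends on your setup):

tcp        0      0 0.0.0.0:9101            0.0.0.0:*               LISTEN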

You can verify with tcpdump that a connection
attempt is actually being made:

sudo tcpdump -ni <interface> port 9101

There should be something like this shown if you are starting a bconsole
from another machine:

sudo tcpdump -ni eth0 port 9101
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
root@backup:/media# tcpdump -ni eth0 port 9101
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:20:15.645897 IP 192.168.2.202.43984 > 192.168.2.254.9101: Flags [S], seq 
157316563, win 29200, options [mss 1460,sackOK,TS val 362236160 ecr 
0,nop,wscale 7], length 0
14:20:15.645931 IP 192.168.2.254.9101 > 192.168.2.202.43984: Flags [S.], seq 
1935232238, ack 157316564, win 28960, options [mss 1460,sackOK,TS val 
1402703476 ecr 362236160,nop,wscale 7], length 0
14:20:15.646116 IP 192.168.2.202.43984 > 192.168.2.254.9101: Flags [.], ack 1, 
win 229, options [nop,nop,TS val 362236160 ecr 1402703476], length 0
14:20:15.646547 IP 192.168.2.202.43984 > 192.168.2.254.9101: Flags [P.], seq 
1:35, ack 1, win 229, options [nop,nop,TS val 362236160 ecr 1402703476], length 
34
14:20:15.646598 IP 192.168.2.254.9101 > 192.168.2.202.43984: Flags [.], ack 35, 
win 227, options [nop,nop,TS val 1402703477 ecr 362236160], length 0
14:20:15.646639 IP 192.168.2.254.9101 > 192.168.2.202.43984: Flags [P.], seq 
1:60, ack 35, win 227, options [nop,nop,TS val 1402703477 ecr 362236160], 
length 59
14:20:15.647063 IP 192.168.2.202.43984 > 192.168.2.254.9101: Flags [.], ack 60, 
win 229, options [nop,nop,TS val 362236160 ecr 1402703477], length 0
14:20:15.647069 IP 192.168.2.202.43984 > 192.168.2.254.9101: Flags [P.], seq 
35:62, ack 60, win 229, options [nop,nop,TS val 362236160 ecr 1402703477], 
length 27
14:20:15.647114 IP 192.168.2.254.9101 > 192.168.2.202.43984: Flags [P.], seq 
60:77, ack 62, win 227, options [nop,nop,TS val 1402703477 ecr 362236160], 
length 17
14:20:15.647526 IP 192.168.2.202.43984 > 192.168.2.254.9101: Flags [P.], seq 
62:119, ack 77, win 229, options [nop,nop,TS val 362236160 ecr 1402703477], 
length 57
14:20:15.647567 IP 192.168.2.254.9101 > 192.168.2.202.43984: Flags [P.], seq 
77:104, ack 119, win 227, options [nop,nop,TS val 1402703477 ecr 362236160], 
length 27
14:20:15.647782 IP 192.168.2.202.43984 > 192.168.2.254.9101: Flags [P.], seq 
119:136, ack 104, win 229, options [nop,nop,TS val 362236160 ecr 1402703477], 
length 17
14:20:15.690505 IP 192.168.2.254.9101 > 192.168.2.202.43984: Flags [.], ack 
136, win 227, options [nop,nop,TS val 1402703488 ecr 362236160], length 0
14:20:15.690896 IP 192.168.2.202.43984 > 192.168.2.254.9101: Flags [P.], seq 
136:653, ack 104, win 229, options [nop,nop,TS val 362236171 ecr 1402703488], 
length 517
14:20:15.690907 IP 192.168.2.254.9101 > 192.168.2.202.43984: Flags [.], ack 
653, win 235, options [nop,nop,TS val 1402703488 ecr 362236171], length 0
14:20:15.691066 IP 192.168.2.254.9101 > 192.168.2.202.43984: Flags [P.], seq 
104:4200, ack 653, win 235, options [nop,nop,TS val 1402703488 ecr 362236171], 
length 4096
14:20:15.691947 IP 192.168.2.202.43984 > 192.168.2.254.9101: Flags [.], ack 
4200, win 293, options [nop,nop,TS val 362236171 ecr 1402703488], length 0
14:20:15.691955 IP 192.168.2.254.9101 > 192.168.2.202.43984: Flags [P.], seq 
4200:4606, ack 653, win 235, options [nop,nop,TS val 1402703488 ecr 362236171], 
length 406
14:20:15.692868 IP 192.168.2.202.43984 > 192.168.2.254.9101: Flags [P.], seq 
653:660, ack 4606, win 315, options [nop,nop,TS val 362236171 ecr 1402703488], 
length 7
14:20:15.693863 IP 192.168.2.202.43984 > 192.168.2.254.9101: Flags [R.], seq 
660, ack 4606, win 315, options [nop,nop,TS val 362236172 ecr 1402703488], 
length 0

The first line is the connection request from bconsole, the second line the
answer from the director.
You may also see other traffic which is not relevant; the relevant packets
appear exactly at the time bconsole is executed.

Cheers,




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Multi-cores compression

2012-03-29 Thread Pierre Bernhardt
Am 05.03.2012 15:28, schrieb Kleber l:
 2012/3/5 Gael Guilmin gael.guil...@pdgm.com
 On 05/03/12 14:00, Gael Guilmin wrote:
 I'd like to know if there is a way to allow the use of multi-cores
 during a backup and especially for the compression of the data?

 Why do you want compression?
 No, you dont need software compression to save space on LTO drive.
 LTO drives are hardware compression enabled.
 You will need software compression only for file based volumes.
And how should the tape drive compress PKI-encrypted files?

I'm sorry, but it's true: software compression is needed to save space,
because the client encrypts the files before they are sent, and encrypted
data no longer compresses in the drive. My systems all have different PKI
certificates, so only the client can decrypt the files. So I'm interested
in an answer, too.
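
In the FileSet that means enabling compression in the Options block (a minimal sketch; names and paths are placeholders, and as far as I know the FD compresses before it encrypts):

FileSet {
  Name = "encrypted-clients"
  Include {
    Options {
      signature = MD5
      compression = GZIP    # software compression, applied on the client
    }
    File = /etc
  }
}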

Cheers...
Pierre


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] How to test restore without really writing files/dirs to a storage

2011-12-06 Thread Pierre Bernhardt
Hello,

I want to test the stored backups by reading them from the
media, but without writing anything to a filesystem like a normal
scheduled restore job would. Is this possible?

Cheers...
Pierre Bernhardt



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Data spooling for migration jobs?

2011-12-01 Thread Pierre Bernhardt
Am 01.12.2011 08:25, schrieb James Harper:

 Is it possible to have an Spool Data directive for migrating jobs?

 I don't think you can do that. You could migrate to fast disk first then
 to the fast tape. Not as efficient as spooling but if you have lots of
 jobs it wouldn't be that bad.
All is fine. I checked, and spooling data is possible in migration jobs.
That's great and will save my tape drive from frequent rewinds.
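
For the record, the relevant part of the job now looks roughly like this (a sketch; names are placeholders):

Job {
  Name = "migrate-old-to-new"
  Type = Migrate
  Pool = OldDrivePool          # source pool; its Next Pool points at the new drive's pool
  Selection Type = Volume
  Selection Pattern = ".*"
  Spool Data = yes             # spool to disk before writing the new tape
}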

Cheers...
Pierre Bernhardt



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Data spooling for migration jobs?

2011-11-30 Thread Pierre Bernhardt
Hello,

I'm migrating my tapes from an older, slow drive to a newer one.
I noticed many stops of the new drive during the migration.

I've not set a Spool Data directive in my migration job configuration.

Is it possible to have a Spool Data directive for migration jobs?

I'll check it myself once the running job has finished, which will take
a while, but it would be fine if anybody could give me an answer
in the meantime.

Cheers...
Pierre


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] How to prune/delete job entries from bacula db

2011-11-04 Thread Pierre Bernhardt
Am 09.01.2010 12:31, schrieb Pierre Bernhardt:
 
 PS: I could not find the database model in the documentation but it is 
 possible
 that I've searched the wrong one.
Found them here:
http://www.bacula.org/manuals/en/developers/developers/Database_Tables.html

Cheers...
Pierre


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Possible Init Catalog will not delete old entries?

2011-10-19 Thread Pierre Bernhardt
Hello,

the history is that I upgraded from 2.4.x to 5.0.1 step by step, incl. the
database.
On 2.4.x I had run the InitCatalog job for different FDs a few times.

After the upgrade I wanted to run it on a schedule, so I first ran an
InitCatalog and directly afterwards a job with level Catalog.

The problem now is that the Catalog job reports files which must have been
put into the catalog by an InitCatalog run a long time ago.

Here are some messages from the Catalog job:
...
19-Oct 00:43 backup-dir JobId 9426: Warning: The following files are in the 
Catalog but not on disk:
19-Oct 00:43 backup-dir JobId 9426:   /var/lib/dpkg/info/libcurl3.postinst
19-Oct 00:43 backup-dir JobId 9426:   /var/cache/man/pt/index.db
19-Oct 00:43 backup-dir JobId 9426:   /var/cache/man/fr/index.db

These files are really not on the disks, neither at the InitCatalog run nor
at the Catalog run.

So back to my question: is it possible that there is a problem with the
InitCatalog level in bacula, such that it does not delete old files from the
database during an InitCatalog job?

How can I fix this, e.g. manually first? I think it should be enough to clean
up something in the database as a workaround before I execute the next
InitCatalog job, shouldn't it?

The Job and FileSet definitions for the job are the same. The level is
overridden by manual modification or by the schedule.
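
For context, the verify job has this shape (a sketch; names are placeholders):

Job {
  Name = "verify-client1"
  Type = Verify
  Level = Catalog            # manually overridden to InitCatalog for the first run
  Client = client1-fd
  FileSet = "client1-files"
}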

Cheers
Pierre


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] btape test append ejects the cartridge from tape

2011-09-15 Thread Pierre Bernhardt
Hello,

I've installed a new backup system on the same hardware. The old system
was lenny with a bacula 2.x release that worked well with this drive.

After installing, I tried to test all the tape drives. For the DLT-8000
tape drive in my library no problem occurred and the test
ended successfully.

But for my SuperDLT1 internal tape drive, when I start the test the cartridge
is ejected during the Append files test, and then I get messages that the test
failed with a, tata, No medium found error.

I googled the problem and found one thread from 2007, but no real solution
is shown there, nor did the hint to set Offline On Unmount
to no help, which I had already done.


root@backup:/etc/bacula# btape -v -d 99 -c bacula-sd.conf 
/dev/tape/by-path/xen-pci-0-pci-\:00\:01.0-scsi-0\:0\:1\:0-nst
Tape block granularity is 1024 bytes.
btape: stored_conf.c:698-0 Inserting director res: backup-mon
btape: stored_conf.c:698-0 Inserting device res: SDLT-1
btape: stored_conf.c:698-0 Inserting device res: DEC TZ89
btape: stored_conf.c:698-0 Inserting device res: QUANTUM DLT8000
btape: butil.c:284 Using device: 
/dev/tape/by-path/xen-pci-0-pci-:00:01.0-scsi-0:0:1:0-nst for writing.
btape: btape.c:476 open device SDLT-1 
(/dev/tape/by-path/xen-pci-0-pci-:00:01.0-scsi-0:0:1:0-nst): OK
*test

=== Write, rewind, and re-read test ===

I'm going to write 10000 records and an EOF
then write 10000 records and an EOF, then rewind,
and re-read the data to verify that it is correct.

This is an *essential* feature ...

btape: btape.c:1148 Wrote 10000 blocks of 64412 bytes.
btape: btape.c:608 Wrote 1 EOF to SDLT-1 
(/dev/tape/by-path/xen-pci-0-pci-:00:01.0-scsi-0:0:1:0-nst)
btape: btape.c:1164 Wrote 10000 blocks of 64412 bytes.
btape: btape.c:608 Wrote 1 EOF to SDLT-1 
(/dev/tape/by-path/xen-pci-0-pci-:00:01.0-scsi-0:0:1:0-nst)
btape: btape.c:1206 Rewind OK.
10000 blocks re-read correctly.
Got EOF on tape.
10000 blocks re-read correctly.
=== Test Succeeded. End Write, rewind, and re-read test ===

btape: btape.c:1274 Block position test
btape: btape.c:1286 Rewind OK.
Reposition to file:block 0:4
Block 5 re-read correctly.
Reposition to file:block 0:200
Block 201 re-read correctly.
Reposition to file:block 0:9999
Block 10000 re-read correctly.
Reposition to file:block 1:0
Block 10001 re-read correctly.
Reposition to file:block 1:600
Block 10601 re-read correctly.
Reposition to file:block 1:9999
Block 20000 re-read correctly.
=== Test Succeeded. End Write, rewind, and re-read test ===


=== Append files test ===

This test is essential to Bacula.

I'm going to write one record  in file 0,
   two records in file 1,
 and three records in file 2

btape: btape.c:578 Rewound SDLT-1 
(/dev/tape/by-path/xen-pci-0-pci-:00:01.0-scsi-0:0:1:0-nst)
btape: btape.c:1905 Wrote one record of 64412 bytes.
btape: btape.c:1907 Wrote block to device.
btape: btape.c:608 Wrote 1 EOF to SDLT-1 
(/dev/tape/by-path/xen-pci-0-pci-:00:01.0-scsi-0:0:1:0-nst)
btape: btape.c:1905 Wrote one record of 64412 bytes.
btape: btape.c:1907 Wrote block to device.
btape: btape.c:1905 Wrote one record of 64412 bytes.
btape: btape.c:1907 Wrote block to device.
btape: btape.c:608 Wrote 1 EOF to SDLT-1 
(/dev/tape/by-path/xen-pci-0-pci-:00:01.0-scsi-0:0:1:0-nst)
btape: btape.c:1905 Wrote one record of 64412 bytes.
btape: btape.c:1907 Wrote block to device.
btape: btape.c:1905 Wrote one record of 64412 bytes.
btape: btape.c:1907 Wrote block to device.
btape: btape.c:1905 Wrote one record of 64412 bytes.
btape: btape.c:1907 Wrote block to device.
btape: btape.c:608 Wrote 1 EOF to SDLT-1 
(/dev/tape/by-path/xen-pci-0-pci-:00:01.0-scsi-0:0:1:0-nst)
15-Sep 09:48 btape: Fatal Error at btape.c:472 because:
dev open failed: dev.c:491 Unable to open device SDLT-1 
(/dev/tape/by-path/xen-pci-0-pci-:00:01.0-scsi-0:0:1:0-nst): ERR=No medium 
found


Append test failed. Attempting again.
Setting Hardware End of Medium = no
and Fast Forward Space File = no
and retrying append test.



=== Append files test ===

This test is essential to Bacula.

I'm going to write one record  in file 0,
   two records in file 1,
 and three records in file 2

15-Sep 09:48 btape: ABORTING due to ERROR in dev.c:782
dev.c:781 Bad call to rewind. Device SDLT-1 
(/dev/tape/by-path/xen-pci-0-pci-:00:01.0-scsi-0:0:1:0-nst) not open
Bacula interrupted by signal 11: Segmentation violation
Kaboom! btape, btape got signal 11 - Segmentation violation. Attempting 
traceback.
Kaboom! exepath=/etc/bacula
Calling: /etc/bacula/btraceback /etc/bacula/btape 11319 /tmp
execv: /etc/bacula/btraceback failed: ERR=No such file or directory
It looks like the traceback worked ...
Dumping: /tmp/btape.11319.bactrace
btape: lockmgr.c:928 lockmgr disabled

The tape drive shows no error with tapeinfo:

root@backup:/etc/bacula# tapeinfo -f /dev/sg3
Product Type: Tape Drive
Vendor ID: 'COMPAQ  '
Product ID: 'SuperDLT1 

[Bacula-users] Maybe fixed: btape test append ejects the cartridge from tape

2011-09-15 Thread Pierre Bernhardt
Am 15.09.2011 10:32, schrieb Pierre Bernhardt:
Hello again,

 But for my SuperDLT1 internal tape drive if I start the test the cartrige
 will ejects at Append files test and then I got messages the test has
 been failed with an, tata, No medium found error.
It looks like the unmount command is executed at that point, possibly
after the rewind. I know that mt commands wait for access to the
device, so it is possible that the eject takes effect much later
than when it was started.

 
 Device {
   Name = SDLT-1
...
   Offline On Unmount = No
   Requires Mount = Yes
   Mount Command = mt -f %a load
   Unmount Command = mt -f %a eject
I've commented out the Unmount Command and now it works without ejecting the cartridge.

For this drive it is normally no problem, after a bconsole umount, to go
to the drive and eject the cartridge manually.

Cheers...
Pierre Bernhardt




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Maybe fixed: btape test append ejects the cartridge from tape

2011-09-15 Thread Pierre Bernhardt
Am 15.09.2011 13:11, schrieb Martin Simmons:
 On Thu, 15 Sep 2011 12:10:54 +0200, Pierre Bernhardt said:

 Am 15.09.2011 10:32, schrieb Pierre Bernhardt:
Hi,

 Does the mt load command pull the tape back into the drive?  Bacula expects
 the Mount Command to mount the tape.
It's more like an offline command. The old DLT tape drives have no motor to
eject the cartridge, so normally you must pull a door to eject the cartridge
manually. In libraries the robot does that with its picker arm, or via an
added motor at the drive. So if I send an eject command to the drive,
the drive unloads the tape into the cartridge, but the cartridge itself
is pulled out by the picker arm.
So if the robot does not pull out the cartridge, the tape can be reloaded
with an mt load command.
For this drive I need the eject command before mtx can unload the cartridge,
because the library does not send an eject command to the drive. Only after
the tape has been wound back into the cartridge can the robot open the
drive door and pull out the cartridge.

I'm not sure how I did this in the past. I should look at the mtx-changer
script, because it is possible I did this with a modified script.

MfG...
Pierre


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] bscan on postgresql database

2010-08-11 Thread Pierre Bernhardt
Hello,

is it possible to connect bscan to a PostgreSQL database? I am trying to scan
my tapes, but bscan cannot find bacula.db, which tells me that bscan does not
even try to connect to the PostgreSQL db I specified on the command line:

r...@backup:/etc/bacula# bscan -vv -d 99 -c bacula-sd.conf -n bacula -u bacula 
-P Mrdu9E4rlMO4lQ -h localhost:5433 -s -m -V 
DLT-IV-034\|DLT-IV-035\|DLT-IV-036\|DLT-IV-037\|DLT-IV-038\|DLT-IV-039\|DLT-IV-040
 /dev/nst2m
bscan: stored_conf.c:675-0 Inserting director res: backup-mon
bscan: stored_conf.c:675-0 Inserting device res: SDLT-1
bscan: stored_conf.c:675-0 Inserting device res: DEC TZ89
bscan: stored_conf.c:675-0 Inserting device res: QUANTUM DLT8000
bscan: butil.c:282 Using device: /dev/nst2m for reading.
bscan: acquire.c:107-0 MediaType dcr= dev=DLT IV
11-Aug 16:28 bscan JobId 0: Invalid slot=0 defined in catalog for Volume 
DLT-IV-034 on QUANTUM DLT8000 (/dev/nst2m). Manual load may be required.
11-Aug 16:28 bscan JobId 0: 3301 Issuing autochanger loaded? drive 1 command.
11-Aug 16:28 bscan JobId 0: 3302 Autochanger loaded? drive 1, result is Slot 
7.
bscan: acquire.c:209-0 opened dev QUANTUM DLT8000 (/dev/nst2m) OK
bscan: acquire.c:212-0 calling read-vol-label
bscan: reserve.c:313-0 jid=0 reserve_volume DLT-IV-034
bscan: reserve.c:238-0 jid=0 new Vol=DLT-IV-034 at 80da140 dev=QUANTUM 
DLT8000 (/dev/nst2m)
bscan: reserve.c:181-0 jid=0 List from end new volume: DLT-IV-034 at 80da140 on 
device QUANTUM DLT8000 (/dev/nst2m)

Volume Label:
Id: Bacula 1.0 immortal
VerNo : 11
VolName   : DLT-IV-034
PrevVolName   :
VolFile   : 0
LabelType : VOL_LABEL
LabelSize : 162
PoolName  : Scratch
MediaType : DLT IV
PoolType  : Backup
HostName  : backup
Date label written: 25-Mai-2010 23:58
11-Aug 16:28 bscan JobId 0: Ready to read from volume DLT-IV-034 on device 
QUANTUM DLT8000 (/dev/nst2m).
11-Aug 16:28 bscan: ERROR TERMINATION at bscan.c:284
sqlite.c:177 Database /var/lib/bacula/bacula.db does not exist, please create 
it.

I varied the -h, -P, -u and -n options, but nothing helps. Any idea?
I know the release is already deprecated, but at the moment it works, and an
upgrade needs some time that I don't have right now.

Cheers..
Pierre


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bscan on postgresql database

2010-08-11 Thread Pierre Bernhardt
Am 11.08.2010 17:22, schrieb Bruno Friedmann:
 On 08/11/2010 04:36 PM, Pierre Bernhardt wrote:
 Hello,

 is it possible to connect bscan to a postgresql database? I try to scan my 
 tapes
 but bscan could not find the bacuala.db which says to me, that bscan do not 
 try
 to connect to the postgresql db I given in the command line:

 I modified the -h -P -u and -n differently but nothing helps. Any idea?
 I know the release is allready deprecated, but at the moment it works and I 
 need some time
 to spend for an upgrade which I not have at the moment.
 
 Pierre you need to use a bscan utility compiled for postgresql, the one you 
 used is for sqlite ...
 sqlite.c:177 Database /var/lib/bacula/bacula.db does not exist, please 
 create it.
 
 That's why it was bugging :-)

Thank you for the information. I'll reinstall the packages I need for 
postgresql.
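
For anyone hitting the same thing: a quick way to see which backend a bscan binary is linked against (a sketch; the path may differ on your system):

ldd /usr/sbin/bscan | grep -Ei 'libpq|mysql|sqlite'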

Pierre


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] How to prune/delete job entries from bacula db

2010-01-09 Thread Pierre Bernhardt
Hello,

I want to delete all the failed jobs from the job list.

I've identified all them with the following sql

# 19
:List jobs which have no files in the db and possibly failed
SELECT DISTINCT
Job.JobId,Job.Name,ClientId,Job.StartTime,Job.Type,Job.Level,JobFiles,JobBytes,JobStatus
 FROM Job
 WHERE JobId NOT IN ( SELECT DISTINCT JobId FROM File )
 AND JobStatus IN ('A','R','E','f','C','e')
 ORDER BY JobId

I have too many of them.

Prune or purge did not help, because they appear to delete more of the jobs
than I want.

Any idea? Searching the list and the documentation didn't give me a solution.

At the moment only the following sql will help, I think:

delete from job
WHERE JobId not in ( SELECT DISTINCT JobId from File )
AND JobStatus in ('A','R','E','f','C','e')
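
If I go that way, related rows probably need the same treatment (a sketch, assuming the standard schema's JobMedia and Log tables):

delete from jobmedia where jobid not in ( select distinct jobid from job );
delete from log where jobid not in ( select distinct jobid from job );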

Thank you...
Pierre

PS: I could not find the database model in the documentation, but it is
possible that I searched the wrong one.


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Corrupt tape label

2007-03-19 Thread Pierre Bernhardt
Hello,

I have a problem with one tape whose label is corrupt, so
bacula cannot mount it.
I checked with btape too: readlabel could not find the
original label.

The tape is in use by my weekly backups.

Now my serious problem:
A job has been started.
Bacula has loaded the tape and wants to mount it. But it cannot,
because the label was not found.
Now bacula waits and waits and waits... so I must cancel the jobs
which would go onto this tape. I think auto-cancelling the job is
something I can configure at job level.

To re-read which tape is in which slot I have updated the media list
with an

umount
update slots scan. At slot 6, which contains the corrupt tape, I got a
message.

An update slots=6 scan again said:

Could not unserialize Volume label: ERR=label.c:775 Expecting Volume
Label, got FI=SOS_LABEL Stream=141 len=162.

Here is the problem:

The tape would still be loaded at the next backup, so I have now disabled
the volume in that slot.

1. Is there a way to set an Error status instead of disabling? (Not
   important, but it would show that there is a problem with this tape.)
   Tapes in Disabled status cannot be purged later.

2. Because of the corrupt label I think all backups stored on the tape
   and the following incrementals are no longer readable, so I want to
   purge/prune them. I did this without any problem after setting the
   status back to Full. Any other idea?

3. Here I possibly found a bug:
   I wanted to delete the volume, so I used update, which didn't help me:

*update
Update choice:
 1: Volume parameters
 2: Pool from resource
 3: Slots from autochanger
Choose catalog item to update (1-3): 3
The defined Storage resources are:
 1: File
 2: Libra-8
Select Storage resource (1-2): 2
Connecting to Storage daemon Libra-8 at spaceetch.privatnet:9103 ...
Connecting to Storage daemon Libra-8 at spaceetch.privatnet:9103 ...
3306 Issuing autochanger slots command.
Device Libra-8 has 8 slots.
Connecting to Storage daemon Libra-8 at spaceetch.privatnet:9103 ...
3306 Issuing autochanger list command.
No Volumes found to label, or no barcodes.

  So I tried different choices to see how I can delete the volume from
  the database. Updating the slots didn't help me.

  Any idea how I can delete the media from bacula?

4. The update command can move a volume to a slot number which doesn't
  exist in the tape library (12 for example). Not exactly foolproof :-)



MfG...
Pierre Bernhardt




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Password Protection on Restore?

2007-03-19 Thread Pierre Bernhardt
Michael Havas schrieb:
 Hi,
 
 I was thinking of using data encryption as discussed in the manual and
 have the ssl key require a passphrase. Here are a few questions I thought
 of:
 
 1. Is this supported by bacula? Is somebody else doing this?
 
 2. Will this even work?
 
 3. Is it possible to only use the master certificate to do the encryption?
In my opinion, yes. Use only a master cert on the FD for encryption.
This will prevent restores without the master key file.
But it also means you cannot restore directly on the client without having
the master key on the client, so the client FD can read the data.
 
 4. Will I be required to enter the passphrase upon backing up data as
 well? For automation reasons, this is not something I want.
For encryption you will never need a password. The cert is enough,
and the cert cannot be used for decryption.
For decryption:
I have never seen an interactive password prompt for decryption. You must
have the decryption key stored without a password. But my idea is
you could put it on a memory stick, for example.

This is my opinion. I have not tested it, but it should work.
For encryption I use a master cert and an FD cert for every client.
And on every client the FD key is stored, so I can restore directly
on the client.
The master key is only used by me, to recover a whole client, if the
client key is lost.
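
In FD configuration terms this corresponds to something like the following (a sketch; paths are placeholders, and the master entry contains only the certificate, never the master private key):

FileDaemon {
  Name = client-fd
  PKI Signatures = Yes
  PKI Encryption = Yes
  PKI Keypair = "/etc/bacula/client-fd.pem"     # client certificate + private key
  PKI Master Key = "/etc/bacula/master.cert"    # master certificate (public half only)
}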

MfG...
Pierre Bernhardt


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] winbacula 2.0.3 signature alert

2007-03-12 Thread Pierre Bernhardt
Pierre Bernhardt schrieb:
 Hello,
 
 the signature file don't corrospond to the file:
 
 $ /cygdrive/c/Programme/Internet/GnuPT/GPG/gpg.exe -d winbacula-2.0.3.exe.sig
 gpg: Signature made 03/01/07 10:03:11 using DSA key ID 10A792AD
 gpg: BAD signature from Bacula Distribution Verification Key 
 (www.bacula.org)
Hello,

can anyone check this please? Did I get an infected binary?


 $ /cygdrive/c/Programme/Internet/GnuPT/GPG/gpg.exe --print-md md5 
 winbacula-2.0
 .3.exe winbacula-2.0.3.exe.sig
 winbacula-2.0.3.exe: 0E D2 E6 6F 15 F5 9E 60  DC FB 0A 88 31 E2 71 7F
 winbacula-2.0.3.exe.sig: 57 71 4F 53 AE 02 6E BC  5F 5A 14 B8 42 29 6B 90
 
 Check for modification please.
 
 What's wrong?
 
Pierre Bernhardt


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] winbacula 2.0.3 signature alert

2007-03-11 Thread Pierre Bernhardt
Hello,

the signature file doesn't correspond to the file:

$ /cygdrive/c/Programme/Internet/GnuPT/GPG/gpg.exe -d winbacula-2.0.3.exe.sig
gpg: Signature made 03/01/07 10:03:11 using DSA key ID 10A792AD
gpg: BAD signature from Bacula Distribution Verification Key (www.bacula.org)

$ /cygdrive/c/Programme/Internet/GnuPT/GPG/gpg.exe --print-md md5 winbacula-2.0
.3.exe winbacula-2.0.3.exe.sig
winbacula-2.0.3.exe: 0E D2 E6 6F 15 F5 9E 60  DC FB 0A 88 31 E2 71 7F
winbacula-2.0.3.exe.sig: 57 71 4F 53 AE 02 6E BC  5F 5A 14 B8 42 29 6B 90

Check for modification please.

What's wrong?

Pierre Bernhardt


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Feature request: more flexible TLS cert validation

2007-03-11 Thread Pierre Bernhardt
Kern Sibbald schrieb:
 Hello,
Hi,
 
 Unless I am mistaken, even if there is a duplicate CN as you fear, it seems 
 to 
 me it should pose no problems because the certificate would not match.
 
 Does someone more experienced with TLS know the answer to that?
Hmm. I'm not an expert, but I've learned a lot about TLS/SSL by setting it
up on bacula 2 :-)

You must use a separate certificate for every IP/hostname.
But it's OK to use one key per machine with several related
certificates (note that one key can have many
certificates); I do this.

I have a full TLS and PKI solution in testing at the moment. I've created my
own root certificate so I can use trusted connections. The installed
certificates are related to:

1. Certificate for access by a user.
2. Certificate to authenticate the bacula service.
3. Decryption key for every user.
4. Decryption key for the bacula service.

5. Certificate for PKI master encryption.
6. Certificate for PKI FD-related encryption.

So I have one key for every real user (me at the moment, the server and
every (at the moment one) client),
or to put it more simply:

Easy:
A. Every service which opens a port has its own cert.
B. Every client machine which opens a connection has its own cert,
including the bacula server, too.

Why:
The director will connect to the storage daemon.
In this situation the director is the client (B.) and the storage daemon is
the service (A.)

or:
The bconsole (B.) will connect to the director (A.)

or:
The director (B.) will connect to a file daemon (A.)

or:
The storage daemon (B.) will connect to the director (A.)

any more...?

If all is on the same machine under the same user:

A. is a service cert from a key related to the interface.
B. is a user cert from a key related to the [EMAIL PROTECTED]

On my server I'm using only one key with two certs created from:

eg. cn = bserver.localnet for A.
cn = [EMAIL PROTECTED] for B.
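
Creating the two certs from the one key can be done with two CSRs against my own root CA, roughly like this (a sketch; the CNs, file names and the ca.crt/ca.key pair are placeholders):

openssl req -new -key host.key -subj "/CN=bserver.localnet" -out service.csr
openssl req -new -key host.key -subj "/CN=bacula@bserver.localnet" -out user.csr
openssl x509 -req -in service.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out service.crt
openssl x509 -req -in user.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out user.crt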

For a second fd client I use a different key but with two certs, too:

eg. cn = client.localnet for A.
cn = [EMAIL PROTECTED] for B.

For a bconsole I use an own key/cert:

eg. cn = [EMAIL PROTECTED]

Further information:

The CN for A. must be the same as the one configured in the Address
directives.

The CN for B. can be anything you want (including the one for A.).
But I had trouble before I used well-identified CNs.

Any questions?

MfG...
Pierre Bernhardt


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] winbacula 2.0.3 signature alert

2007-03-11 Thread Pierre Bernhardt
Kern Sibbald schrieb:
 On Sunday 11 March 2007 17:08, Pierre Bernhardt wrote:
 Hello,
Hi,

 the signature file don't corrospond to the file:

 $ /cygdrive/c/Programme/Internet/GnuPT/GPG/gpg.exe -d
 winbacula-2.0.3.exe.sig gpg: Signature made 03/01/07 10:03:11 using DSA key
 ID 10A792AD
 gpg: BAD signature from Bacula Distribution Verification Key
 (www.bacula.org)





 $ /cygdrive/c/Programme/Internet/GnuPT/GPG/gpg.exe --print-md md5
 winbacula-2.0 .3.exe winbacula-2.0.3.exe.sig
 winbacula-2.0.3.exe: 0E D2 E6 6F 15 F5 9E 60  DC FB 0A 88 31 E2 71 7F
 winbacula-2.0.3.exe.sig: 57 71 4F 53 AE 02 6E BC  5F 5A 14 B8 42 29 6B 90

 Check for modification please.

 What's wrong?
 
 We use public/private key cryptographic signatures rather than simple md5 
 hash 
 codes.  It is much more secure.
You're wrong. The md5 sums are only shown as a reference check on my side.
If you look above you can see that the public key 0x10A792AD does not match
the file and the sig.

MfG...
Pierre Bernhardt


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] bweb on sqlite3 ?

2007-02-26 Thread Pierre Bernhardt
Hello,

I don't want to install a complex PostgreSQL or MySQL server.
Is it in principle possible to use SQLite instead of MySQL or PostgreSQL?

--
MfG...
Pierre Bernhardt


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] schedule on different pools

2007-02-26 Thread Pierre Bernhardt
Darien Hager schrieb:
 On Feb 4, 2007, at 4:12 AM, Pierre Bernhardt wrote:
 
 I have different pools created:

 Dailyfor incremental backups
 Weekly   for differential backups
 Monthly  for full backups

 If a Daily job executes and a full backup must be saved the bu
 schould go automatically to the monthly pool. For Differential
 it should use the Weekly pool.

 So I've configured the following schedule directive in director
 configuration.

 Schedule {
   Name = Cycle
   Run = Level = Full 1st sun at 18:35
   Run = Level = Differential Full Pool = Monthly 2nd-5th sun at 18:35
   Run = Level = Incremental Full Pool = Monthly Differential Pool =  
 Weekly
 mon-sat at 18:35
 }

 But the problem is, it will not work as I guess.

 Where is the mistake?
First: it looks like it runs without any changes. Not in the way I
understood it, but I'm still checking.

 I can think of one issue which I encountered--if you do a  
 Differential or Incremental backup but no prior Full backup exists,  
 it becomes upgraded to Full. Because of this, you can possibly have a  
 Full backup done on any day, at least when you're starting out.
Yes, that's true. That's OK, and the full backup should then go to the
Monthly pool instead of the Daily/Weekly pool.

 Myself, I realized I didn't care so much about whether it was a daily/ 
 weekly/monthly backup, but actually I wanted to separate them by  
 their level, so my pools are full/diff/incr instead. If this is the  
 case for you, I would suggest you look at these three Job/JobDefs  
 directives:
 
 Full Backup Pool = pool-resource-name
 Differential Backup Pool = pool-resource-name
 Incremental Backup Pool = pool-resource-name
I will check them. Possibly it's easier to do it this way.

 These will override the pool specification depending on the level of  
 the backup. Using all three of them makes Pool =  redundant. When  
 using them, I could remove the pool specifications from my schedule.
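
For my pools that would look roughly like this in the Job/JobDefs (a sketch using the pool names from above):

JobDefs {
  Name = "DefaultJob"
  Pool = Daily
  Full Backup Pool = Monthly
  Differential Backup Pool = Weekly
  Incremental Backup Pool = Daily
}
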
Thx. for help.

cu...
Pierre Bernhardt


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] schedule on different pools

2007-02-26 Thread Pierre Bernhardt
Arno Lehmann schrieb:
 Hello,
 
 On 2/4/2007 1:12 PM, Pierre Bernhardt wrote:
 Hello,

 I have different pools created:

 Dailyfor incremental backups
 Weekly   for differential backups
 Monthly  for full backups

 If a Daily job executes and a full backup must be saved the bu
 schould go automatically to the monthly pool. For Differential
 it should use the Weekly pool.

 So I've configured the following schedule directive in director
 configuration.

 Schedule {
   Name = Cycle
   Run = Level = Full 1st sun at 18:35
   Run = Level = Differential Full Pool = Monthly 2nd-5th sun at 18:35
 
   Run = Level = Incremental Full Pool = Monthly Differential Pool = Weekly
 mon-sat at 18:35
 }

 But the problem is, it will not work as I guess.

 Where is the mistake?
 
 What do you expect the Full I marked above to do?
This means it should do a differential backup, but if no full backup
exists yet, the resulting full backup should go to the media pool Monthly
instead of the media pool Weekly.

 Apart from that, you've done the same I did for quite some time, and 
 which worked flawless.
 
 Today, I prefer to set the pools for the backup levels in the job 
 definition, and link the pools to the storage device in the pool setup.
 
 That is not possible with 1.36, by the way.
I'm using bacula 2.0.0 from the debian packages on sourceforge.


cu...
Pierre Bernhardt


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bug? It is possible to purge running jobs

2007-02-26 Thread Pierre Bernhardt
Kern Sibbald schrieb:
 On Sunday 04 February 2007 13:05, Pierre Bernhardt wrote:
 Hello,

 I've seen some minutes ago that purging running jobs
 is possible. The job will finish with error message:

 starflake_test.2007-02-04_09.43.30 Warning: Error getting job record for
 stats: sql_get.c:293 No Job found for JobId 7

 Possible a bug or a feature ;-)
 
 User error.
In my opinion it is more a user interface problem, because user
errors should not bring software into an unexpected state.

But this is a problem most software has :-(

MfG...
Pierre Bernhardt


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] No compression

2007-02-26 Thread Pierre Bernhardt
...
Pierre Bernhardt


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] btape test failure

2007-02-26 Thread Pierre Bernhardt
Pierre Bernhardt wrote:
Hi, after some time:
The problem came from a USB-to-SCSI adapter interface, which has
timing problems under VMware.
I have changed to an Adaptec 2940 and now the problem is gone.

Regards...
Pierre Bernhardt

 Hi,
 
 I am out of ideas and my search was not successful.
 
 The btape test command puts my tape drives and/or the st driver
 into a hanging state.
 
 Here is the config I used for the latest test:
 
 Device {
   Name = Loader_8x12
   Media Type = DDS-3
   Archive Device = /dev/nst0
   AutomaticMount = yes;   # when device opened, read it
   AlwaysOpen = yes;
   RemovableMedia = yes;
   RandomAccess = no;
   Changer Command = /etc/bacula/scripts/mtx-changer %c %o %S %a %d
   Changer Device = /dev/sg1
   AutoChanger = yes
   Alert Command = sh -c 'tapeinfo -f %c |grep TapeAlert|cat'
   Hardware End of File = no
   TWOEOF = yes
   Fast Forward Space File = no
 }
 
 I first tested without the Hardware End of File, TWOEOF and
 Fast Forward Space File settings, with nearly the same results.
 
 My system is Debian Sarge (kernel 2.6.8-2-386) and btape comes from
 the Sarge DVD, release 1.36.2.
 
 Here is the test with debug output:
 
 debian:~# btape -c /etc/bacula/bacula-sd.conf -d 99 -v -p /dev/nst0
 Tape block granularity is 1024 bytes.
 btape: stored_conf.c:453 Inserting director res: debian-mon
 btape: stored_conf.c:453 Inserting device res: Loader_8x12
 btape: butil.c:258 Using device: /dev/nst0 for writing.
 btape: dev.c:215 init_dev: tape=1 dev_name=/dev/nst0
 btape: dev.c:255 open_dev: tape=1 dev_name=/dev/nst0 vol=
 btape: dev.c:260 open_dev: device is tape
 btape: dev.c:310 open_dev: tape 3 opened
 btape: butil.c:170 Device opened for read.
 btape: btape.c:335 open_dev /dev/nst0 OK
 *test
 
 === Write, rewind, and re-read test ===
 
 I'm going to write 1000 records and an EOF
 then write 1000 records and an EOF, then rewind,
 and re-read the data to verify that it is correct.
 
 This is an *essential* feature ...
 
 btape: dev.c:374 rewind_dev /dev/nst0
 btape: btape.c:786 Wrote 1000 blocks of 64412 bytes.
 btape: dev.c:1200 weof_dev
 btape: btape.c:465 Wrote 1 EOF to /dev/nst0
 btape: btape.c:802 Wrote 1000 blocks of 64412 bytes.
 btape: dev.c:1200 weof_dev
 btape: btape.c:465 Wrote 1 EOF to /dev/nst0
 btape: dev.c:1200 weof_dev
 btape: btape.c:465 Wrote 1 EOF to /dev/nst0
 btape: dev.c:374 rewind_dev /dev/nst0
 btape: btape.c:811 Rewind OK.
 1000 blocks re-read correctly.
 12-Oct 13:44 btape: btape Error: block.c:782 Read error at file:blk 0:1000 on
 device /dev/nst0. ERR=Input/output error.
 btape: btape.c:823 Read block 1001 failed! ERR=Input/output error
 *status
  Bacula status: file=0 block=1000
 btape: btape.c:1707 Device status: 0. ERR=dev.c:639 ioctl MTIOCGET error on
 /dev/nst0. ERR=Input/output error.
 
 *quit
 btape: dev.c:1342 really close_dev /dev/nst0
 btape: dev.c:1477 term_dev
 Pool   Maxsize  Maxused  Inuse
 NoPool  2567  0
 NAME1300  0
 FNAME   2567  0
 MSG   645123  0
 EMSG   10242  0
 
 debian:~# mt status
 /dev/tape: Input/output error
 
 It looks like the EOF marker could not be found.
 A second drive (DDS-2), which I checked before, has
 exactly the same problem.
 
 The tape drive does not really work any further, but after a
 remove/add SCSI operation on /proc/scsi/scsi it comes back
 and is functional again.
 
 Normal stores and restores with dump show no problems.
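
For reference, the remove/add operation mentioned above, as a rough
sketch; the "0 0 5 0" host/channel/id/lun address is a placeholder,
take the real one from cat /proc/scsi/scsi:

#!/bin/sh
# Rough sketch: make the kernel forget and re-probe a hung tape drive
# via the old /proc/scsi/scsi interface (2.4/2.6-era kernels).
# Replace "0 0 5 0" with the drive's real host/channel/id/lun address.
echo "scsi remove-single-device 0 0 5 0" > /proc/scsi/scsi
echo "scsi add-single-device 0 0 5 0" > /proc/scsi/scsi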





Re: [Bacula-users] Bacula server under VMware

2007-02-26 Thread Pierre Bernhardt
Pierre Bernhardt wrote:
 Hello,
 
 is anybody here which has an running bacula in vmware
 virtual machine running on linux installation with
 an tape drive?
 
 I have an problem with btape test command.
 
 Thank you for answers.
Hi,

With a PCI Adaptec 2940 card and a proper configuration of
the virtual machine I no longer have any hardware problems.

At the moment I am testing Bacula 2.0.0 on Debian Etch with a
Libra-8 tape library (8 x DDS-3 with a Sony SDT-9000 drive).

Regards...
Pierre Bernhardt




[Bacula-users] bweb on sqlite3 ?

2007-02-26 Thread Pierre Bernhardt
Hello,

I don't want to install a complex PostgreSQL or MySQL server.
Is it in principle possible to use SQLite instead of MySQL or PostgreSQL?

--
Regards...
Pierre Bernhardt

PS: This is the third time I have sent this message. Hopefully it
ends up in the newsgroup now.




[Bacula-users] rerun full job in daily if latest full job was not started

2007-02-25 Thread Pierre Bernhardt
Hello,

first, many thanks to all who helped me earlier with my newbie
questions :-)

First I will briefly explain what I do:

I have monthly full backups,
weekly differential backups relative to the last full one,
and daily incremental backups based on the latest differential.

If a higher-level job (diff/full) has failed, the missing diff/full is
re-run at the next job start (the next day). That's very nice and it works.

In the job definition I'm using
Rerun Failed Levels = yes

But if the system was down at that time, this does not work.
Is there a way to get the same behaviour for a missing last full/diff, too?

I mean: if the monthly full backup hasn't run because the Director
was down, the full backup should run at the next (diff or
incremental) scheduled time!?

Any idea how to implement such a directive?
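
For illustration, a sketch of a job definition combining Rerun Failed
Levels with Max Full Interval, a directive from Bacula releases newer
than the 2.0.0 discussed here (so an assumption in this context) that
upgrades a job to Full when the most recent Full backup is older than
the given interval; resource names are placeholders:

Job {
  Name = "starflake-backup"        # placeholder name
  Type = Backup
  Client = starflake-fd            # placeholder client
  FileSet = "Full Set"
  Schedule = "Cycle"
  Storage = Loader_8x12            # placeholder storage
  Messages = Standard
  Pool = Daily
  Rerun Failed Levels = yes        # re-run a failed Full/Diff at the next run
  Max Full Interval = 32 days      # force an upgrade to Full if the last Full is older
}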

Regards...
Pierre Bernhardt




Re: [Bacula-users] Cleaning Jobs

2007-02-24 Thread Pierre Bernhardt
Arno Lehmann wrote:
 Here is my feature request for a cleaning enhancement:
 
 If you really want that to be a feature request you should rewrite it 
 into the format suggested by Kern. You can find that on the project web 
 site, I believe.
 
 1. It should be possible to declare one or more cleaning slots
in a library.
 
 That would be done in mtx-changer, possibly configurable. This would be 
 possible rather easily, I think.
Hmm... I've read and understood that I can change the mtx-changer script
to simulate barcodes. But those barcodes would then also be used by
Bacula for label naming for all tapes I insert in the future.
If I have more than 8 cartridges (I have nearly 50, though not all are
used for Bacula at the moment...) this could be a problem. So what I
need here is that 7 slots have no barcode label and the 8th gets a
barcode named CLN...

Could that be a way to do it?


 2. A maximum number of cleaning procedures for a cleaning tape should
be declared.
 3. The cleaning procedure should only be arranged by Bacula (which
counts the cleaning procedures).
 
 This is encompassed by the existing feature request, IMO.
 
 4. The cleaning procedure should be declared by a job.
 
 No. You really don't want to clean a drive unless the drive itself asks 
 you to.
Oh, you didn't quite understand my point. I don't really need a
schedule. But handling cleaning in Bacula (not only by Bacula), by
running a job via the run command, would be a nice feature. That would
give us a good way to count the cleaning jobs. To repeat: no scheduling
of cleaning; at most a scheduled check that cleans only if necessary.

 5. The job should be run by a request (tape alert, run command,...)
I mean there may be a way for the drive itself to request cleaning;
possibly a command could make use of that... but that is an idea for the future.

 6. The cleaning job should abort once the maximum number of cleaning
runs has been reached (with a message or similar)
 
 I don't think that we need extra job types for drive cleaning. I rather 
 think that an external script would be best - think of mtx-changer for 
 tape movement - or, as a first step, a simple operator intervention request.
Every cleaning cartridge should only be used a maximum number of times;
that is the idea behind this point. The abort should then simply notify
me that I need to insert a new cartridge.

 7. update slots scan should never scan the defined cleaning slots.
 Item 1.
My answer :-)


 8. cleaning slots should never requests for reading/writing.
 
 Item 1.
 
 What happens if I do the cleaning manually
 (cleaning cartridge in slot 8):
 1. mtx load 8
 The changer loads the cartridge into the drive.
 The drive takes in the cartridge and the cleaning process starts.
 The drive ejects the cartridge, and from there it differs:
 Libra-8: mtx unload 8 is needed because the drive's eject does not
unload the cartridge back into its slot.
 HP 12000: the drive ejects the cartridge and the changer unloads it
automatically back into its slot (6).
(I'm not really sure; I don't use that library any more.)
 
 First, drive cleaning is not a daily operation, so the actual loading of 
 the cleaning tape could be done manually, as a first step.
Yes I know.
 
 The more important step towards integrated drive cleaning is to 
 interface that into the normal tape operation. We will need job pausing 
 for this. Just imagine that you have a really huge job running, and 
 after the first few blocks written to tape the drive wants to be 
 cleaned. Today, you'd have to wait until that tape is no longer busy, 
 i.e. until all jobs to that tape are finished, or until a tape change is 
 required. Being able to pause the job, unmount the tape, clean the 
 drive, mount the tape again and continue the job would be a major 
 improvement.
Cleaning handling in Bacula would have the nice effect that I could run
a cleaning job manually. I could put it in the queue so that the running
job finishes, then the cleaning job starts, and then the next job runs.

Again I agree with you: no cleaning procedure if cleaning is not needed.
But if cleaning is needed, then doing it in Bacula, with a job and a
dedicated/configured slot for it, would be very good.

Regards...
Pierre Bernhardt

Sorry for my bad English :-)




Re: [Bacula-users] 'PURGE' command

2007-02-24 Thread Pierre Bernhardt
Hi,

 The delete command can have... unwanted side-effects, too.
Yes,

I've found that the commands in bconsole are a little more
lenient than they should be.

For example, I can enter the following line:

*list volumes pool=Scratch

All OK: the volumes in pool Scratch are shown.

*list volumes pool=Scratch lala blabla

The command acts as if all is OK and lists the same as before.

So the following command gives exactly the same output:

*list volumes pool=Scratch volid=DDS3_34

In effect, the command did not do what I meant.

All other commands work the same way, purge too.

This means that if I am a careless user and type a command wrong
because I'm too fast, the wrong parts are mostly ignored silently.

So the commands do more than they should. Proper parse-error
handling of the given arguments is missing from most commands.

With purge/prune and so on I ran into the same problem
a while ago.

Wrong commands should not be ignored silently; they should abort
with an error message. I think this should be added in future
releases.


Regards...
Pierre Bernhardt




Re: [Bacula-users] Cleaning Jobs

2007-02-21 Thread Pierre Bernhardt
HAWKER, Dan (external) wrote:
 
 Hi All,
 
 Am presently in the midst of configuring Bacula to backup some
 aggregated data from a few servers to tape. All works fine, however am
 slightly confused wrt cleaning tapes.
 
 According to the manual you can assign a cleaning prefix, however as I
 understand this is only useful for autochangers with barcodes.
 
 So how do you create a cleaning job??? For instance I'd like to assign
 slot6 as a cleaning slot, and then create a *CleaningJob* that loads and
 accesses the cleaning tape, once a week. Say on Sunday evening when
 nothing should be happening.
 
 Is this supported in Bacula, or is it just easier/better to have a quick
 and dirty script run from cron, that uses mtx to load and then unload
 the tape???
This is a question I asked a few days ago, too.
I'm interested in the answer as well.

Regards...
Pierre Bernhardt




Re: [Bacula-users] Cleaning Jobs

2007-02-21 Thread Pierre Bernhardt
Arno Lehmann wrote:
 Hi,
 
 On 2/21/2007 11:52 AM, HAWKER, Dan (external) wrote:
 Hi All,

 Am presently in the midst of configuring Bacula to backup some
 aggregated data from a few servers to tape. All works fine, however am
 slightly confused wrt cleaning tapes.

 According to the manual you can assign a cleaning prefix, however as I
 understand this is only useful for autochangers with barcodes.
 
 Right.
 
 So how do you create a cleaning job???
 
 I don't.
 
 For instance I'd like to assign
 slot6 as a cleaning slot, and then create a *CleaningJob* that loads and
 accesses the cleaning tape, once a week. Say on Sunday evening when
 nothing should be happening.
 
 No. I would not do this.
Ok.

 Modern tape drives can decide themselves when they need to get cleaned. 
 Cleaning more often than strictly necessary will not only use up the 
 cleaning cartridge but might also damage your tape drive and invalidate 
 its warranty.
 
 Autochangers can usually manage cleaning on their own - you dedicate one 
 slot to the cleaning tape, and everything else happens automatically. I 
 don't know if this interferes with Bacula's operation, though - I don't 
 operate an autochanger like that.
Usually only if it is an expensive tape library. I have bought a
Libra-8 8-cartridge DDS-3 library and a 6-cartridge DDS-2 HP library.
Neither library has a barcode reader.
It is possible that both can use a cleaning cartridge.

But running the update slots scan command after changing the other
cartridges is not a good idea, because the cleaning cartridge gets
loaded too. The tape is then loaded, ejected by the drive, but not
unloaded, and Bacula does not let go of the cartridge; it keeps
waiting for the data on it (which it can never read).

 Is this supported in Bacula, or is it just easier/better to have a quick
 and dirty script run from cron, that uses mtx to load and then unload
 the tape???
If I use mtx directly, Bacula does not see what I did with the other
cartridges, so I get problems because Bacula has different information stored.

 There is a feature request open regarding better tape cleaning 
 management... until that is implemented, I do that manually.
First of all I need an option so that Bacula will never use a
prohibited slot. At the moment I only know that I can use the
Cleaning Prefix and then add the cartridge manually with the add
command. But I have not tested this, because the update slots scan
feature will check that slot again.

Modifying the mtx-changer script is the only workaround, but then
I would have to modify it in a very dirty way.
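
One less dirty possibility, sketched here under two assumptions: that
slot 8 holds the cleaning tape, and that mtx-changer's list output is
one slot:barcode pair per line. A small wrapper hides the cleaning slot
from Bacula's list command while passing everything else through:

#!/bin/sh
# Sketch: wrapper to be configured as Bacula's Changer Command instead
# of the stock script (path below is an assumption). It filters the
# cleaning slot out of "list" output so "update slots scan" never
# touches it. All other commands are passed through unchanged.
REAL=/etc/bacula/scripts/mtx-changer
CLEANSLOT=8

case "$2" in
  list)
    "$REAL" "$@" | grep -v "^${CLEANSLOT}:"
    ;;
  *)
    exec "$REAL" "$@"
    ;;
esac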

 Whenever there is a tapealert (which you can get into Baculas job 
 report) or a drive failure, it's time for manual intervention. Not 
 really a serious problem, because in a clean environment you won't have 
 to clean your drives very often. And you wouldn't want to operate a 
 modern tape drive in a dusty environment :-)
OK, here are some points for a cleaning feature in a future Bacula.

Here is my feature request for a cleaning enhancement:

1. It should be possible to declare one or more cleaning slots
   in a library.
2. A maximum number of cleaning procedures for a cleaning tape should
   be declared.
3. The cleaning procedure should only be arranged by Bacula (which
   counts the cleaning procedures).
4. The cleaning procedure should be declared by a job.
5. The job should be run on a request (tape alert, run command, ...).
6. The cleaning job should abort once the maximum number of cleaning
   runs has been reached (with a message or similar).
7. update slots scan should never scan the defined cleaning slots.
8. Cleaning slots should never be requested for reading/writing.

What happens if I do the cleaning manually
(cleaning cartridge in slot 8):
1. mtx load 8
The changer loads the cartridge into the drive.
The drive takes in the cartridge and the cleaning process starts.
The drive ejects the cartridge, and from there it differs:
Libra-8: mtx unload 8 is needed because the drive's eject does not
   unload the cartridge back into its slot.
HP 12000: the drive ejects the cartridge and the changer unloads it
   automatically back into its slot (6).
   (I'm not really sure; I don't use that library any more.)
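
Scripted, the Libra-8 variant of this procedure might look like the
following sketch; /dev/sg1, slot 8, drive 0 and the five-minute wait
are all assumptions:

#!/bin/sh
# Sketch of the manual cleaning cycle described above, for a loader
# where the drive ejects but does not put the cartridge back itself.
CHANGER=/dev/sg1   # assumed changer control device
SLOT=8             # assumed cleaning slot
DRIVE=0

mtx -f $CHANGER load $SLOT $DRIVE     # move the cleaning tape into the drive
sleep 300                             # assumed worst-case cleaning time
mtx -f $CHANGER unload $SLOT $DRIVE   # put the cartridge back into its slot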

Discussion is welcome :-)

Regards...
Pierre Bernhardt




[Bacula-users] patches for deb packages was: No compression

2007-02-21 Thread Pierre Bernhardt
Michel Meyers wrote:
 On Sat, 17 Feb 2007 21:03:04 +0100, Pierre Bernhardt [EMAIL PROTECTED] 
 wrote:
 Hi,

 here is the relevant Director configuration:
 # List of files to be backed up
 FileSet {
   Name = Full Set
   Include {
     Options {
       signature = SHA1
       compression = GZIP
 [...]
 10-Feb 14:38 spaceetch-dir: Bacula 2.0.0 (04Jan07): 10-Feb-2007 14:38:16
 [...]
 Any idea why software compression is turned off? With my Catalog backup I
 don't have a problem (both use the same bacula-fd):
 
 See the release notes for Bacula 2.0.1:
 The Options scanner in the FD was corrected to correctly handle the SHA1 
 option, which was eating the next option.
 
 You should upgrade Bacula to at least 2.0.1.
Hmm,

I installed the .deb packages on Etch. Are there patches or newer
debs that I can use?
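
Until the upgrade, a possible stop-gap, assuming the 2.0.0 bug really
does just consume whatever option follows signature = SHA1: list the
compression option first so that nothing important gets eaten. A sketch:

FileSet {
  Name = "Full Set"
  Include {
    Options {
      compression = GZIP   # placed before signature as a workaround sketch
      signature = SHA1
    }
    File = /home           # placeholder path
  }
}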

Regards
Pierre Bernhardt



