Re: [Bacula-users] Bacula Release 15.0.2

2024-03-28 Thread Uwe Schuerkamp
Hello Eric & team,

thanks for the update & your continuing support of the bacula
community edition. I just upgraded my first bacula install from 13.0.2
to 15.0.2 without any issues whatsoever (Ubuntu 22.04, bacula compiled
from source).

All the best,

Uwe

-- 
Uwe Schürkamp // email: 




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Ubuntu 22.04: bacula-fd from official repo cannot talk to src-compiled server on 20.04

2023-11-15 Thread Uwe Schuerkamp
Hi again,

On Tue, Nov 14, 2023 at 03:29:26PM +, Martin Simmons wrote:


> The --with-openssl=..path-to-openssl-install.. option works for me (Debian
> 11), where ..path-to-openssl-install.. is the path containing files like:
> 
> include/openssl/ssl.h
> lib/libssl.so
> 
> Also, LD_LIBRARY_PATH needs to be set at runtime.
> 

That's weird; this definitely did not work on my end when using the
--with-openssl flag. configure would pick up the openssl binary in
/usr/bin, so I had to modify LD_LIBRARY_PATH and PATH and set these flags

LDFLAGS=-L/server/openssl-3.0.12/lib64
CFLAGS=-I/server/openssl-3.0.12/include/

in order to successfully compile bacula using the new ssl libraries.
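For reference, the full environment setup might look like the sketch below. The /server/openssl-3.0.12 paths are the ones used above; the actual configure/make step is shown as a comment rather than run:

```shell
# Sketch: build environment for compiling bacula against a source-built
# OpenSSL installed under /server/openssl-3.0.12 (path from this thread).
OSSL=/server/openssl-3.0.12
export LDFLAGS="-L$OSSL/lib64"
export CFLAGS="-I$OSSL/include"
export LD_LIBRARY_PATH="$OSSL/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LDFLAGS $CFLAGS"
# ./configure --with-openssl="$OSSL" && make   # actual build step, not run here
```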

All the best,

Uwe



-- 
Uwe Schürkamp // email: 







Re: [Bacula-users] Ubuntu 22.04: bacula-fd from official repo cannot talk to src-compiled server on 20.04

2023-11-14 Thread Uwe Schuerkamp
On Thu, Nov 09, 2023 at 09:46:25AM -0500, Phil Stracchino wrote:

> 
> One presumes you are aware of how old your Director's OpenSSL is?
> 

Hello Phil,

one presumes correctly. :-)

However, I was under the impression that a modern bacula (13.0.3) would
either be able to talk to clients on newer Ubuntu releases when using
the bacula-fd from bacula's own repos for 22.04, or that "configure" on
Ubuntu 20.04 (still in support) would complain loudly about the age of
the system-provided ssl library.

Anyway, I compiled openssl-3.0.12 from source on the 20.04 director /
stored and pointed bacula's configure script at it using the
"--with-openssl" directive, in order to recompile bacula against the new
libraries / binaries in /server/openssl-3.0.12 (the chosen installation
location for the modern openssl).

Sadly, bacula's "configure" failed to pick up the correct location and
insisted on using the system-provided binaries and libraries from the
old, system-provided ssl installation, so I guess something is not
right w/r to bacula's configure option for openssl.

I'd be grateful for any hints on how to fix this. I'm aware I'll need
to update the bacula server to Ubuntu 22.04 eventually, but for now
that seems a bit of an overkill just because it fails to talk to a
single client among the 50 others that are still ticking along nicely.


All the best,

Uwe

-- 
Uwe Schürkamp // email: 






[Bacula-users] Ubuntu 22.04: bacula-fd from official repo cannot talk to src-compiled server on 20.04

2023-11-09 Thread Uwe Schuerkamp
Hi folks,

we recently upgraded one of our clients from Ubuntu 20.04 to 22.04 and
re-installed bacula-fd from the official bacula.org repos.

Now it seems our bacula-server (13.0.x) compiled from source on 20.04
cannot initiate a TLS connection to the upgraded client:

09-Nov 10:04 -dir JobId 0: Error: openssl.c:81 Connect failure: 
ERR=error:14094417:SSL routines:ssl3_read_bytes:sslv3 alert illegal parameter

09-Nov 10:04 -dir JobId 0: Fatal error: TLS negotiation failed with FD at 
"new.client.upgraded.to.220.04:9102"



Are there any client-side settings we could use so these components
will talk to each other again? :-)

director (compiled from source):

libssl.so.1.1 => /usr/lib/x86_64-linux-gnu/libssl.so.1.1 (0x7f3d8ac51000)
libgnutls.so.30 => /usr/lib/x86_64-linux-gnu/libgnutls.so.30 
(0x7f087cbb5000)


client (from bacula repos):
libssl.so.3 => /lib/x86_64-linux-gnu/libssl.so.3 (0x7f4d8499b000)

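To narrow down a mismatch like this, one quick check is which OpenSSL generation each side actually uses; a direct handshake test against the FD port can also help (the host name in the comment is hypothetical):

```shell
# Print the locally installed OpenSSL version string; the director above
# links libssl 1.1 while the repo-built FD links libssl 3, which is the
# likely source of the TLS negotiation failure.
openssl version
# Hypothetical direct handshake test against the client's FD port:
#   openssl s_client -connect client.example.org:9102 -tls1_2 </dev/null
```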


Thanks in advance,

Uwe

-- 
Uwe Schürkamp // email: 







Re: [Bacula-users] Tricky restore issue (bacula 11)

2022-11-02 Thread Uwe Schuerkamp
Hi folks,

thanks for your comments. We managed to clone the damaged VM, booted
it using grml and were able to undo the damage done by the mistaken
"find" command. We're now looking for a downtime window to fix the
production system, hopefully in the next few days.

All the best,

Uwe







[Bacula-users] Tricky restore issue (bacula 11)

2022-10-31 Thread Uwe Schuerkamp
Hi folks,

due to user error, part of a file system on one of our backup clients
was compressed by a find command gone haywire. Among the affected files
are /usr/bin/bash & others (kernel, initrd, grub, root's authorized_keys
etc.), leading to a situation where we cannot log into the system
remotely or via the vmware console function anymore. 

I've tried to replace /usr/bin/bash with its uncompressed version from
backup, as the bacula-fd on the client is still running (today's incr.
backup worked fine, too), but once the restore starts I'm getting the
following error:

31-Oct 12:23 deniol2378-sd JobId 23316: Forward spacing Volume "deX-0183" 
to addr=269
31-Oct 12:23 deniol2378-sd JobId 23316: Elapsed time=00:00:01, Transfer 
rate=484.9 K Bytes/second
31-Oct 12:22 bacula-fd JobId 23316: Error: create_file.c:227 Could not create 
/usr/bin/bash: ERR=Permission denied
31-Oct 12:23 deniol2378-dir JobId 23316: Bacula deYY-dir 13.0.0 (04Jul22):

The bacula-fd runs as root on the client, so it should have no issue
creating the file in the local filesystem... any idea what could be
causing this? It's also odd that I'm seeing a "Restore OK" message
at the end of the job output:

  Replace:Always
  Start time: 31-Oct-2022 12:23:02
  End time:   31-Oct-2022 12:23:02
  Elapsed time:   1 sec
  Files Expected: 1
  Files Restored: 1
  Bytes Restored: 0 (0 B)
  Rate:   0.0 KB/s
  FD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Restore OK


I'm clutching at straws here, naturally; however, before we rebuild the
VM in question from a full backup I'd like to try as many options as
possible.

What makes this case even worse is that LVM is being used for / &
other filesystems, so we cannot simply mount those drives on another VM
(3 physical volumes) to repair the damage, right?


Thanks for reading this far & for your help,

Uwe







Re: [Bacula-users] Migrating from mariadb to postgresql

2022-09-08 Thread Uwe Schuerkamp
Hello Martin,

thanks, this type of cast has helped with the volume names in the
postgres catalog!


On Wed, Sep 07, 2022 at 04:32:58PM +0100, Martin Simmons wrote:

> This might work but I've not tested it:
> 
> cast type tinyblob to text using varbinary-to-string
> 

I've extended the CAST to include "blob" types as well, now it looks
like bacula finds previous jobs, volumes etc. in the converted
database using this pgloader ("import_bacula.lisp") file:

###
LOAD DATABASE
FROM mysql://bacula:@localhost/bacula
INTO postgresql://bacula:X@localhost/bacula

CAST type tinyblob to text using varbinary-to-string,
type blob to text using varbinary-to-string,
type binary to char  using varbinary-to-string


BEFORE LOAD DO
$$ set schema 'bacula' ; $$ ;
###

and then running the following sequence of commands (I'm using local
copies of the bacula scripts in postgres' HOME for convenience):

echo "drop database bacula; drop role bacula; "  | psql
~/create_postgresql_database
~/make_postgresql_tables
~/grant_postgresql_privileges

pgloader import_bacula.lisp

I think switching the schema to "bacula" (which is created by the
scripts) is also critical for bacula to be able to find its stuff in
the postgres catalog.
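For completeness, the same schema pinning can also be made permanent on the database itself. A sketch, equivalent in effect to the BEFORE LOAD step in the pgloader file:

```sql
-- Make the "bacula" schema the default search path for this database,
-- so the director finds its tables without an explicit schema prefix:
ALTER DATABASE bacula SET search_path TO bacula, public;
```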

Some issues remain though:


o auto-purging / recycling doesn't work. I see a message about a disk
volume being purged, but the status in "media" remains "Used" until I
purge the volume in question manually.

o query.sql probably needs to be replaced with a postgres-compatible
version.


Thanks again for your help folks, it's much appreciated. I think we're
one step closer to creating a working pgloader config file to help
folks who want to migrate from mariadb to postgresql :-) 

All the best,

Uwe



-- 
Uwe Schürkamp // email: 






Re: [Bacula-users] Migrating from mariadb to postgresql

2022-09-07 Thread Uwe Schuerkamp
On Mon, Sep 05, 2022 at 11:57:43AM -0500, dmitri maziuk wrote:

> Right, I saw the starting '\x' and looked no further. OP will have to figure
> out what encoding that POS is actually dumping into -- keeping in mind that
> it may be the OS messing it up when saving to text file -- and then figure
> out the right spell to un$#@! it.
> 

Hello Dima et al., 

I don't think pgloader uses any files as intermediate storage; it
connects to both mariadb and postgres and transfers the data directly
between them (but that's just a guess).

I also tried the other method recommended in this thread, using only
the INSERT statements that mysqldump creates (minus the JobStatus
table which is created & filled by bacula's script) on a postgres
database created by the bacula-provided scripts.

I've gotten as far as successfully importing the resulting mysql dump
file into postgres, the volumenames look ok (no hex strings this time)
but as soon as I start a backup job, bacula complains about not being
able to find an appendable volume.

When I try to label a new volume (again, for testing purposes) I get a
"duplicate key" error from postgres as bacula tries to insert the data
into the media table:

$ echo "drop database bacula;"  | psql
$ ./create_postgresql_database
$ ./make_postgresql_tables
$ ./grant_postgresql_privileges

... snip snip 

GRANT
GRANT
GRANT
GRANT
Privileges for user bacula granted on database bacula.

Then I create the dump, mangling it in a way so that postgres imports
it without errors (I don't care about the mangled \' filenames for now
that are in the File table):

$ (mysqldump -u bacula -p -nc --compatible=postgresql 
--default-character-set=latin1 bacula  | grep INSERT | grep -v JobStatus | sed 
's/"//ig'  | sed "s/\\\'//ig") | psql bacula 

... snip snip 

INSERT 0 14026
INSERT 0 14000
INSERT 0 13746
INSERT 0 13778
INSERT 0 13073
INSERT 0 13817
INSERT 0 6915
INSERT 0 6
INSERT 0 3
INSERT 0 1

$


I can then start up bacula-postgres with the resulting catalog just fine: 

# /server/bacula/etc/bacula stop ; /server/bacula-13.0.1_postgres/etc/bacula 
start 

a "stat dir" looks good w/r to volume names: 

# echo stat dir | bconsole
1000 OK: 10002 zif-dir Version: 13.0.1 (05 August 2022)
Enter a period to cancel a command.
stat dir

Scheduled Jobs:
Level  Type Pri  Scheduled  Job Name   Volume
===
Incremental    Backup    10  08-Sep-22 10:45    zif    zif-incr-0024



Starting a job, I receive the "no appendable volumes found" message;
then, trying to label a new volume (for testing purposes), I get this
message:

*label volume="testvolume" pool=Online_zif_incr storage=FileStorage_zif_incr
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Sending label command for Volume "testvolume" Slot 0 ...
3000 OK label. VolBytes=231 VolABytes=0 VolType=1 Volume="testvolume" 
Device="FileStorage_zif_incr" (/backup/online_backup_zif/)
sql_create.c:483 Create DB Media record INSERT INTO Media 
(VolumeName,MediaType,MediaTypeId,PoolId,MaxVolBytes,VolCapacityBytes,Recycle,VolRetention,VolUseDuration,MaxVolJobs,MaxVolFiles,VolStatus,Slot,VolBytes,InChanger,VolReadTime,VolWriteTime,VolType,VolParts,VolCloudParts,LastPartBytes,EndFile,EndBlock,LabelType,StorageId,DeviceId,LocationId,ScratchPoolId,RecyclePoolId,Enabled,ActionOnPurge,CacheRetention)VALUES
 
('testvolume','File_zif',0,5,0,0,1,3456000,0,1,0,'Append',0,231,0,0,0,1,0,0,'0',0,0,0,2,0,0,0,0,1,1,0)
 failed. ERR=ERROR:  duplicate key value violates unique constraint "media_pkey"
DETAIL:  Key (mediaid)=(1) already exists.
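A plausible cause, just a guess from the error above: a plain INSERT dump carries explicit MediaId values, so the sequences behind the serial primary keys are never advanced past the imported rows. Re-syncing them after the import might help; the sequence names below assume the defaults produced by the serial columns in Bacula's make_postgresql_tables:

```sql
-- Re-sync the Media primary-key sequence with the imported rows:
SELECT setval('media_mediaid_seq',
              (SELECT COALESCE(MAX(mediaid), 1) FROM media));
-- Repeat for the other tables, e.g. job_jobid_seq, pool_poolid_seq, ...
```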

Sorry for the longish post & lines, I just wanted to ensure I include
all the info.


All the best, 

Uwe 












Re: [Bacula-users] Migrating from mariadb to postgresql

2022-09-07 Thread Uwe Schuerkamp
On Mon, Sep 05, 2022 at 05:14:28PM +0100, Martin Simmons wrote:
> >>>>> On Mon, 5 Sep 2022 11:21:52 +0200, Uwe Schuerkamp said:
> > 
> > I've tried casting "blob" and "tinyblob" (the mariadb column types for
> > VolumeName, for example) to "text", but pgloader just hangs when
> > including those cast statements.
> 
> What exact cast statement did you use?
> 

Hello Martin,

I think I used

cast type tinyblob to text

for testing purposes, but I don't have the pgloader config file any more, sorry.

All the best,

Uwe






Re: [Bacula-users] Migrating from mariadb to postgresql

2022-09-05 Thread Uwe Schuerkamp
Hi folks,

I've now tried to migrate my mariadb bacula db to postgres using
Wanderlei's scripts linked below. Sadly I end up with the same "hex
value" volume names in the media table. :.( It looks like newer
mariadb / mysql catalogs might require some extra steps to enable a
successful migration to postgresql.

I've tried casting "blob" and "tinyblob" (the mariadb column types for
VolumeName, for example) to "text", but pgloader just hangs when
including those cast statements.

Also, Dima wrote earlier:

> Because it's a hex text, presumably ;)

> This is likely happening when pulling the data out of mysql, not when
> displaying it in bconsole.

> You could try `select encode(Volume, 'hex') from` whatever table it's in, in
> psql. If that looks OK: `update $table set Volume=encode(Volume, 'hex')` would
> be quick fix. Maybe add a guard along the lines of "where Volume like '\x%'"
> or something.

When doing the "select" outlined above I simply end up with the same
volume names minus the leading "\x":

bacula=# select encode(VolumeName, 'hex') from media; 
   encode   

 7a69662d66756c6c2d30303031
 7a69662d696e63722d30303032
 7a69662d696e63722d30303033
 ...
 

All the best, Uwe




On Tue, Aug 30, 2022 at 08:12:53PM -0300, Wanderlei Huttel wrote:
>Hello Uwe
>I've made a script to migrate MySQL/MariaDB to PostgreSQL
>
> [1]https://github.com/wanderleihuttel/bacula-utils/tree/master/convert_mysql_to_postgresql
>I've found errors only in the Log table.
>I've looked for chars with wrong encoding and make an update in MySQL, did
>a dump and import again for PostgreSQL only this table
>Best regards

-- 
Uwe Schürkamp // email: 







Re: [Bacula-users] Migrating from mariadb to postgresql

2022-09-01 Thread Uwe Schuerkamp
On Thu, Sep 01, 2022 at 01:01:31PM +0100, Martin Simmons wrote:

> The volume name above is "zif-incr-0019" if you decode the hex, so it looks
> like you need to add some translation from the various BLOB types to text in
> the pgloader configuration if that is possible.  By default, pgloader converts
> the BLOB types to binary.
> 

Hello Martin et al., 

I just checked the table definition in postgres (as it's created by
bacula's script) and the fields in question are all of type "text" in
postgres, not binary, even after pgloader has imported the mysql data.

I may well be mis-interpreting psql's output, but this is what I
see when I look at the job table for instance:

\d+ job
Table 
"public.job"
 Column  |Type | Collation | Nullable | 
 Default   | Storage  | Stats target | Description 
-+-+---+--++--+--+-
 jobid   | integer |   | not null | 
nextval('job_jobid_seq'::regclass) | plain|  | 
 job | text|   | not null | 
   | extended |  | 
 name| text|   | not null | 
   | extended |  | 


So I'm wondering why "text" would end up displayed as "hex" in bconsole?

Thanks again for your help (and your patience with a postgres noob :-)),

Uwe


-- 
Uwe Schürkamp // email: 







Re: [Bacula-users] Migrating from mariadb to postgresql

2022-09-01 Thread Uwe Schuerkamp
Hi folks,

On Tue, Aug 30, 2022 at 03:53:47PM +0100, Martin Simmons wrote:

> Bacula will probably not work if pgloader created the schema.  I think you
> should do that part with Bacula's make_postgresql_tables script and configure
> pgloader to keep that schema (i.e. the opposite of most of the "WITH" clauses
> that you mentioned originally.
> 

I've now used the bacula provided scripts to create the database,
tables and grants and have reloaded the catalog using this gploader
file:

LOAD DATABASE
FROM mysql://bacula:X@localhost/bacula
INTO postgresql://bacula:X@localhost/bacula ;


I didn't see any errors during the import and postgres-bacula (hehe
:-)) starts up fine. When I issue a "stat dir" in bconsole I'm seeing
the mangled volume names again though:

Scheduled Jobs:
Level  Type Pri  Scheduled  Job Name   Volume
===
Incremental    Backup    10  01-Sep-22 10:45    zif    \x7a69662d696e63722d30303139


The funny thing is that while the "job" name above looks fine, they're
mangled too when I do a "list jobs", same goes for the pool, media and
other objects that can be inspected using the "list" command. 


Bacula no longer complains about the database encoding, so it seems
the script-created database survived the import by pgloader intact.

I'll also have a look at Wanderlei's scripts to see if they work
fine. I managed to import the db using the "INSERT" filtering outlined
above and omitting the JobStatus table, but then bacula was unable to
find any appendable volumes when trying to run a test job.

All the best,

Uwe

-- 
Uwe Schürkamp // email: 






Re: [Bacula-users] Migrating from mariadb to postgresql

2022-08-30 Thread Uwe Schuerkamp
Hello Charles,

thanks for the "INSERT only" idea... it worked partially. After using
sed to remove some double quotes and other characters psql didn't like,
the import runs for a while and then stops with the following error:

INSERT 0 6
ERROR:  duplicate key value violates unique constraint "status_pkey"
DETAIL:  Key (jobstatus)=(A) already exists.


Any idea what could be causing this?

Again thanks for your help!

All the best,

Uwe

-- 
Uwe Schürkamp // email: 






Re: [Bacula-users] Migrating from mariadb to postgresql

2022-08-29 Thread Uwe Schuerkamp
Hello Eric,

thanks much for your reply.

On Thu, Aug 25, 2022 at 03:36:09PM +0200, Eric Bollengier via Bacula-users 
wrote:
> 
> Bacula might have to store characters coming from the different clients
> (filename and
> 
> path mostly), and we have no guarantee that they will be in valid UTF8.
> 

The setup I'm trying to convert is as simple as it gets: a single host
that backs up itself, no other clients are involved.


I've tried using the create_postgres db and table scripts provided by
the bacula install before running pgloader to import the data from
mariadb, but sadly this hasn't helped. The warning in bconsole
disappears, but bacula is still unable to find any volumes from the
old catalog as they're displayed as "\xss850938sdkl" or similar as
described in my previous email.

It might well be that pgloader mangles the db data in some way. Are
there other proven ways to import a mariadb database into postgres
that folks here have experience with?

Thanks,

Uwe


-- 
Uwe Schürkamp // email: 






Re: [Bacula-users] Migrating from mariadb to postgresql

2022-08-29 Thread Uwe Schuerkamp
On Thu, Aug 25, 2022 at 06:57:46PM +0100, Martin Simmons wrote:
> Do you have non-ASCII characters in your volume, job or client names?  If not,
> then I don't see why the warning would cause them to look quite funny
> (whatever that means).
> 
> __Martin

Hi folks,

thanks for your answers. No, I don't have any non-ascii chars in my
client, job or volume names. By "looking funny" I mean that, for
instance, the disk volume name "zif-full-0001" turns into "\x" followed
by a long row of hex characters.

Of course I could start with an empty catalog but it'd be nice to
transfer the existing history to the postgres db which I guess would
(or should, even) be possible somehow. In another experiment I
migrated a mariadb django backend to postgresql on the same machine
without any issues, it's just bacula that's acting up in this case.

All the best,

Uwe

-- 
Uwe Schürkamp // email: 







[Bacula-users] Migrating from mariadb to postgresql

2022-08-25 Thread Uwe Schuerkamp
Hi folks,

I'm trying (for educational purposes) to migrate an existing bacula
catalog to use with a postgres backend (mariadb10 / postgresql12,
Ubuntu 20.04).


I've imported the bacula catalog using pgloader and this config:

LOAD DATABASE
FROM mysql://bacula:X@localhost/bacula
INTO postgresql://bacula:X@localhost/bacula 

WITH include drop, create tables, no truncate,
 create indexes, reset sequences, foreign keys; 


The import works fine, however when connecting to bacula 13.0.1 (compiled
from source for postgres use) I get the following message in bconsole:

25-Aug 14:08 zif-dir JobId 0: Warning: Encoding error for database
"bacula". Wanted SQL_ASCII, got UTF8

Most web search results are ancient, so I'm wondering whether the
SQL_ASCII bit is still a valid requirement. In bconsole, the volume,
job and client names all look quite funny when doing a stat dir, and
the volume isn't found when I run a backup job (probably due to this
encoding issue).


I've also checked the configure script for any options related to the
encoding, but could not find anything, also I've used the provided
create_postgres_database script to (hopefully) initialize the catalog
db in postgres with the correct encoding. 
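For what it's worth, the create script essentially issues a statement along these lines (a sketch; exact options may differ between Bacula versions):

```sql
-- Create the catalog DB with the byte-transparent SQL_ASCII encoding;
-- template0 is required when overriding encoding/locale settings:
CREATE DATABASE bacula ENCODING 'SQL_ASCII'
  LC_COLLATE 'C' LC_CTYPE 'C' TEMPLATE template0;
```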

Is there anything I'm doing wrong? Please excuse my ignorance, as I'm
only now getting my feet wet with postgres, especially with regard to
the mariadb import...

Thanks in advance & all the best,

Uwe







Re: [Bacula-users] Bacula Release 13.0.0

2022-07-14 Thread Uwe Schuerkamp
Hi folks,

I also upgraded our bacula instances (all four of them, using MariaDB
backends) to 13.0 from 11.0.x, no issues so far. Great work!

All the best & thanks,

Uwe

-- 
Uwe Schürkamp // email: 
Arvato Digital, NMM-D1 // Phone: [+49] 5241 - 80 82 423






Re: [Bacula-users] Volume Corruption?

2022-06-08 Thread Uwe Schuerkamp
First of all, I'd check whether you're actually suffering from file
system issues; look at "dmesg", for example, for any problems writing
to a disk volume, timeouts or similar.

Once you are sure your FS / disks are ok, you can use the "purge"
command in bconsole and bacula should recycle those affected volumes
automatically whenever the next backup job requires one.

It's also possible to use the

update volume="" volstatus="used"

or similar to force bacula to set a new volume status.

But again in closing, first of all ensure that your filesystem is ok.

All the best,

Uwe 






Re: [Bacula-users] Why isn't bacula recycling these volumes?

2021-12-17 Thread Uwe Schuerkamp
Hello Larry,

did you remember to "update pools" after you made the changes to the
pool definition (maximum volumes) or did you include those from the
get-go? Taking a look at the pool definition in the DB itself might
shed some light on the issue as well.

What happens if you manually prune / purge a volume in one of those
pools, does bacula use one of those for the next backup job then?

All the best,

Uwe




Re: [Bacula-users] Query failed: ERROR: relation "filename" does not exist

2021-12-14 Thread Uwe Schuerkamp
Hi folks,

I think it's a problem with the query file. Something changed in 11.x
that broke the standard queries I've been using since 5.x or
thereabouts. Maybe you can check the mailing list archives for earlier
posts w/r to that issue, I seem to recall someone posted a patch. 



All the best,

Uwe




Re: [Bacula-users] Restore to cifs-mounted share fails to restore timestamps

2021-11-03 Thread Uwe Schuerkamp
On Tue, Nov 02, 2021 at 05:16:57PM +, Martin Simmons wrote:

> Maybe, but I think the recommended way to back up a Windows fileserver is to
> run the Windows bacula-fd on it directly, instead of trying to back up the
> share from Linux.
> 
> __Martin

Hello Martin,

sorry for being unclear about this setup, but the original files live
on some sort of netapp device exported using a samba share.

All the best,

Uwe

-- 
Uwe Schürkamp | email: 





Re: [Bacula-users] dynamic fileset and /usr/global

2021-11-03 Thread Uwe Schuerkamp
On Tue, Nov 02, 2021 at 02:07:26PM +, Bill Arlofski via Bacula-users wrote:
> On 11/1/21 21:57, Shaligram Bhagat, Yateen (Nokia - IN/Bangalore) wrote:
> > Hi,
> >
> > We are using Bacula 9.4.4 on Centos.
> >
> > We use dynamic fileset using a script executed on the client.  Also there 
> > is a clientRunBeforeJob script.
> >
> > But we found that the dynamic fileset script is executed first and then the 
> >  clientRunBeforeJob script.
> >
> > We need to have it the other way round.
> >
> > How to accomplish it ?

A workaround would be to create the fileset in your "RunBeforeJob"
script; this way you can order the tasks as you need them.

We use a few "dynamic" filesets in our setup too, but they're only
dynamic in as far as they point to static files on the client
containing the files and directories to back up.

These files get created dynamically and are then included in the
client's fileset using the "< /var/tmp/fileset.txt" mechanism.
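A minimal FileSet sketch of that mechanism (all names are hypothetical, and the "<" file-list syntax is the one described above):

```
FileSet {
  Name = "dynamic-example"
  Include {
    Options { signature = MD5 }
    # Read the list of paths from a file that the client's
    # RunBeforeJob script (re)generates before each run:
    File = "</var/tmp/fileset.txt"
  }
}
```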

All the best,

Uwe

-- 
Uwe Schürkamp | email: 





Re: [Bacula-users] Restore to cifs-mounted share fails to restore timestamps

2021-11-02 Thread Uwe Schuerkamp
__Martin wrote:

> It might be useful to see if /bin/touch can set these times correctly
> on the restored file.  That would clarify if it is a bug in
> Bacula.


Apparently "touch" can set the access & modification time but there's
no way to set the "changed" timestamp. 
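This is easy to reproduce locally (GNU coreutils assumed): touch can set atime and mtime, but ctime is always bumped to the current time by the kernel on any metadata change, including the touch itself:

```shell
# touch sets atime/mtime to the requested date, but ctime ("changed")
# is updated to "now" by the kernel and cannot be set from userspace.
f=$(mktemp)
touch -a -m -d '2017-05-14 15:40:46' "$f"
mt=$(stat -c '%y' "$f")   # mtime: the date we just set
ct=$(stat -c '%z' "$f")   # ctime: still the current time
echo "mtime=$mt"
rm -f "$f"
```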

So would you consider this a "proper" bug then that bacula fails to
set these on restored files?

Thanks & all the best,

Uwe






Re: [Bacula-users] Restore to cifs-mounted share fails to restore timestamps

2021-11-02 Thread Uwe Schuerkamp
Hello Heitor,

thanks for your reply!


> I read somewhere that CIFS' ACLs are not supported from a Linux mount point.
> Maybe is that the cause of your problem?


I don't think these three items that "stat" shows are part of the ACLs, but of 
course I could be wrong.

I checked the "File" table catalog structure and I would have hoped that the 
timestamps might be part of what bacula stores in the "LStat" blob: 

LStat | tinyblob

All the best,

Uwe




[Bacula-users] Restore to cifs-mounted share fails to restore timestamps

2021-11-02 Thread Uwe Schuerkamp
Hi folks,

I'm not sure if this is a bug or a feature, but here goes (bacula 11
compiled from source on CentOS Linux):

Restoring a couple of test files from an "accurate" backup of a
windows share onto a different samba / cifs share (windows server OS)
fails to restore the access, modified etc. timestamps on the restored
files. This might have to do with the way the FS is handled by
windows, but I'd just like to ensure that bacula can handle these
cases correctly.

Original files (also located on a windows share and backed up by a
Linux bacula instance):

# stat 2097188.pdf
  File: `2097188.pdf'
  Size: 85428   Blocks: 168   IO Block: 16384  regular file
Device: 14h/20d Inode: 5977612 Links: 1
Access: (0555/-r-xr-xr-x)  Uid: (0/root)   Gid: (0/root)
Access: 2025-12-31 00:00:00.0 +0100
Modify: 2017-05-14 15:40:46.137837000 +0200
Change: 2017-09-16 01:04:56.0 +0200

Restored file:

# stat /mnt/netapp/restore_test/2097188.pdf 
  File: `/mnt/netapp/restore_test/2097188.pdf'
  Size: 85428   Blocks: 168   IO Block: 16384  regular file
Device: 15h/21d Inode: 5977446 Links: 1
Access: (0555/-r-xr-xr-x)  Uid: (0/root)   Gid: (0/root)
Access: 2021-11-02 12:07:57.0 +0100
Modify: 2021-11-02 12:08:09.029652000 +0100
Change: 2021-11-02 12:07:56.0 +0100

Thanks in advance & all the best,

Uwe

-- 
Uwe Schürkamp | email: 




Re: [Bacula-users] Deadweight volumes, how to discard?

2021-08-09 Thread Uwe Schuerkamp
Hello there,

I often use mysql directly on the bacula catalog to check for old
volumes that haven't been used for a while for whatever reason. It's
quite simple if you take a look at the "Media" table structure:

# Select all volumes where LastWritten is older than Jan 1st, 2021:

echo 'select VolumeName, LastWritten, VolBytes from Media where LastWritten < 
"2021-01-01";' | mysql -pXXX bacula

VolumeName  LastWritten VolBytes
client0298-0206 2020-12-31 10:37:13 45526845926


You can then pipe that through awk or some other tool to create a bconsole 
"script":

 for v in $(echo 'select VolumeName, LastWritten, VolBytes from Media where 
LastWritten < "2021-01-01";' | mysql -s -p bacula | awk '{print 
$1}'  ) ; do echo purge volume="$v"; done 


purge volume=client0298-0206


and then pipe that directly to bacula / bconsole if you're feeling adventurous 
like so:

( for v in $(echo 'select VolumeName, LastWritten, VolBytes from Media where 
LastWritten < "2021-01-01";' | mysql -s -p bacula | awk '{print 
$1}'  ) ; do echo purge volume="$v"; done ) | bconsole

That's most likely not the most elegant scripting you've ever seen,
but it works and will let you review the consequences of your actions
before actually committing them.

You can also use the sql statement above to restrict the search to
volumes of a certain name pattern ("AND VolumeName LIKE 'OFFLINE%'" for 
instance)
or that are in a certain pool and so on.
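The same pipeline works for any such selection. As a self-contained sketch, with a here-string standing in for the mysql query output:

```shell
# Build bconsole purge commands from a list of volume names; the list
# below is a stand-in for the output of the mysql query shown above.
vols="client0298-0206
client0298-0207"
cmds=$(for v in $vols; do echo "purge volume=$v"; done)
printf '%s\n' "$cmds"
# Review first, then pipe to bconsole:  printf '%s\n' "$cmds" | bconsole
```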


All the best,

Uwe 



-- 
Uwe Schürkamp | email: 




[Bacula-users] Trying to update bacula from 7.4 to 9.6 on ancient CentOS6 system fails (segfault in director)

2021-07-29 Thread Uwe Schuerkamp
Hi folks,

I know CentOS6 isn't exactly "hot off the press" anymore, but I have a
legacy system that cannot be updated for various reasons.

In order to backup the replacement system which runs bacula client
9.6.x I tried to update the source-compiled version of the bacula
server to 9.6.7. Compilation works fine, so does updating the catalog
on the centos6 machine, but as soon as I try to run the director it
segfaults:

[Thread debugging using libthread_db enabled]
0x2af83ee5833e in waitpid () from /lib64/libpthread.so.0
$1 = "29-Jul-2021 11:07:06\000\000\000\000\000\000\000\000\000"
$2 = '\000' 
$3 = 0x70f058 "bacula-dir"
$4 = 0x70f098 "/server/bacula-9.6.7/sbin/bacula-dir"
$5 = 0x0
$6 = '\000' 
$7 = 0x2af83ec343b8 "9.6.7 (10 December 2020)"
$8 = 0x2af83ec343d9 "x86_64-pc-linux-gnu"
$9 = 0x2af83ec343ed "redhat"
$10 = 0x2af83ec3406a ""
$11 = "stage", '\000' 
$12 = 0x2af83ec343d1 "redhat "
Environment variable "TestName" not defined.
#0  0x2af83ee5833e in waitpid () from /lib64/libpthread.so.0
#1  0x2af83ec1b840 in signal_handler (sig=11) at signal.c:233
#2  <signal handler called>
#3  0x2af83e4f725e in mysql_server_init () from 
/server/bacula-9.6.7/lib/libbaccats-9.6.7.so
#4  0x2af83e4f10e7 in mysql_init () from 
/server/bacula-9.6.7/lib/libbaccats-9.6.7.so
#5  0x2af83e4edb14 in BDB_MYSQL::bdb_open_database (this=0x73c408, jcr=0x0) 
at mysql.c:223
#6  0x0040ebfa in check_catalog (mode=UPDATE_AND_FIX) at dird.c:1272
#7  0x0041178f in main (argc=, argv=) at dird.c:340

Thread 1 (Thread 0x2af8417a1880 (LWP 22245)):
#0  0x2af83ee5833e in waitpid () from /lib64/libpthread.so.0
#1  0x2af83ec1b840 in signal_handler (sig=11) at signal.c:233
#2  <signal handler called>
#3  0x2af83e4f725e in mysql_server_init () from 
/server/bacula-9.6.7/lib/libbaccats-9.6.7.so
#4  0x2af83e4f10e7 in mysql_init () from 
/server/bacula-9.6.7/lib/libbaccats-9.6.7.so
#5  0x2af83e4edb14 in BDB_MYSQL::bdb_open_database (this=0x73c408, jcr=0x0) 
at mysql.c:223
#6  0x0040ebfa in check_catalog (mode=UPDATE_AND_FIX) at dird.c:1272
#7  0x0041178f in main (argc=, argv=) at dird.c:340
#0  0x2af83ee5833e in waitpid () from /lib64/libpthread.so.0
No symbol table info available.
#1  0x2af83ec1b840 in signal_handler (sig=11) at signal.c:233
233      waitpid(pid, &status, 0);   /* wait for child to produce dump */
sigdefault = {__sigaction_handler = {sa_handler = 0, sa_sigaction = 0}, sa_mask 
= {__val = {18446744067267100671, 18446744073709551615 }}, 
sa_flags = 0, sa_restorer = 0x7ffde5ffa830}
argv = {0x0, 0x0, 0x0, 0x0, 0x0}
pid_buf = "22245", '\000' 

The error seems to be related to opening the mysql catalog database,
but nothing has changed on the server in this regard (again, it's an
old centos6 server).

I configured bacula with the following params:

./configure --prefix=/server/bacula-9.6.7/ --with-mysql -disable-conio 
--with-readline=/usr

Thanks in advance for any info on this,

Uwe






Re: [Bacula-users] Which OS is best for bacula ?

2021-07-13 Thread Uwe Schuerkamp
On Mon, Jul 12, 2021 at 06:05:49PM +0200, Kern Sibbald wrote:
> Hello again,
> 
> I can see bacula installation placement is just as hot a topic as it has
> always been.  However, I sense a trend toward accepting the /opt/bacula
> concept.  I do wish we could convince a distro such as Debian, but they have
> their point of view, and I for one do not plan to argue with them.

Dear all,

we've been using bacula since version 3.x or thereabouts (maybe even
earlier but my memory might be fading here cause it's been a really
long time :-)) on a wild variety of platforms and I've never even
bothered with the prepackaged stuff and always compiled bacula from
source. It doesn't have that many dependencies, you can choose to keep
all your files / eggs in one directory / basket, and upgrades between
minor versions (usually involving no DB / catalog upgrade) are a
breeze, as this lets you keep as many versions around alongside each
other as you deem fit or prudent. Also, this tends to keep your
installation lean and small, with only the stuff compiled in that you
really need, as it doesn't pull in the dependencies of a "full blown"
bacula installation that might be required by a prepackaged version
from your OS vendor of choice.

We install bacula into /server/bacula-x.yz using a --configure
argument, then softlink the currently "active" installation to
/server/bacula.

Following the "make install", I simply rsync the var directory and the
conf and query.sql files in etc over to the new install and that's
it. As our init / systemd scripts only reference "/server/bacula", the
changes are transparent to them.


Should things go pear-shaped with the new install we can simply point
/server/bacula to the previous version, no harm done. Also it makes a
bare-metal recovery quite uncomplicated as you can simply copy back
the active bacula install to a newly installed "fresh" backup machine,
set up the catalog and you're done (we keep a few copies of the
"/server/bacula-x.yz" directories and catalog dumps from our bacula
servers around on servers all around our various networks in a sort of
"cross-site" redundancy setup).
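The scheme above can be sketched roughly like this (relative paths and version numbers are illustrative; the real layout lives under /server, and the build/rsync steps are shown as comments since they need a source tree):

```shell
# Build each release into its own prefix:
#   ./configure --prefix=/server/bacula-9.6.7 && make && make install
mkdir -p server/bacula-9.6.5 server/bacula-9.6.7   # stand-ins for two installs

# Carry state and configs over from the previous version:
#   rsync -a /server/bacula-9.6.5/var/ /server/bacula-9.6.7/var/
#   rsync -a /server/bacula-9.6.5/etc/*.conf /server/bacula-9.6.5/etc/query.sql \
#            /server/bacula-9.6.7/etc/

# Flip the "active" symlink; init/systemd scripts only reference server/bacula.
ln -sfn bacula-9.6.7 server/bacula

# Rollback is just pointing the link back at the previous install:
ln -sfn bacula-9.6.5 server/bacula
```

The symlink flip is atomic from the point of view of the init scripts, which is what makes the rollback painless.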


This might not be the most elegant solution but it's worked great for
the decades we've been using bacula now.

Thanks Kern & team and all the contributors to bacula for your hard
work over the many years, it's much appreciated!

All the best,

Uwe


-- 
Uwe Schürkamp | email: 
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038





Re: [Bacula-users] Bacula 11 - where's my Filename table? :-)

2021-06-16 Thread Uwe Schuerkamp
Thanks for the quick help, folks! 

All the best,

Uwe




[Bacula-users] Bacula 11 - where's my Filename table? :-)

2021-06-15 Thread Uwe Schuerkamp
Hi folks,

I just started upgrading a few instances from 9.6.x to 11.0.5 compiled
from source. Following the db upgrade and trying out a "list files for
a selected jobid" query, I noted an error about the query referencing
the no-longer-existing table "Filename". I'm pretty sure that's noted
in the release notes somewhere, but I didn't see any messages here so
I thought I'd better make sure that the upgrade process went smoothly.

The backups & restores are running fine on the version, it's just that
both bweb / bacula web and my old query.sql files are borked (I
usually copy over the query.sql file from my previous version).

Thanks in advance & all the best,

Uwe

-- 
Uwe Schürkamp | email: 





Re: [Bacula-users] Error on despooling attributes in catreq.c: wanted xxx bytes, maximum permitted 10000000 bytes

2021-02-23 Thread Uwe Schuerkamp
On Tue, Feb 23, 2021 at 05:52:02PM +0100, Josip Deanovic wrote:
> On Monday 2021-02-22 16:06:42 David Brodbeck wrote:
> > On Wed, Feb 17, 2021 at 6:12 AM Josip Deanovic
> > 
> > wrote:
> > > It's interesting that the job had almost 2GB of attributes to
> > > despool. That's quite a large amount of attributes.
> > 
> > I have a few jobs that despool that much, and one that despools over 18
> > GB.  The machine in question has 51 million files.
> 
> Just to be sure... Are you talking about the attribute spool file that
> gets created in running directory and not about the spool file that gets 
> created in the spool directory?
> 

I'd suspect he is talking about the attribute spool file. The size
sounds about right (1.9 GB on my end for around 5 million files, 19 GB
for his attribute spool with ten times as many files in the backup job).
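That per-file figure is easy to sanity-check with shell arithmetic (numbers rounded from the sizes quoted above):

```shell
# ~1.9 GB of spooled attributes across ~5 million files:
echo $((1900000000 / 5000000))   # prints 380, i.e. a few hundred bytes per file
```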

I'm just wondering why David's job appears to work fine without the
modification of the attribute spool size parameter in catreq.c outlined
above... then again, our job used to work for years without any issues;
things only started to go wrong on bacula 9.6.5 or thereabouts.

All the best,

Uwe



-- 
Uwe Schürkamp | email: 
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038










Re: [Bacula-users] Bacula 9.4.x - Tape LTO-08 - comliant

2021-02-19 Thread Uwe Schuerkamp
On Fri, Feb 19, 2021 at 09:19:15AM +0100, mau...@gmx.ch wrote:
> Hello
> 
> Please a tape loader Tandberg LTO-8 or every existing LTO-8 Tape loader, are
> this supported from Bacula Version 9.4
> 
> Thanks
> 
> 
> 

I'm pretty sure that if your operating system (Linux?) sees and can address the 
hardware, then so can bacula.

All the best,

Uwe


-- 
Uwe Schürkamp | email: 
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038











Re: [Bacula-users] Error on despooling attributes in catreq.c: wanted xxx bytes, maximum permitted 10000000 bytes

2021-02-18 Thread Uwe Schuerkamp
On Wed, Feb 17, 2021 at 07:17:17PM +, Martin Simmons wrote:
> > On Wed, 17 Feb 2021 15:11:30 +0100, Josip Deanovic said:
> > 
> > I am not sure if 5M files and directories could account for the
> > attribute spool file of 1.8GB in size.
> 
> That is ~400 bytes per file, which is reasonable if the filenames are longish.
> 
> __Martin
> 

Yep, and given that it's a windows fileserver some very quirky names
are probably par for the course there :-)

I'm just wondering why I'm seeing this error at all: given today's
disk sizes, I'm certain there are many backup jobs of a similar
proportion run by bacula daily all over the world, and ours ran fine
for a long time, too, before I saw this issue for the first time.

All the best,

Uwe 



-- 
Uwe Schürkamp | email: 
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038










Re: [Bacula-users] Error on despooling attributes in catreq.c: wanted xxx bytes, maximum permitted 10000000 bytes

2021-02-16 Thread Uwe Schuerkamp
Hi all,

On Tue, Feb 16, 2021 at 12:32:18PM +, Martin Simmons wrote:

> Why 3x?  The value in the message is almost 197x the original, but is almost
> certainly a junk value anyway.
> 

sorry, I meant the value that bacula complained about not being able
to allocate upped to 30... instead of 19..., I hope that clears things
up.

> If you have the job log then it would be useful to see it (at least the line
> that contains "Sending spooled attrs to the Director..." which shows number of
> bytes).
> 

Here's the job log entry:
##
14-Feb 11:21 deniol2199-sd JobId 61332: Sending spooled attrs to the Director.
Despooling 1,856,339,469 bytes ...
14-Feb 11:29 deniol2199-dir JobId 61332: Fatal error: catreq.c:762 fread attr 
spool error. Wanted 1969368434 bytes, maximum permitted 10000000 bytes
14-Feb 11:29 deniol2199-dir JobId 61332: Error: Bacula deniol2199-dir 9.6.5 (11 
Jun20):
  Build OS:   x86_64-pc-linux-gnu redhat
##


> 
> Also check the space in the Storage daemon's WorkingDirectory as Eric said.

The storage daemon runs on the same system as the director.

All the best & thanks for your help, 

Uwe





Re: [Bacula-users] Error on despooling attributes in catreq.c: wanted xxx bytes, maximum permitted 10000000 bytes

2021-02-16 Thread Uwe Schuerkamp
Hi folks,

thanks for all your suggestions. I compiled 9.6.7 from source and
"patched" catreq.c to increase the maximum attribute spool size to 3x
the original value.

Sadly I don't have the time to debug this error on a deeper level at
the moment; disk space is plenty on the director (around 400GB free),
so I doubt it's an issue with bacula's "var" directory filling up.

I'll observe the behaviour during next weekend's "full" backup and
report back.

All the best,

Uwe 

-- 
Uwe Schürkamp | email: 










[Bacula-users] Error on despooling attributes in catreq.c: wanted xxx bytes, maximum permitted 10000000 bytes

2021-02-15 Thread Uwe Schuerkamp
Hi folks,

during a largish backup job (3.8TB windows fileserver, about 5m files
& directories) I'm seeing this error when the attributes are being
despooled once the job completes:

Fatal error: catreq.c:762 fread attr spool error. Wanted 1969368434
bytes, maximum permitted 10000000 bytes

I've searched the mailing list and the documentation for a possible
variable related to attribute spooling, but I can only find options
that regulate disk / job spooling.

I'd be most grateful if someone could direct me to the correct
configuration parameter.[1]

We're using bacula 9.6.5 compiled from source on CentOS7.

Thanks in advance & all the best, 

Uwe

[1] I'd also have no problem with patching catreq.c and recompiling,
but hopefully there's a supported way of adjusting this variable.

-- 
Uwe Schürkamp | email: 
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038










Re: [Bacula-users] Bacula fails to purge volume, backup hangs with "waiting for appendable volume"

2021-01-12 Thread Uwe Schuerkamp
On Fri, Jan 08, 2021 at 12:36:58PM +0100, Eric Bollengier via Bacula-users 
wrote:
> 
> I would recommend to check the volume content on the catalog side (JobMedia
> 
> mostly), I think that I have seen and fixed a similar issue few months ago.
> (in 11.0) It
> 
> was a loop, the volume was not purged because some orphan jobmedia still
> present
> 
> were blocking the recycling, and the volume was always selected by the Purge
> Oldest algorithm.
> 
> Hope it helps!
> 
> Best Regards,
> 
> Eric

Hello Eric,

thanks for your comment, it's much appreciated. I've now purged the
volume in question manually and the error hasn't reappeared, but I
guess I'll have to wait a few weeks to ensure the problem is really
"fixed" as we're running on a weekly schedule (keeping three fulls and
eight incrementals in two separate pools).


I've also updated the instance to version 9.6.7 (from source) which
had the problem once with a different client, but so far it's been
smooth sailing.

All the best,

Uwe 


-- 
Uwe Schürkamp | email: 
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038
Geschäftsführer: Ralf Schürmann | Dr. Manfred Heinen










[Bacula-users] Bacula fails to purge volume, backup hangs with "waiting for appendable volume"

2021-01-08 Thread Uwe Schuerkamp
Hi folks,

I'm experiencing a weird issue with one of our bacula servers (9.6.5
on ubuntu 18.04 compiled from source).

Most of the backups work just fine, however for one client they fail /
hang consistently as bacula fails to complete purging of a volume from the pool
(I use separate storages & pools for fulls and incrementals, both disk-based).

I'm seeing this message in bconsole under these circumstances:

08-Jan 07:13 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 07:18 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 07:23 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 07:28 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 07:33 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 07:38 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 07:43 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 07:48 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 07:53 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 07:58 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 08:03 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 08:08 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 08:13 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 08:18 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 08:23 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 08:28 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 08:33 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 08:38 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 08:43 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 08:48 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 08:53 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 08:58 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 09:03 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"
08-Jan 09:08 director-dir JobId 3988: Purging oldest volume "clientXXX-0284"

The volume isn't too big and purging it manually takes only a few seconds.

When looking at the pool at the time of the hang, the volume status is
still shown as "used" for the volume that's being purged.

Once I manually purge the volume in bconsole that bacula is waiting
for and mount the storage manually, the backup job continues &
finishes normally.

DB backend is 10.1.47-MariaDB-0ubuntu0.18.04.1 Ubuntu 18.04.

I'd be most grateful for any ideas on what could be causing this as
all other backups (about 50 clients or so, all of them disk-based)
work without issues; the same can be said for our other bacula
instances (about 500 clients in total with 3 active directors).


Thanks much in advance for your help & all the best,

Uwe


-- 
Uwe Schürkamp | email: 
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038
Geschäftsführer: Ralf Schürmann | Dr. Manfred Heinen










Re: [Bacula-users] Bacula 9.6.x: "volume use duration" not working as expected

2020-03-23 Thread Uwe Schuerkamp
On Sun, Mar 22, 2020 at 01:47:16PM +0100, kern wrote:
> Hello, That volume is set to a 1 year volume use duration, not one day.
> Possibly you updated the pool but forgot to update existing volumes with
> the new resource. Best regards, Kern. Sent from my Samsung Galaxy smartphone.

Thanks Kern & everybody who replied,

that seems to have done the trick (just got the following console message):

23-Mar 09:05 baculaa-dir JobId 0: Max configured use duration=86,400
sec. exceeded. Marking Volume "TAPE01_13" as Used.


I'll keep an eye on the job and report back.

All the best,

Uwe


-- 
Uwe Schürkamp | email: 










[Bacula-users] Bacula 9.6.x: "volume use duration" not working as expected

2020-03-20 Thread Uwe Schuerkamp
Hi folks,

I have a set of tapes that I'd like to use in a daily rotation to backup online 
disk volumes to tape.

I defined a pool for this like so:

Pool {
  Name = offline_weekly
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 1 year
  Recycle Oldest Volume = yes
  Purge Oldest Volume = yes
  Maximum Volumes = 7
  Volume Use Duration = 1 day
}

Also, I've labelled 7 LTO6 tapes and assigned them to the pool.

The backup works fine, but all the jobs end up on the first tape
(probably until it fills up). I would expect that the "volume use
duration" would cause bacula (9.6.2 compiled from source on Ubuntu
Server LTS 18.04) to load the next tape in the pool after 24 hours (so
when the next offline backup job runs), then the next one and so on
until it reaches tape 1 again, then writing the next offline job on
that volume and so on, until all tapes are full and then it should
start to recycle the oldest volume.

Here's the pool how bacula sees it at the moment (I've also tried
"update pool from resource" without any effect):

+--------+----------------+---------+---------+-------------+--------------+---------+----------+-------------+
| PoolId | Name           | NumVols | MaxVols | MaxVolBytes | VolRetention | Enabled | PoolType | LabelFormat |
+--------+----------------+---------+---------+-------------+--------------+---------+----------+-------------+
|     11 | offline_weekly |       7 |       7 |           0 |   31,536,000 |       1 | Backup   | *           |
+--------+----------------+---------+---------+-------------+--------------+---------+----------+-------------+

And here's what "update pool from resource" shows:

  PoolId:             11
  Name:               offline_weekly
  NumVols:            7
  MaxVols:            7
  UseOnce:            0
  UseCatalog:         1
  AcceptAnyVolume:    0
  CacheRetention:     0
  VolRetention:       31,536,000
  VolUseDuration:     86,400
  MaxVolJobs:         0
  MaxVolFiles:        0
  MaxVolBytes:        0
  AutoPrune:          1
  Recycle:            1
  ActionOnPurge:      0
  PoolType:           Backup
  LabelType:          0
  LabelFormat:        *
  Enabled:            1
  ScratchPoolId:      0
  RecyclePoolId:      0
  NextPoolId:         0
  MigrationHighBytes: 0
  MigrationLowBytes:  0
  MigrationTime:      0

and here's an example output of a "list jobs on volume" query for the one tape 
that is currently in active use by bacula in the pool above:

+-------+---------------+---------------------+------+-------+-------+----------------+--------+
| JobId | Name          | StartTime           | Type | Level | Files | Bytes          | Status |
+-------+---------------+---------------------+------+-------+-------+----------------+--------+
|   417 | offline_daily | 2020-03-17 14:08:21 | B    | F     | 4,167 | 33,078,435,599 | T      |
|   422 | offline_daily | 2020-03-18 06:21:34 | B    | F     | 4,167 | 33,155,008,843 | T      |
|   430 | offline_daily | 2020-03-19 06:21:33 | B    | F     | 4,324 | 38,389,879,305 | T      |
|   437 | offline_daily | 2020-03-20 04:01:32 | B    | F     | 4,328 | 41,592,356,545 | T      |
+-------+---------------+---------------------+------+-------+-------+----------------+--------+

Am I misunderstanding something about "volume use duration" here?

Thanks in advance for your help & all the best,

Uwe





-- 
Uwe Schürkamp | email: 











Re: [Bacula-users] FD / SD Traffic question

2020-01-30 Thread Uwe Schuerkamp
Thanks Heitor!

All the best,

Uwe

-- 
Uwe Schürkamp | email: 











[Bacula-users] FD / SD Traffic question

2020-01-30 Thread Uwe Schuerkamp
Hi folks,

just a quick question about traffic flow within bacula:

If I configure a 2nd storage daemon on a separate network which
includes a few clients and then run a backup of said client on the
2nd storage daemon, the traffic will not involve the director except for
backup job details (files, attributes etc), correct?

I'm asking because I'd like to use a single director for two networks
separated by firewall with a 1 gigabit connection which would be
clogged up if I backup the other network's client to the local
director / storage daemon.
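For illustration, a second storage daemon is declared to the director with an extra Storage resource in bacula-dir.conf; the names, address, and password below are invented, and the directive names are those of the standard director Storage resource:

```
Storage {
  Name = sd2-file                 # hypothetical name
  Address = sd2.example.net       # must be reachable by the clients, too
  SDPort = 9103
  Password = "sd2-secret"         # must match the remote bacula-sd.conf
  Device = FileStorage
  Media Type = File2              # distinct from the first SD's media type
}
```

Jobs (or pools) that reference this Storage then ship their bulk data client-to-SD inside the second network.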

Thanks in advance & all the best,

Uwe

-- 
Uwe Schürkamp | email: 










Re: [Bacula-users] deleting data written to volume

2019-07-25 Thread Uwe Schuerkamp
On Thu, Jul 25, 2019 at 01:37:26PM +0100, Adam Weremczuk wrote:
> On 25/07/19 13:15, Uwe Schuerkamp wrote:
> 
> > no, I don't think so. If you run parallel Jobs the data will be interleaved 
> > on the tape, and a tape being a seq. access medium you are out of luck here.
> How about if I set:
> 
> Maximum Concurrent Jobs = 1
> 
> Then all jobs will be written on continuous tape segments.
> 
> Just like separate tracks on old audio cassettes or movies on long VHS
> tapes.
> 
> I can imagine performance suffering significantly.
> 
> In this case would it still be impossible to individually manage jobs data?


This still won't help in deleting a single job from the tape to "make room". 
I'm afraid that's not how tapes work :)

All the best,

Uwe



-- 
Uwe Schürkamp | email: 











Re: [Bacula-users] deleting data written to volume

2019-07-25 Thread Uwe Schuerkamp
On Thu, Jul 25, 2019 at 01:00:35PM +0100, Adam Weremczuk wrote:
> Hi Uwe,
> 
> I don't want to wipe the whole tape.
> Only data from one specific full backup job.
> I want to write another full backup of the same client to this tape.
> I don't have enough space for two.
> I want all other backups of other clients present on this tape to remain
> intact.
> Is it at all possible with LTO-4 and Bacula?
> 
> Thanks,
> Adam
> 
> 

Hello Adam,

no, I don't think so. If you run parallel jobs, the data will be interleaved on 
the tape, and since a tape is a sequential-access medium you are out of luck here.

All the best, Uwe

-- 
Uwe Schürkamp | email: 











Re: [Bacula-users] deleting data written to volume

2019-07-25 Thread Uwe Schuerkamp
Hello Adam,

if you don't mind having to re-label the volume afterwards, a

delete volume=<volume-name>

in bconsole, followed by

mt -f /dev/st0 rewind   # insert your tape device here
mt -f /dev/st0 weof     # write an EOF mark at the start of the tape

should reliably wipe your tape.

All the best,

Uwe

On Thu, Jul 25, 2019 at 11:54:35AM +0100, Adam Weremczuk wrote:
> Hi all,
> 
> Is it possible with Bacula tools to delete data written to a volume?
> 
> Bacula console commands "delete" and "purge" only seem to affect the catalog
> database.
> 
> For storage I'm using LTO-4 tapes which unfortunately don't support LTFS
> yet.
> 
> I think ideally I would like tape space to be freed up and I'm ok with the
> job still being listed.
> 
> As long as Bacula is aware of the situation and e.g. doesn't try to access
> the deleted data later.
> 
> Obliterating both the data and the job info as if they've never existed is
> also fine.
> 
> Please advise.
> 
> Thanks,
> Adam
> 




-- 
Uwe Schürkamp | email: 
Senior Service Manager | Phone: [+49] 5241 - 80 82 423
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038
Geschäftsführer: Ralf Schürmann | Dr. Manfred Heinen










Re: [Bacula-users] Is it possible to tell bacula to not run a job for the next week?

2019-03-01 Thread Uwe Schuerkamp
On Fri, 1 Mar 2019 at 16:05, byron  wrote:
>
> I have 10 jobs that run every night and write to the same pool of tapes.
>
> Tonight is the night they run their monthly full backups but I am short on 
> tapes.  I'd like to put a hold on running some of the lower priority jobs to 
> allow the others to run all the way through to completion.  Then next week 
> when I add in the new tapes I'll restart the remaining jobs that were put on 
> hold.
>
> Is there anyway to do this through bconsole or do I need to remove the jobs 
> that I don't want to run from the configuration files and add them back in 
> next week?
>

Hey Byron, take a look at the

  disable job="<job-name>"

command in bconsole; the scheduler will then skip the job until you
enable it again (a console-side disable does not survive a director
restart).


Re: [Bacula-users] Windows backup much slower than Linux backup

2019-03-01 Thread Uwe Schuerkamp
I just checked our installation (direct-to-tape backup, lto5, LAN
gigabit connectivity), and I'm not seeing any significant performance
issues between windows and Linux clients.

The evidence is naturally anecdotal though as several backups are
running concurrently, but I'm not seeing anything out of the ordinary
here.

Example for a rather large windows machine:

Full | 5,252,701 files | 3.31 TB | 2019-02-22 17:00:00 -> 2019-02-23 13:15:21 | 20:15:21 | 47.64 MB/s

Linux machine, different client, same bacula host (9.4.2):

Full | 130,206 files | 2.13 TB | 2019-02-25 21:17:34 -> 2019-02-26 06:16:48 | 08:59:14 | 69.15 MB/s

The windows machine has many small files (file server), so this fact
alone is probably enough to explain the roughly 22 MB/s difference in
throughput.

All the best,

Uwe













Re: [Bacula-users] Long pause with no logging or any activity at all

2019-02-15 Thread Uwe Schuerkamp
On Thu, Feb 14, 2019 at 04:49:40PM +, William Muriithi wrote:
> Hello,
> 
> I have a new bacula setup and for the last two day, has been running the 
> first large scheduled job.  Sometimes early this morning,  it looks like 
> bacula stopped writing to the tape
> 
> I was monitoring it using this command:
> 
> *list volumes

Good morning William,

have you tried the "stat client" command? If you run it a few times you should 
be able to spot if there's still data transfer activity.

All the best,

Uwe




[Bacula-users] Bacula 9.2.1 (community edition) & openstack volumes

2018-11-02 Thread Uwe Schuerkamp
Hi folks,

we're planning to back up some openstack-based cinder volumes (LVM). I
was wondering if bacula 9 already supports openstack and has some
custom ways of dealing with backup volumes (community edition), or
whether we'd need to roll our own in the form of runbeforejob /
runafterjob scripts... any help would be greatly appreciated.

All the best & thanks in advance,

Uwe


-- 
Uwe Schürkamp | email: 










Re: [Bacula-users] Overlapping backups causes two full backups - how to fix?

2018-10-29 Thread Uwe Schuerkamp
On Mon, Oct 29, 2018 at 11:31:33AM +, Philip Pemberton wrote:
> Hi,
> 
> I'm using Bacula to back up a 6TB ZFS array onto LTO, with a fortnightly
> full backup and daily incrementals.
> 
> The problem I have is that the full backup sometimes takes more than 24h to
> run. This means that Bacula starts the next backup while the Full is still
> running, creating two full backups, wasting time and filling the tapes.
> 
> Is there some way I can avoid this? Either skipping an incremental (because
> the full backup is running) or delaying the incremental would be fine.
> 
> I'm also going to be splitting the backup into multiple jobs - is there
> anything I need to do to the database when I modify the configuration?
> 
> Thanks
> Phil


Hi Phil,

take a look at the "cancel duplicate jobs" directive.
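For reference, the knobs behind that live in the Job resource; a sketch with an invented job name (directive names as in the Bacula manual, settings to taste):

```
Job {
  Name = "big-zfs-backup"          # hypothetical
  # ... usual Client / FileSet / Schedule / Storage / Pool lines ...
  Allow Duplicate Jobs = no        # don't let two instances run at once
  Cancel Queued Duplicates = yes   # drop the waiting incremental
  Cancel Running Duplicates = no   # never kill the full already underway
}
```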

All the best,

Uwe




Re: [Bacula-users] Bacula 9.2.1: update pool from resource doesn't work as expected

2018-10-19 Thread Uwe Schuerkamp
Hi folks,

I found the answer to my own question: You do in fact seem to have to
delete the extra volume(s) from the pool in question before you can
resize it.

I used "delete volume" for the pool in question, deleting the oldest
one of the bunch, then did another "update pool from resource" and
then both maxvols and numvols showed the correct values.

All the best,

Uwe

-- 
Uwe Schürkamp | email: 










[Bacula-users] Bacula 9.2.1: update pool from resource doesn't work as expected

2018-10-19 Thread Uwe Schuerkamp
Hi folks,

I need to reduce the number of volumes in a pool from 3 full backups
to just 2 (each client has its own pools for incr. and full backups
with the appropriate number of volumes; MaxVolJobs is set to 1
naturally).

I updated the Maximum volumes parameter in the pool definition
(bacula-dir.conf, changed it from 3 to 2) and did an

update pools from resource 

after restarting bacula.

However the update command output still has "MaxVols" at 3 instead of
the new value, "2".

I also tried updating the pool table (hehe :)) manually using an
update statement in mysql, but after a short while the "MaxVols"
parameter for the pool in question changed back to "3" instead of "2".

I'm 100% certain bacula uses the updated configuration (I checked for
stray bacula processes after etc/bacula stop).

It would be great if someone could shed some light on this. Maybe I
need to delete the oldest volumes from the pool first? However I don't
see how that should affect the MaxVols parameter of the pool
definition.

Details:
CentOS 6
bacula 9.2.1 compiled from source
MariaDB backend


All the best & bye for now,

Uwe
-- 
Uwe Schürkamp | email: 










Re: [Bacula-users] bacula enterprise resellers in EMEA?

2018-10-18 Thread Uwe Schuerkamp
On Thu, Oct 18, 2018 at 12:53:04PM +0300, Eero Volotinen wrote:
> Hi List,
> 
> Looking for company that resells bacula enterprise in eu? any clues.
> 
> br,
> Eero

AFAIK bacula systems is based in Switzerland, maybe you should contact them.

All the best,

Uwe


-- 
Uwe Schürkamp | email: 











Re: [Bacula-users] File relocation

2018-08-24 Thread Uwe Schuerkamp
On Fri, Aug 24, 2018 at 11:33:08AM -0500, Steven Hammond wrote:
> Dumb question: How do I change the file location path during restore?

Once you run the restore job, you'll have a chance to modify its parameters 
using the "mod" command.

Simply direct the restore job to the correct directory that files should be 
restored to and you should be good to go.

The path doesn't need to exist, bacula will create it once the restore job 
starts.
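
A rough sketch of what that exchange looks like in bconsole (the path is a placeholder, and the menu numbering varies between versions):

```text
OK to run? (yes/mod/no): mod
Parameters to modify:
     ...
     9: Where
Select parameter to modify (1-11): 9
Please enter the full path prefix for restore (/ for none): /restore/target
OK to run? (yes/mod/no): yes
```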

All the best, Uwe

-- 
Uwe Schürkamp | email: 
Geschäftsführer: Ralf Schürmann | Dr. Manfred Heinen









Re: [Bacula-users] Wrong "JobId 0: No prior Full backup Job record found." when estimating a remote host job

2018-08-24 Thread Uwe Schuerkamp
On Thu, Aug 23, 2018 at 10:44:40PM +0300, George Anchev via Bacula-users wrote:

> 23-Aug 20:54 pc-dir JobId 0: No prior Full backup Job record found.
> 23-Aug 20:54 pc-dir JobId 0: No prior or suitable Full backup found in 
> catalog. Doing FULL backup

Hello George,

have you checked the mysql catalog to see if there any Job entries in
the "Job" table for the client in question? If so, what do they look
like? Have media types or pools been changed along the way at some
point?
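
If it helps, a query along these lines against the standard catalog schema (the client name is a placeholder) shows roughly what the director sees when it looks for a prior Full:

```sql
-- last 20 jobs for the client, newest first
SELECT JobId, Name, Level, JobStatus, StartTime, PoolId
  FROM Job
 WHERE ClientId = (SELECT ClientId FROM Client WHERE Name = 'pc-fd')
 ORDER BY StartTime DESC
 LIMIT 20;
```

As far as I know, a prior Full only counts if it terminated OK and matches the current FileSet, so a changed fileset or pool would also explain the message.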

I don't think it's an issue with the client being "remote" as I think
the director uses the same method of connecting to the client even if
it's running locally on a single machine.


All the best,

Uwe
-- 
Uwe Schürkamp | email: 
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038










Re: [Bacula-users] any way to "collapse" back to a single director?

2018-08-09 Thread Uwe Schuerkamp
As long as you keep the "old" director running you should be fine. Simply
move the client to the new director once the next "full" is up and remove
it (or disable the job) from the old one. Once you have all your stuff on
the new director, you should be good to go.

If you have ancient tapes and / or online volumes you could always use some
clever bextract / bls hacks to extract the old data on those volumes.

All the best, Uwe


On 9 August 2018 at 17:47, Matthew Arguin  wrote:

> But then you lose the historical stuff?  Was hoping for a way to sort of
> migrate everything from one to another.  I will say that I don’t expect
> that this is doable with out more work than it is worth.
>
>
>
> *From:* Uwe Schuerkamp [mailto:uwe.schuerk...@gmail.com]
> *Sent:* Thursday, August 9, 2018 11:41 AM
> *To:* Matthew Arguin 
> *Cc:* bacula-users@lists.sourceforge.net
> *Subject:* Re: [Bacula-users] any way to "collapse" back to a single
> director?
>
>
>
> I guess the easiest way would be to migrate your clients back to a single
> (new) director one by one. Takes longer, but should be more reliable than
> trying to merge several different catalogues back into a single instance.
>
>
>
> All the best, Uwe
>
>
>
>
>
> On 9 August 2018 at 17:03, Matthew Arguin  wrote:
>
> Looking for a way to (if at a feasible) collapse multiple bacula directors
> in to a single one.  Anyone done this or “seen” this done?
>
>
>
> -matt
>
>
> 
>
>
>


Re: [Bacula-users] any way to "collapse" back to a single director?

2018-08-09 Thread Uwe Schuerkamp
I guess the easiest way would be to migrate your clients back to a single
(new) director one by one. Takes longer, but should be more reliable than
trying to merge several different catalogues back into a single instance.

All the best, Uwe


On 9 August 2018 at 17:03, Matthew Arguin  wrote:

> Looking for a way to (if at a feasible) collapse multiple bacula directors
> in to a single one.  Anyone done this or “seen” this done?
>
>
>
> -matt
>
> 
>
>


Re: [Bacula-users] Incrementals use 170GB on-disk volume storage, but only 60GB when restored

2018-07-10 Thread Uwe Schuerkamp
On Fri, Jul 06, 2018 at 06:53:08PM +0100, Martin Simmons wrote:
> Maybe your fileset causes files to be copied more than once (e.g. by listing
> the same directory via different File= lines)?
> 
> Also, what kind of filesystem are you using for the restored directory?  Maybe
> it is compressing the data on restore (since du reports the disk used, not the
> size of the files)?
> 
> You could try running bls on the volume and use your favourite scripting
> language to add up the file sizes that it prints to see the expected total
> size.
> 
> Also, use bls -j to check that the volume really does contain just one job.
> 
> __Martin
> 


Hi all,

thanks again for your reply. Closer inspection of the fileset in
question indeed showed a directory being listed twice (or, rather, it
was a subdirectory of a dir already included in another "File="
statement).

I've fixed the fileset now and will check the contents of the job once
it completes. 

All the best, Uwe



-- 
Uwe Schürkamp | email: 
Geschäftsführer: Ralf Schürmann | Dr. Manfred Heinen









Re: [Bacula-users] Incrementals use 170GB on-disk volume storage, but only 60GB when restored

2018-07-06 Thread Uwe Schuerkamp
Dear Olivier,

thanks for your reply.

On Fri, Jul 06, 2018 at 08:35:45AM +0200, Olivier Delestre wrote:

> Do you use the Aligned plugin for the SD ?
>  I do not know about your mystery
> keep us in touch

We're not using any plugins, just a plain bacula install compiled from
source. Our online volume storage is an HP MSA dumb disk array
(classical "spinning rust" hds, no SDDs involved at all not even for
catalog db backend storage).

Would it help to include the bextract log? We've tried both methods (running a 
restore job from "bconsole" and using "bextract" to extract the entire 
incremental volume content).

All the best,

Uwe

-- 
Uwe Schürkamp | email: 
Geschäftsführer: Ralf Schürmann | Dr. Manfred Heinen









[Bacula-users] Incrementals use 170GB on-disk volume storage, but only 60GB when restored

2018-07-05 Thread Uwe Schuerkamp
Dear bacula list,

I'm having a weird problem with incremental backups of one of our machines.

We do an online (to disk backup) compressed incremental backup into
single volumes (one volume per incremental job, 10 volumes recycled
automatically) and everything appears to work fine.

However, the on-disk usage of the volume is a whopping 170GB, but when
I do a complete restore of this incremental job the restored directory
occupies only around 60GB of diskspace.

Here's a bacula web statistic from one of the incremental jobs:

460031 Incr 1920 170.45 GB 2018-06-14 16:52:11 2018-06-14 18:52:32
02:00:21 24.17 MB/s 0.05

And here's the "du -sh" report on the fully restored incremental ("mark
*"  in "/") job listed above:

bacula_restore]# du -sh .
96G .

We don't get any error messages during restore except for two tomcat
logfiles which changed in size during the backup, but those are only a
few hundred MB and don't account for the huge difference in disk space
usage we're experiencing here.

Also, the restore logs shows some messages in the form of

"file  has been deleted",

but those are also mostly files that are nowhere near large enough to
explain the discrepancy in size. We've also tried removing the
"accurate" flag from the job in question and reloading the config, but
that hasn't solved the problem, either.


We've been running bacula for over a decade now on six backup server
instances with hundreds of clients and a few dozen TB of archive
space, so I'd not consider myself a bacula newbie in the sense of the
word, but this one has me puzzled... any advice would be greatly
appreciated.

Server version: 9.0.6 compiled from source on centos6
Catalog DB: MariaDB 10.x
Client fd version:   bacula-client-5.0.0-13.el6.x86_64
bacula-common-5.0.0-13.el6.x86_64

Thanks very much in advance for any ideas on the matter & all the best,

Uwe

-- 
Uwe Schürkamp | email: 
Geschäftsführer: Ralf Schürmann | Dr. Manfred Heinen









Re: [Bacula-users] [9.0.0] Disabled Jobs don't show in log, "run"-Command nor in Job table

2017-07-21 Thread Uwe Schuerkamp
On Thu, Jul 20, 2017 at 05:39:56PM +0200, Kern Sibbald wrote:
> This is a feature that was requested by users.
> 
> Jobs that are disabled are no longer listed in the list of jobs available to
> be run.  If you manually run a disabled job, it will still run.
> 
> Best regards,
> 
> Kern
> 
> 

Thanks for the clarification, Kern.

All the best, Uwe

--
Uwe Schürkamp | email: 
Senior Service Manager | Phone: [+49] 5241 - 80 82 423
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038
Geschäftsführer: Ralf Schürmann | Dr. Manfred Heinen









[Bacula-users] [9.0.0] Disabled Jobs don't show in log, "run"-Command nor in Job table

2017-07-20 Thread Uwe Schuerkamp
Hi folks,

I have a few jobs that I run manually from time to time and so I've
set them to "Enabled = False" in their job definition.

It seems that as of 9.0.0, running these jobs no longer shows up in
bacula's log nor in the Job table. Also, they're not listed in the job
list anymore when I simply type "run", however when I type

run job=""

they're still recognized. 

Everything worked fine in 7.4.4, is this a bug or a feature?

All the best & TIA,

Uwe

-- 
Uwe Schürkamp | email: 










Re: [Bacula-users] Backup fails after update to 9.0.0

2017-07-17 Thread Uwe Schuerkamp
On Mon, Jul 17, 2017 at 12:23:34PM +0200, Kern Sibbald wrote:
> Hello Uwe,
> 
> My best guess is that for some reason, your Director may not have lz4
> properly built, so it does not support the new Bacula comm line compression,
> and on the client that failed, it is doing comm line compression.  That is
> typically with version 9.0 what leads to the packet size being too big.
> One way to test this is to explicitly set:
> 
>   CommCompression = no
> 
> In your Client resource.
> 
> You can also try setting the same directive in the bacula-dir.conf for the
> Director resource.
> 
> If you still have problems, I would like to see exactly what Bacula version,
> hardware (32bit/64bit, CPU architecture, ...), and OS the client is running.
> 
> Best regards,
> 
> Kern

Thanks for your reply, Kern.

It turned out I forgot to update one bacula instance which was still
running 7.4.4, so naturally this director got into trouble trying to
communicate with a 9.0.0 instance on the other end.

All the best, Uwe

--
Uwe Schürkamp | email: 
Geschäftsführer: Ralf Schürmann | Dr. Manfred Heinen









[Bacula-users] Backup fails after update to 9.0.0

2017-07-17 Thread Uwe Schuerkamp
Hi folks,

I recently updated our four bacula servers to 9.0.0. Everything went
rather smoothly except for one job which now fails with the following
error message.

17-Jul 11:35 deniolX-dir JobId 0: Fatal error: bsock.c:569 Packet
size=1073741835 too big from "Client: -fd::9102. Terminating connection.

I've compiled 9.0.0 from source (CentOS 6 running on all servers).

Any idea what could be causing this? Many thanks in advance for your
comments & suggestions,

Uwe



-- 
Uwe Schürkamp | email: 
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038









Re: [Bacula-users] Release 9.0. 0 - Suse - ACL Error

2017-07-13 Thread Uwe Schuerkamp
On Thu, Jul 13, 2017 at 06:40:52PM +0100, Mick wrote:
> Hi,
> 
> Thanks for your quick response.
> 
> I removed the 32 bit packages and re-tried but got the same error.
> 
> I'll file this as a bug later today.
> 

Have you tried installing the libattr-devel files as well? Those came
up as a dependency when I compiled 9.0.0 on CentOS 6.

All the best, Uwe

--
Uwe Schürkamp | email: 










Re: [Bacula-users] How to install latest bacula-client on Centos 6.7 and 7

2017-03-09 Thread Uwe Schuerkamp
On Thu, Mar 09, 2017 at 10:59:26AM -0500, Petar Kozić wrote:
> Hi folks,
> 
> I have one question. Does anyone know where can I find Centos 6.7 and
> Centos 7 repo for latest versions of bacula-client.
> I can’t build client on all my instances and development tools for build is
> need much space on disk.
> 
> Or maybe someone have some advice for installation.
> 
> Thank you.

Hello Petar,

if you have a lot of identical centos machines you can use one which
has the dev tools to build the binary (prefix /usr/local, say,
--enable-client-only) and then create a tar archive and copy that over
to your clients (or go the whole nine yards and build an RPM).
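
In shell terms, roughly (version, paths and host names are placeholders, not commands from this thread):

```shell
# on the one machine that has the dev tools, with the source tarball unpacked:
cd bacula-9.x.x
./configure --prefix=/usr/local/bacula --enable-client-only
make && make install
tar czf /tmp/bacula-client.tar.gz -C /usr/local bacula
# then on each identical client:
scp buildhost:/tmp/bacula-client.tar.gz /tmp/
tar xzf /tmp/bacula-client.tar.gz -C /usr/local
```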

All the best, Uwe

> 
> *—*
> 
> *Petar Kozić*



-- 
Uwe Schürkamp | email: 
Senior Service Manager | Phone: [+49] 5241 - 80 82 423
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038
Geschäftsführer: Ralf Schürmann | Dr. Manfred Heinen









Re: [Bacula-users] Sudden Crippling Issues with Bacula.

2017-02-10 Thread Uwe Schuerkamp
On Thu, Feb 09, 2017 at 09:57:54AM -0400, Alejandro M wrote:
> Hello all, for the past few weeks I've had a Bacula deployment running with
> no issues. But for the past few days there has been a few issues thats has
> pretty much rendered my Bacula deployment completely useless.
> 

I'm wondering if you've hit your pool limitations after the "past few
weeks" and something is wrong with auto-recycling the used volumes?

Some more info on your setup (cfg files) would be helpful.

All the best, Uwe

--
Uwe Schürkamp | email: 
Arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038
Geschäftsführer: Ralf Schürmann | Dr. Manfred Heinen









[Bacula-users] stop/resume of a 6TB job fills /tmp-Partition

2017-02-02 Thread Uwe Schuerkamp
Hi folks,

I tried the new "stop / resume" functions on our bacula server for the
first time today (7.4.4 / MariaDB / CentOS6 compiled from source).

While stop seemed to work ok and left the job in an "incomplete" state
after finishing the "spooling attributes" bit, "resume" just sat there
for an hour and quietly filled up the servers /tmp-Partition.

I've now set MariaDB's TMPDIR to a partition with 11TB space, but I
was wondering if there's a rule of thumb to calculate how much space
mysql / mariadb will require for the tmp tables it creates before the
job is able to resume successfully.

A normal "full" is about 8TB and contains some 70,000,000 files,
the "accurate" flag is set.

Thanks in advance for your comments,

Uwe



-- 
Uwe Schürkamp | email: 









Re: [Bacula-users] Move content of a volume to another volume

2017-01-05 Thread Uwe Schuerkamp
On Thu, Jan 05, 2017 at 06:36:10PM +0100, Lukas Hejtmanek wrote:
> On Thu, Jan 05, 2017 at 05:14:38PM +0100, Uwe Schuerkamp wrote:
> > Hm, if you have an error on the tape, how are you going to recover the
> > data off of it? Or are you saying that you have a tape volume in
> > status "Error" within your bacula setup?
> 
> the tape has write error, so I suppose it is still readable..
> 
> -- 
> Lukáš Hejtmánek

If you have enough disk space you could try rescuing the data using dd
or cat and creating a replacement volume like this (tape in /dev/st0):



cat /dev/st0 > /big_file_system/tapedata.dat
mtx unload

mtx load 
mt -f /dev/st0 rewind
mt -f /dev/st0 weof # probably unnecessary for a new tape

cat /big_file_system/tapedata.dat >/dev/st0

I have no idea if this would work, but a verbatim copy of the data
on a new tape should be accepted by bacula just fine.

All the best, Uwe 



-- 
Uwe Schürkamp | email: <uwe.schuerk...@nionex.net>








Re: [Bacula-users] Move content of a volume to another volume

2017-01-05 Thread Uwe Schuerkamp
On Thu, Jan 05, 2017 at 03:19:00PM +0100, Lukas Hejtmanek wrote:
> Hello,
> 
> is there a way in bacula to move all the data from one volume to another
> volume in the same pool? I tried migrate job but it seems to be possible to
> migrate only from one pool to another.
> 
> I just need to move off the data from one tape to replace the tape because of
> error of the tape. But I do not want to change relabel a different tape,
> I would like to replace volume name of the stored data, i.e., just move the
> backup data to a different volume, update db and so on. Is there way?
> 
> -- 
> Lukáš Hejtmánek
> 

Hm, if you have an error on the tape, how are you going to recover the
data off of it? Or are you saying that you have a tape volume in
status "Error" within your bacula setup?

All the best, Uwe


-- 
Uwe Schürkamp | email: 










Re: [Bacula-users] Pruning is taking too long; downside of force update status to recycle.

2016-12-05 Thread Uwe Schuerkamp
Are you seeing any high loads on the server while the pruning job is
running? It looks like the pruning job is stuck in some sort of
loop. Given your machine specs, db backend and catalog size, the job
should be through in an instant.

If you're feeling adventurous you could also try manually purging the
volume.
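
For the archives, the manual route looks roughly like this in bconsole (the volume name is a placeholder; purge ignores retention periods, so use it with care):

```text
*purge volume=Vol-0042                     # drops the catalog records for that volume
*update volume=Vol-0042 volstatus=Recycle  # usually optional -- a purged volume recycles on next use
```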


All the best, Uwe













Re: [Bacula-users] Pruning is taking too long; downside of force update status to recycle.

2016-12-05 Thread Uwe Schuerkamp
On Mon, Dec 05, 2016 at 03:18:45PM +0800, Gi Dot wrote:
> Hello,
> 
> I have this problem with one of my client experiencing pruning of a volume
> that is taking too long (and in the end I ended up recycling it manually by
> updating the volume status). I have googled up on this and from what I
> understand it is mostly due to the database indexing (to be honest I don't
> entirely understand this part).
> 
> My question is, is there any downside or side effect if I were to include a
> script that looks up for Used volume and update it to Recycle before the
> backup runs for the day. I am using this script on another client and
> things are going fine over there, but I'm just worried if there is any
> impact in a long run.
> 
> If anyone would be so kind to explain to me what exactly it means by
> pruning; as in what bacula does when it runs pruning on a volume, it is
> much appreciated as well. I have read somewhere that bacula removes the
> jobs associated  with the volume from the catalog.
> 

Hello Gidot,

some more info could be useful to help you in analyzing your setup
further.

- Hardware specs of the director (assuming all components run on a
single machine)

- Which database are you using for the catalog?

- Amount of RAM available to the DB / backend storage (disks, ssds?)

- Catalog size (file table rows)

- Bacula version

What exactly do you mean by "too long"? Does bacula hit a timeout
during pruning, or report a database error?

All the best,

Uwe


--
Uwe Schürkamp | email: 










[Bacula-users] bacula-sd keeps crashing

2016-10-27 Thread Uwe Schuerkamp
Hi folks,

recently the storage daemon in our largest bacula installation has
started crashing randomly (7.4.0 compiled from source on CentOS 6 64
bit).

Any idea how to track down / debug this? The director usually gets a
timeout receiving data from the SD, like so:

Error: bsock.c:393 Write error sending 60261 bytes to Storage
daemon:deniol186:9103: ERR=Connection timed out
27-Oct 16:58 bacula-fd JobId 326086: Fatal error: backup.c:1024 Network send 
error to SD. ERR=Connection
timed out
27-Oct 16:58 bacula-dir JobId 326086: Error: Director's connection to
SD for this Job was lost.

Both daemons run on the same machine btw, so it's not a firewall /
network connectivity issue I guess (also no iptables / selinux).

Thanks in advance for any ideas! I'll upgrade to 7.4.4 soon but I'd
like to know what's going on if possible.

Uwe


-- 
Uwe Schürkamp | email: 









Re: [Bacula-users] Bacula in the cloud

2016-10-18 Thread Uwe Schuerkamp
Hello Jason,

On Mon, Oct 17, 2016 at 09:37:12PM -0500, Jason Voorhees wrote:
> Hello guys:
> 
> Based on your experience, what alternative do we have for backing up
> information to the cloud preferably using Bacula?
> 

I wrote a script a while ago that runs as a RunAfterJob element which
encrypts (gpg) and copies a full backup of a client (or its disk
volume rather) to an S3 bucket using the aws shell client.

It's still very rudimentary but it does the job nicely when it comes
to keeping a full backup safe (and secure) from a local disaster.
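
In outline, such a RunAfterJob script might look like this (a sketch, not the actual script from this mail; the recipient, paths and bucket name are placeholders):

```shell
#!/bin/sh
# hooked up as e.g. RunAfterJob = "/usr/local/bin/offsite.sh <volume-name>";
# check which job variable substitutions your bacula version supports.
VOL="/backup/volumes/$1"                      # disk volume written by the job
gpg --batch --encrypt -r backup@example.com -o "/tmp/$1.gpg" "$VOL"
aws s3 cp "/tmp/$1.gpg" "s3://example-offsite-bucket/$1.gpg"
rm -f "/tmp/$1.gpg"
```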

I seem to recall "cloud support" (whatever that may mean in today's
buzzword bingo) was announced for Bacula 8.

All the best,

Uwe

-- 
Uwe Schürkamp | email: 










Re: [Bacula-users] Mark files/directories take 28hours

2016-08-26 Thread Uwe Schuerkamp
On Tue, Aug 23, 2016 at 03:52:49PM +, keithb...@yahoo.com wrote:
> Hi there,
> 
> I was trying to restore a huge data to local harddisk from an offline backup 
> harddisk. There are 36,225,746 files and the data size is 1.7TB.
> 
> Steps to restore data:
> 1. Rebuild catalog using bscan
> 2. bconsole > restore > option 3 "Enter list of comma separated JobIds to 
> select" > enter {jobid} > mark folder_to_be_restored
> 

Hi Keith,

how large is your catalog database?

If you have plenty of space you could have tried bextract to extract
the entire volume content, but I don't know if that's feasible without
a proper catalog that reflects the actual volume status.

All the best,

Uwe 










Re: [Bacula-users] Unnecessarily duplicating backups

2016-07-19 Thread Uwe Schuerkamp
On Tue, Jul 19, 2016 at 02:49:01PM +0200, Ian Douglas wrote:
> Hi All
> 
> It seems to me that Bacula makes unnecessary duplicate backups.
> 
> I have
> 1. Daily incremental
> 2. Monthly differential
> 3. Annual full.
> 

Have you looked into "cancel duplicate jobs" and "allow duplicate
jobs"?

All the best, Uwe

--
Uwe Schürkamp | email: 









Re: [Bacula-users] very slow backups, around 5mb/sec

2016-06-22 Thread Uwe Schuerkamp
I think the sqlite backend and the large number of small files are
most likely slowing you down. Isn't sqlite explicitly *not*
recommended for production use in the bacula docs?

Also, using compression won't help with raw backup speed unless you
switch to LZO which enables near disk-speed reads while still
resulting in "acceptable" compression rates for most types of files. 

All the best,

Uwe
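
The switch is a one-liner in the FileSet's Options block (fileset name and path are placeholders):

```conf
FileSet {
  Name = "fast-set"            # placeholder
  Include {
    Options {
      signature = MD5
      compression = LZO        # instead of GZIP: much faster, somewhat larger volumes
    }
    File = /data
  }
}
```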











Re: [Bacula-users] 5 minute email notification interval

2016-06-14 Thread Uwe Schuerkamp
On Tue, Jun 14, 2016 at 03:06:00PM +0200, Kern Sibbald wrote:
> It is probably because the default polling interval has been changed from 30
> mins to 5 mins.  Set the polling interval very long and perhaps the problem
> will go away. If it does, I would be interested to know, because then it
> should be relatively easy to fix.
> 

Thanks Kern, I'll try your suggestion for our next offline backup
which sometimes requires intervention (or it thinks it does due to
operator error ;-))

All the best,

Uwe

--
Uwe Schürkamp | email: 









Re: [Bacula-users] 5 minute email notification interval

2016-06-14 Thread Uwe Schuerkamp
On Mon, Jun 13, 2016 at 03:32:11PM +0200, Radosław Korzeniewski wrote:
> Hello,
> 
> The most valid list for this kind of requests is a bacula-devel mailing
> list not a users one.
> As a Bacula is a community project I propose you to prepare a patch which
> changes this behavior.
> 
> best regards
> 
> 2016-06-10 14:46 GMT+02:00 Andreas Koch  >:
> 
> > Hello all,
> >
> > is there a way to change this back to a saner scheme (exponential back-off,
> > maybe)? When I am out of the office (or asleep) flooding my mailbox every
> > five
> > minutes won't help Bacula in getting its desired tape any sooner ...
> >
> > Best,
> >   Andreas

I'm not sure I agree. The interval was perfectly fine in previous
community versions; now this behaviour has been changed without giving
community users any way to configure the interval to their liking.


All the best, Uwe

--
Uwe Schürkamp | email: 









Re: [Bacula-users] incr job dies with OOM error

2016-05-25 Thread Uwe Schuerkamp
On Tue, May 24, 2016 at 09:30:23PM +0200, Kern Sibbald wrote:
> Hello Uwe,
> 
> I believe this is clearly documented, but when you turn on accurate, the FD
> receives a full list of all the files that are currently backed up -- this
> requires a *lot* of memory.  10-20 million files is already a lot, but 60
> million will probably run out of memory unless you have hundreds of GB of
> memory.
> 
> Best regards,
> Kern
> 

Hi Kern,

thanks for your answer. I had my versioning wrong; it's actually 7.4.0
on both the server and the client (latest community release).

I'm wondering why full "accurate" backups work fine while incrementals
die in the manner described above... I've turned off "accurate" for
now, and the last incremental went through fine.
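For anyone following along, turning accurate mode off is a one-line
change in the Job resource; an abridged sketch (the names are
placeholders, and a real Job resource also needs Type, Client, Pool,
Storage and so on):

```
# Abridged Job sketch: "Accurate = no" keeps the FD from loading
# the full catalog file list into memory on incrementals.
Job {
  Name = "big-fileserver-backup"   # placeholder
  FileSet = "ExampleSet"           # placeholder
  Accurate = no
}
```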

The client machine has 16GB with 64GB on the director.

All the best, Uwe


-- 
Uwe Schürkamp | email: 









[Bacula-users] incr job dies with OOM error

2016-05-24 Thread Uwe Schuerkamp
Hi folks,

I'm trying to run an "accurate" incr backup on a server with about 60 million
files (4TB total). After 50 minutes or so, the fd (7.0.4 compiled from
source) on the client dies with an OOM error:

 May 24 19:45:11 deni kernel: [ 6016] 0  6016  2821721
 27552965430 1620 0 bacula-fd
 May 24 19:45:11 deniol kernel: Out of memory: Kill process 6016
 (bacula-fd) score 583 or sacrifice child
 May 24 19:45:11 deni kernel: Killed process 6016 (bacula-fd)
 total-vm:11286884kB, anon-rss:11021184kB, file-rss:0kB

I've now removed the "accurate" flag from the job and am currently
running another incremental backup, so I was wondering whether this is
a bug in bacula-fd or the expected behaviour on such a large file
system?

Both director and client run 7.0.4 on CentOS7 64bit compiled from
source.

All the best & TIA,

Uwe

-- 
Uwe Schürkamp | email: 










[Bacula-users] Bacula Release 7.4.0

2016-01-20 Thread Uwe Schuerkamp
Thanks for the update, Kern!

I upgraded my first server (out of five) today from 7.2.0, compiling
from source on CentOS 6.x, and so far everything has worked well. I ran
a quick backup & restore job but will wait until after the next full
backups over the weekend before upgrading the other instances.

Keep up the good work & kudos to everyone in the community and at
Bacula Systems for providing us with such a great, free backup
solution.

Cheers, Uwe










Re: [Bacula-users] Bacula backup speed

2015-12-17 Thread Uwe Schuerkamp
On Tue, Dec 15, 2015 at 05:16:42PM +, Alan Brown wrote:
> 
> MySQL works ok for small sites but doesn't scale well. PostgreSQL is a 
> heavy load on small installations but will keep running long after MySQL 
> has decided to use all your system ram and swap too. The breakeven point 
> is about 10-15 million entries. Beyond that point MySQL needs endless 
> tuning and PostgreSQL doesn't.


Hm, I cannot confirm your observation here. Our largest catalog has
600,000,000 file table entries on a 64GB server that also runs the
director with about half of that allocated to the innodb buffers, and
while it's not exactly a speed daemon when restoring stuff it's also
no slouch, backing up over 300 clients and about 10TB of data on
average every day. I doubt postgres would perform much better with a
similar sized catalog on the same hardware.

We're using MariaDB 5.5x if that has anything to do with it.

All the best,

Uwe











Re: [Bacula-users] Bacula backup speed

2015-12-16 Thread Uwe Schuerkamp
I fully agree with Bryn here, 8GB would be overkill for a 300MB
database. Just make sure mysql has enough memory to keep your largest
DB in RAM, so increasing the buffer pool if necessary always seems
like a good option.


> > We have 659,172 entries in the File table.

That is quite a small catalog, indeed. Given that you're using InnoDB
as the db engine, your bacula instance should be performing much better
than it currently is. It might be a good idea to check the db for any
corrupt tables or indices, then.
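One way to run such a check from the shell, assuming the catalog
database is named `bacula` (the common default; adjust the database
and user names to your setup):

```shell
# Read-only consistency check of all tables in the bacula catalog.
# Add --auto-repair only after you have a known-good dump.
mysqlcheck --check --databases bacula -u bacula -p
```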

All the best, Uwe



Re: [Bacula-users] Bacula backup speed

2015-12-15 Thread Uwe Schuerkamp
On Mon, Dec 14, 2015 at 09:12:06PM +, Lewis, Dave wrote:
> Hi,
> 
> Thanks. I ran it again with attribute spooling. That sped up the backup of 
> data to the disk pool - instead of 6 hours it took less than 2 - but writing 
> the file metadata afterwards took nearly 6 hours.
> 
> 12-Dec 18:24 jubjub-sd JobId 583: Job write elapsed time = 01:51:55, Transfer 
> rate = 703.0 K Bytes/second
> 12-Dec 18:24 jubjub-sd JobId 583: Sending spooled attrs to the Director.  
> Despooling 120,266,153 bytes ...
> 13-Dec 00:11 jubjub-dir JobId 583: Bacula jubjub-dir 5.2.6 (21Feb12):
>   Elapsed time:   7 hours 39 mins 13 secs
>   FD Files Written:   391,552
>   SD Files Written:   391,552
>   FD Bytes Written:   4,486,007,552 (4.486 GB)
>   SD Bytes Written:   4,720,742,979 (4.720 GB)
>   Rate:   162.8 KB/s
>   Software Compression:   None
>   Encryption: yes
>   Accurate:   no
> 
> So the transfer rate increased from about 200 KB/s to about 700 KB/s, but the 
> total elapsed time increased.
> 

Hi Dave,

how large is your catalog database? How many entries do you have in
your File table, for instance? Attribute despooling should be much
faster than what you're seeing even on SATA disks. 

I guess your MySQL setup could do with some optimization with regard
to buffer pool size (I hope you're using InnoDB as the backend db
engine) and lock / write strategies.

As your DB runs on the director machine, I'd assign at least 50% of
the available RAM if your catalog has a similar size.
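As a rough my.cnf sketch (values are illustrative, assuming a machine
dedicated to the director and its database; tune to your hardware):

```
# /etc/my.cnf fragment -- illustrative values only
[mysqld]
innodb_buffer_pool_size = 8G         # roughly 50% of RAM
innodb_flush_log_at_trx_commit = 2   # trades strict durability for faster inserts
```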

A quick google search came up with the following query to determine
your catalog db size: 

SELECT table_schema AS "DB Name",
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS "DB Size in MB"
FROM information_schema.tables
GROUP BY table_schema;

All the best, Uwe






Re: [Bacula-users] Bacula 7.2.0 hangs with "Device doing acquire"

2015-12-14 Thread Uwe Schuerkamp
On Thu, Dec 10, 2015 at 11:37:58PM +0100, Ana Emília M. Arruda wrote:
> Hello Uwe,
> 
> Do you have concurrent jobs configured for this one-drive TL? It seems that
> when you issue the mount command the drive is busy with another job/volume.
> Have you checked this?
> 

Hello Emilia,

we have configured concurrent jobs, but only one job was active at the
time and failed to load the required volume.

> If this is the case, you can try to run a release command before the
> mount/update slots scan commands.

I'll look into the release command, thanks!

All the best Uwe






Re: [Bacula-users] Bacula backup speed

2015-12-10 Thread Uwe Schuerkamp
On Thu, Dec 10, 2015 at 12:06:42AM +, Lewis, Dave wrote:
> Hi,
> 
> I'm configuring Bacula backups and sometimes it is very slow to back up to 
> disk or tape, around 1 MB/s and sometimes slower. I'm wondering why it is 
> sometimes so slow and if there is something I can do differently that will 
> speed up the backups. I want to do backups of about 1 TB (or more) of user 
> data, and 1 MB/s is far too slow. I also want to back up operating systems of 
> various Linux servers and imaging data (currently stored locally).
> 
> As a test, I ran a Bacula backup of several operating system directories of 
> the backup computer, and it took about 6 hours. Here are details:
> The directories were /bin, /boot, /etc, /lib, /lib64, /opt, /root, /sbin, 
> /srv, /usr
> Level = Full
> Disk pool
> Computed SHA1 signature
> >From the log file:
> 02-Dec 20:16 jubjub-sd JobId 547: Job write elapsed time = 06:03:29, Transfer 
> rate = 216.4 K Bytes/second
> Elapsed time:   6 hours 13 mins 54 secs
> FD Files Written:   391,549
> SD Files Written:   391,549
> FD Bytes Written:   4,486,000,544 (4.486 GB)
> SD Bytes Written:   4,720,733,845 (4.720 GB)
> Rate:   200.0 KB/s
> Software Compression:   None
> Encryption: yes
> Accurate:   no
> 

Hello Dave,

well, it depends. ;-) On a lot of things, actually. What DB backend
are you using? Is the db optimized for bacula usage? How fast are your
disks? Are you using attribute / job spooling? What's the hardware
spec of the director / sd machine? What's your connection / max
throughput to the clients?

All the best, Uwe

-- 










Re: [Bacula-users] Bacula 7.2.0 hangs with "Device doing acquire"

2015-12-08 Thread Uwe Schuerkamp
On Mon, Dec 07, 2015 at 09:50:30AM -0200, Heitor Faria wrote:
> 

> I think your TL may be looking for a tape in an erratic way. Maybe
> your Baculadrives configuration order does not match the physical
> drives order. Maybe your TL is just crazy and need a power cycle.  >
> Verify the TL webconsole for error messages and also why can't it feed
> the tape Bacula expects for backup.


We only have one drive in the library, so I guess it's not a problem
with drive ordering.

The error is very likely to occur after an "update slots scan" on the
device. I consider this an error because all our tapes are initialized
or pre-labelled before we attempt to use them in the autochanger (we
have three sets in rotation at the moment).

Also, as a bacula restart usually is enough to clear the error as
opposed to power-cycling the tape library, I guess something is amiss
within bacula here.

Uwe










Re: [Bacula-users] Bacula 7.2.0 hangs with "Device doing acquire"

2015-12-08 Thread Uwe Schuerkamp
On Tue, Dec 08, 2015 at 12:24:56PM -0200, Heitor Faria wrote:
> Uwe: why do you use an update slots scan instead of a regular: update slots?
> Don't you have bar codes on your tapes?
> 

Yep, that's the reason: we're not using bar codes.

All the best, Uwe











Re: [Bacula-users] Bacula 7.2.0 hangs with "Device doing acquire"

2015-12-08 Thread Uwe Schuerkamp
On Tue, Dec 08, 2015 at 12:36:57PM -0200, Heitor Faria wrote:

> 
> Uwe: if your tape library has a bar code header please use that for your own 
> benefit. You can generate your own labels using some online free web services.
> The "update slots scan" forces Bacula to insert each tape into your drive in 
> order to verify it's content, that's why you will have "Device  is doing 
> acquire." status for so long.
> Again, I truly believe this is not an error. Eventually Bacula will scan 
> every tapes from your slots and you may resume normal operation.
> 


Hello Heitor,

I'm well aware that the scan will load each tape into the drive;
that's why I usually update only the slots that were changed, using
the

"slots=3,5,8" option for instance.
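In bconsole that looks roughly like this (the storage name is a
placeholder, and the exact syntax may vary by Bacula version):

```
* update slots=3,5,8 scan storage=lto4
```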

Bacula definitely does not return to a working state once all the
slots have been scanned, we've had it sitting doing nothing for an
entire weekend blocking all the other backups (the update scan command
usually completes within a couple of minutes depending on the number
of slots being affected).

I don't really see the advantage in using barcodes (I don't even think
our library supports them ATM) as we've developed a decent workflow
for our offline backups over the years without them. 

I'll look into their usage further; in the meantime, it would be nice
if bacula wouldn't lock up for days doing the "acquire" dance ;)

All the best, Uwe













[Bacula-users] Bacula 7.2.0 hangs with "Device doing acquire"

2015-12-07 Thread Uwe Schuerkamp
Hi folks,

every once in a while our bacula director hangs with the following
error message:

* mount storage=lto4 slot=5
3001 Device ""lto4" (/dev/nst0)" is doing acquire.

* stat storage


Used Volume status:
Reserved volume: OFFLINE08_18 on tape device "lto4" (/dev/nst0)
Reader=0 writers=0 reserves=0 volinuse=1




Attr spooling: 1 active jobs, 35,507,927,395 bytes; 1464 total jobs, 
35,507,927,395 max bytes.

This can only be fixed by a complete restart of all bacula components,
which isn't always an option due to long-running backup jobs.

Is there anything else I can try to rectify this situation?

Bacula 7.2.0 compiled from Source on CentOS 6.x, LTO-library connected
to the director machine, MariaDB backend.



All the best & TIA,

Uwe






Re: [Bacula-users] Restoring Files backed up on Windows Client to FreeBSD Client, files all wrong size.

2015-12-02 Thread Uwe Schuerkamp
On Tue, Dec 01, 2015 at 12:29:53PM -0600, dweimer wrote:
> 
> I will do some more test restores to the client now that its up, to see 
> if its only when restoring to the freebsd client. I have verified 
> restores to the FreeBSD client of itself restore correctly. just curious 
> if someone else has seen this?
> 

Yep, we've seen the exact same issue on a few windows servers on one
bacula instance. At first I thought the tape was corrupt, but
restoring a Linux client from the same volume worked fine.

We discovered the issue by pure chance, client used on the windows box
is 5.2.13, but we've also seen the issue on a client running a
licensed Enterprise 7 version.

Cheers, Uwe











Re: [Bacula-users] Windows Client Version 7.2.x availability

2015-10-30 Thread Uwe Schuerkamp
On Wed, Oct 28, 2015 at 01:12:25PM -0200, Wanderlei Huttel wrote:
> Hi Uwe
> 
> I'm not sure, but I guess no.
> 

Thanks Wanderlei. It's not really urgent as 5.x clients appear to be
working fine for us, so I'll just wait for some news about new, free
win clients for now.

Uwe











[Bacula-users] Windows Client Version 7.2.x availability

2015-10-28 Thread Uwe Schuerkamp
Hi folks,

can somebody tell me if a 7.2.x windows client is already available
somewhere? We purchased some licenses for 5.x machines a while ago,
would those be transferable?

Thanks, Uwe












Re: [Bacula-users] "column "volabytes" does not exist"

2015-10-16 Thread Uwe Schuerkamp
On Thu, Oct 15, 2015 at 05:50:11PM -0400, Phil Stracchino wrote:
> On 10/15/15 14:02, Doug Sampson wrote:
> > I've revised the version to 14 and executed the script giving me this error 
> > message:
> > 
> > 
> > root@pisces:/usr/local/share/bacula# ./update_postgresql_tables
> >  

Just to be clear about this: You're using a postgres backend, and the
script you're executing tells you it'll update a MySQL database?

That seems a bit weird, but as I've never used a pg based backend I'm
not sure if the message from the script is the same for either
backend.

All the best, Uwe











Re: [Bacula-users] 7.2 mysql issue?

2015-10-14 Thread Uwe Schuerkamp
On Mon, Oct 12, 2015 at 07:33:46AM -0700, Stephen Thompson wrote:
> 
> update...
> 
> After adding more RAM, we are back to getting a about 3 queries a day
> that run longer than 15 minutes.  This was our norm before upgrading. 
> No job errors since the first couple days from this month (Oct).  Not 
> sure if the reduction in long running queries was actually from 
> additional RAM or not, since last week before adding RAM, the number of 
> long running queries per day had already greatly diminished since 
> beginning of month.
> 
> So, I guess, problem solved for now, though I'm not completely confident 
> about what actually happened or if I did anything to fix it.
> Oh, well.
> 
> Stephen

Hi Stephen,

you might also try giving MariaDB a shot; it has been performing
fine as a drop-in MySQL replacement for us for the last few years
with catalogs of similar size.

Cheers, Uwe











Re: [Bacula-users] is 7.2 ready for prime time?

2015-09-28 Thread Uwe Schuerkamp
On Fri, Sep 25, 2015 at 09:01:29AM -0700, Stephen Thompson wrote:
> 
> 
> 
> I run daily backups of my database and had finished my monthly full run 
> for September, so I was technically covered.  However I was not looking 
> forward to restoring a 900+Gb mysql database from a text dump which on 
> my system would take days, if not an entire week.  The last time I had 
> to restore database from backup it was 4 or so years ago and my database 
> was only 300-400Gb back then.
> 
> Stephen
> 
> 

One word: innobackupex. You'll be up and running in a few minutes
again, as opposed to a week when using a text file created by mysqldump
(which, much like MyISAM tables, should be taken round the back and put
out of its misery once and for all ;))
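For the curious, a minimal innobackupex round-trip, assuming Percona
XtraBackup 2.x and a default datadir (the timestamped directory name
is a placeholder; newer releases fold these steps into the
`xtrabackup` binary instead):

```shell
# 1) Hot physical backup of the running MySQL/MariaDB server
innobackupex /backups/

# 2) Replay the InnoDB log so the copied files are consistent
innobackupex --apply-log /backups/2015-09-28_12-00-00/

# 3) Restore: stop mysqld, empty the datadir, copy back, fix ownership
innobackupex --copy-back /backups/2015-09-28_12-00-00/
chown -R mysql:mysql /var/lib/mysql
```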

All the best,

Uwe










Re: [Bacula-users] is 7.2 ready for prime time?

2015-09-25 Thread Uwe Schuerkamp
On Fri, Sep 25, 2015 at 06:43:57AM -0700, Stephen Thompson wrote:
> 
> Thanks, I'll be upgrading soon.
> 
> What known bugs are in the update_bacula_tables scripts?
> 
> thanks,
> Stephen
> 

Hi Stephen,

not real "bugs", but rather some weird messages about the EOT tag not
being found (and other noise) when you run them; the scripts themselves
seem to have worked fine on my catalogs, though.

All the best,

Uwe


--
arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038










Re: [Bacula-users] is 7.2 ready for prime time?

2015-09-24 Thread Uwe Schuerkamp
On Thu, Sep 24, 2015 at 08:40:05AM -0700, Stephen Thompson wrote:
> 
> All,
> 
> I typically patch bacula pretty frequently, but I saw the somewhat 
> unusual notice on the latest release notes that warns it may not be 
> ready for use in production.  How stable is it?  I don't really have the 
> resources to test this out, but rather would have to go straight to 
> production with it.  I could always roll back, but that might entail the 
> recovery from dump of a 900GB database.  Opinions?
> 

I upgraded five bacula instances of varying size over the last four
weeks or so, starting with the smallest (all were on 7.0.5 compiled
from source on CentOS), no issues so far apart from the little bugs in
the update_bacula_tables script.

Cheers, Uwe

-- 
arvato Systems S4M GmbH | Sitz Köln | Amtsgericht Köln HRB 27038










Re: [Bacula-users] Volume labels + removing old volumes

2015-09-21 Thread Uwe Schuerkamp
Did you do an "update pool from resource" and/or "update all volumes
from pool" after changing your pool parameters?
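In bconsole these look roughly like the following (on some versions
they are entries in the interactive `update` menu rather than one-line
commands; the phrasing here just echoes the question above):

```
* update pool from resource
* update all volumes from pool
```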

All the best,

Uwe











