Re: [bareos-users] Re: Fileset

2019-04-07 Thread Spadajspadaj


On 06.04.2019 09:25, muravey.novosibi...@gmail.com wrote:

четверг, 30 ноября 2017 г., 11:27:43 UTC+7 пользователь oldt...@gmail.com 
написал:

On Friday, November 17, 2017 at 6:18:40 AM UTC-5, Nikitin Artem wrote:

Hello.

I'm executing a sequence of commands in the Fileset resource (File = "ls -l
 | awk '{print $NF}'" etc.). I need this to get the list of
files to back up.

The problem is, these commands are executed on the Director, not the Client.


How can I execute them remotely (on the file daemon's host)? Please help.

To run commands client-side, you can use "Client Run Before Job" or "Client Run After 
Job". You can also look at "Run Script", although I've never used it. You can find them on (or 
near) pages 70 and 80 of the manual, section 9.2.

You could then use this to run a script that creates a list of files and writes 
them to a file, which you might call files.txt. You can then put something like this 
in your FileSet:

File = "\\
Hello.
Yes, it's a good idea, but in my backup system it's not working. The "before job" and "after 
job" scripts run on the client normally and I get "Backup OK" after job execution, but the backup size 
is 0 bytes.
I started the FD with "-d 100", and analyzing the debug log after the backup shows that the FileSet 
is read before "client run before job" is executed.
Please help!

P.S. Sorry for my English.

It's way too complicated. The most important thing for the original 
poster is this excerpt from the manual:


"If the vertical bar (|) in front of my_partitions is preceded by a 
backslash as in ∖|, the program will be executed on the Client’s machine 
instead of on the Director’s machine."


Of course you need to escape the backslash, so you'll probably end up 
with something like this (an example from one of my configs):


FileSet {
    Name = "Local-offline-archives"
    Include {
        File = "\\| find /srv/archives -type f -not -path '*backup*' -ctime +365"
    }
}



Re: [bareos-users] 3999 Device "vchanger-1" not found or could not be opened.

2019-03-18 Thread Spadajspadaj
Thanks for the config, but if I'm not mistaken it won't let me do what I 
mainly wanted to achieve with vchanger - it won't let me plug external 
drives in and out. That's the whole point of using vchanger for me, so I 
can take one external disk and move it somewhere offline or even 
off-site. Your storage seems to depend on a directory permanently 
mounted on the SD.


Best regards,

Mariusz

On 18.03.2019 17:10, Bartek R wrote:

Hi,

I tried to start with vchanger a long time ago but I found it a bit too 
complex. Since then I have been running the following configuration:


Autochanger {
    Name = local-sd
    Changer Device = /dev/null
    Changer Command = ""
    Device = drive-0, drive-1, drive-2, drive-3
    }

Device {
    Name = drive-0
    Device Type = File
    Media Type = File
    Archive Device = /var/lib/bareos/storage
    Automatic Mount = yes
    Always Open = yes
    RemovableMedia = no
    RequiresMount = no
    Autochanger = yes
    Drive Index = 0
    Maximum Concurrent Jobs = 1
    RandomAccess = yes
    Label Media = yes
    }

Device {
    Name = drive-1
    Device Type = File
    Media Type = File
    Archive Device = /var/lib/bareos/storage
    Automatic Mount = yes
    Always Open = yes
    RemovableMedia = no
    RequiresMount = no
    Autochanger = yes
    Drive Index = 1
    Maximum Concurrent Jobs = 1
    RandomAccess = yes
    Label Media = yes
    }

Device {
    Name = drive-2
    Device Type = File
    Media Type = File
    Archive Device = /var/lib/bareos/storage
    Automatic Mount = yes
    Always Open = yes
    RemovableMedia = no
    RequiresMount = no
    Autochanger = yes
    Drive Index = 2
    Maximum Concurrent Jobs = 1
    RandomAccess = yes
    Label Media = yes
    }

Device {
    Name = drive-3
    Device Type = File
    Media Type = File
    Archive Device = /var/lib/bareos/storage
    Automatic Mount = yes
    Always Open = yes
    RemovableMedia = no
    RequiresMount = no
    Autochanger = yes
    Drive Index = 3
    Maximum Concurrent Jobs = 1
    RandomAccess = yes
    Label Media = yes
    }

Storage {
  Name = bareos-sd
  Maximum Concurrent Jobs = 4

  # Remove the comment from "Plugin Directory" to load plugins from the
  # specified directory.
  # If "Plugin Names" is defined, only the specified plugins will be loaded,
  # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
  #
  # Plugin Directory = "/usr/lib64/bareos/plugins"
  # Plugin Names = ""
}

Pool {
  Name = default
  Pool Type = Backup
  Recycle = no                        # No recycling; volumes are used once and then truncated
  AutoPrune = yes                     # Prune expired volumes
  Action On Purge = Truncate
  File Retention = 70 days
  Job Retention = 70 days
  Volume Retention = 70 days
#  Use Volume Once = yes
  Maximum Volume Jobs = 1
  Label Format = "default-${JobId}"
  Storage = local-sd
  }

I believe this setup is a bit tricky but it works perfectly for me. The 
trick is to have only one job per volume and to limit the number of jobs 
at the SD level to the total number of devices. I am not sure, but I 
think it was set that way in order to avoid having the same volume 
opened by multiple jobs simultaneously.


I hope this is helpful, but feel free to correct me if you find this 
configuration to be a total mess.


PS. It is possible that this setup requires a helper script in order to 
have expired volume files (and jobs) correctly removed.
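
A minimal sketch of such a helper, assuming bconsole is installed and configured for the calling user (the pool name matches the example above; check "help truncate" in bconsole for your version):

#!/bin/sh
# Hypothetical cleanup helper: truncate volume files that Bareos
# has already marked as Purged in the "default" pool.
echo "truncate volstatus=Purged pool=default yes" | bconsole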


Kind Regards,
Bartłomiej

On Mon, 18 Mar 2019 at 14:33, Go Away wrote:


Hello.

I'm trying to set up an environment with removable disks managed
by vchanger.
I managed to install vchanger and I created the media files, but I
cannot seem to make bareos see my virtual autochanger.
And I'm a bit stuck.
Whenever I try to do "update slots" I get the 3999 error.

Relevant parts of my config:

# cat /etc/vchanger/vchanger.conf
Storage Resource = vchanger-1
User = bareos
Group = bareos
Logfile = /var/log/vchanger/vchanger-1.log
Work Dir = /var/spool/vchanger/vchanger-1
Log Level = 7
Magazine = /srv/backupstor/252f8c87-02bd-4509-aa63-fa2fe8ee105d
bconsole config = /etc/bareos/bconsole.conf


# cat bareos-sd.d/autochanger/vchanger1
Autochanger {
    Name = vchanger-1
    Device = vchanger-1-0
#    Changer Command = "/usr/local/bin/vchanger -u bareos -g bareos %c %o %S %a %d"
    Changer Command = "/usr/local/bin/vchanger %c %o %S %a %d"
    Changer Device = /etc/vchanger/vchanger.conf
}

# cat bareos-sd.d/device/vchanger1.conf

Device {
    Name = vchanger-1-0
    DriveIndex = 0
    Autochanger = yes
    Device Type = File
    Media Type = Offsite-File
    Label Media = no
    Random Access = yes
    Removable Media = yes
    Automatic Mount = yes
    Archive Device = /var/spool/vchanger/vchanger-1/0
}

# cat bareos-dir.d/storage/vchanger-1.conf

Re: [bareos-users] failed to handle the OVA files using sparse option

2019-07-12 Thread Spadajspadaj
Are you sure the OVA file is a sparse one? AFAIR, with thin provisioning 
the file size should sum up to the already-provisioned chunks of 
data.


In other words, if inside the virtual machine you use 4G, then even 
though the maximum disk size is 30G, you'd have a 4G file. But 
it wouldn't be a sparse file.


If it were a sparse file, the filesystem would report a file size of 30G 
but with only 4G of actual contents.



I hope I'm not making this overly confusing :-)


On 12.07.2019 09:56, levindecaro wrote:

Hi,

I'm using bareos 18.2 to back up a bunch of ova files exported from 
RHEV; the ova file size is the actual used size of the image after 
export (thin image). However, bareos still allocates the real size of the 
image during backup or restore. The sparse=yes option seems unable to 
handle ova files.


Does anyone have experience working around that?

Thank you!





Re: [bareos-users] failed to handle the OVA files using sparse option

2019-07-15 Thread Spadajspadaj
The question that comes to mind is "how sparse is the sparse file?". I 
know it sounds a bit strange, but the bareos sparsity logic, as I see it in 
the docs, is that it checks whether a 32k block is made entirely of 
zeros. So if your sparse file doesn't have contiguous 32k blocks 
(properly aligned, I suppose) it won't get treated as sparse.
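
A quick way to see the distinction at the filesystem level (a sketch using standard GNU coreutils):

# Create a 30G sparse file, then compare apparent vs. allocated size
truncate -s 30G sparse.img
ls -lh sparse.img    # apparent size: 30G
du -h sparse.img     # allocated size: close to 0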


On 13.07.2019 08:19, levindecaro wrote:

It is a sparse file, I believe, as confirmed by:

[root@server export-domain]# du -hs VM1.ova
20G VM1.ova <--- actual size on disk

[root@server export-domain]# ls -l VM1.ova
-rw---. 1 root root 179142214656 Jul 13 02:56 VM1.ova  <--- real file size

After backup on bareos, it shows 167GB backed up.



After restore

[root@server export-domain]# ls -l ../restored/VM1.ova

-rw---. 1 root root 177794008064 Jul 12 15:07 ../restored/VM1.ova

[root@mgnt21 export-domain]# du ../restored/VM1.ova

166G    ../restored/VM1.ova <--- reverted to fully allocated file.





On Friday, July 12, 2019 at 10:40:31 PM UTC+8, Spadajspadaj wrote:

Are you sure the OVA file is a sparse one? AFAIR, with thin provisioning
the file size should sum up to the already-provisioned chunks of
data.

In other words, if inside the virtual machine you use 4G, then even
though the maximum disk size is 30G, you'd have a 4G file. But
it wouldn't be a sparse file.

If it were a sparse file, the filesystem would report a file size of 30G
but with only 4G of actual contents.

I hope I'm not making this overly confusing :-)


On 12.07.2019 09:56, levindecaro wrote:

Hi,

I'm using bareos 18.2 to back up a bunch of ova files exported from
RHEV; the ova file size is the actual used size of the image after
export (thin image). However, bareos still allocates the real size of the
image during backup or restore. The sparse=yes option seems unable to
handle ova files.

Does anyone have experience working around that?

Thank you!





Re: [bareos-users] Re: Bareos active client network setup not working

2019-09-13 Thread Spadajspadaj
Firstly, let me say that - from a security point of view - it's usually 
best to let the connection come from the director to the clients 
(you usually connect from a safer zone to a less safe one).


Secondly - 
https://docs.bareos.org/TasksAndConcepts/NetworkSetup.html#section-clientinitiatedconnection


"When both connection directions are allowed, the Bareos Director

1. checks, if there is a waiting connection from this client.
2. tries to connect to the client (using the usual timeouts).
3. waits for a client connection to appear (using the same timeout as
   when trying to connect to a client)."

So I'd first run debug on the client (run the client with an appropriate 
-d level, run tcpdump/wireshark) to see whether the client tries to 
connect to the director at all. If it does, it's up to you to find out at 
the network level why it fails.
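
For example, something like this on the client should show whether it attempts the connection at all (assuming the default Director port 9101; adjust the interface name to your system):

tcpdump -ni eth0 'tcp and dst port 9101'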


I'm also not sure how SELinux copes with client-initiated connections 
(in case you use SELinux of course).



Best regards,

MK

On 13.09.2019 11:45, Jörg Steffens wrote:

The first thing you should check is if the client is connected to the
Director.

For this, use the bconsole.
In there use the command
"status dir"
It shows you the list of clients that are connected to the Director.
Header is:
Client Initiated Connections (waiting for jobs):

If your client does not show up there, it is not connected to the
Director and will therefore fail.

regards,
Jörg

On 13.09.19 at 10:35, John Saruni wrote:

Hi Listers,

I am running Bareos (Director and FD) version 18.2.5. I have clients
behind a NAT gateway, and it is not feasible to configure 1:1 NAT for all
the clients. A little research pointed me to the client-initiated network
connection model. My config files for this model are:

1. Director's client resource

[root@bareos ~]# cat /etc/bareos/bareos-dir.d/client/activeclient.conf
Client {
   Name = activeclient
   Address = ww.xx.yy.zz
   Password = xxx
   Connection From Director To Client = no
   Connection From Client To Director = yes
   Heartbeat Interval = 60
}
[root@bareos ~]#
  


2. FD's director resource

[root@backup ~]# cat /etc/bareos/bareos-fd.d/director/bareos-dir.conf
Director {
   Name = bareos-dir
   Address = zz.yy.xx.ww
   Password = "[md5]xx"
   Connection From Client To Director = yes
}
[root@backup ~]#

All the other director configs (schedule, fileset, jobdef, job, etc.) are
as per the default model (where the Bareos Director connects to the
clients).
The backup job fails with the following errors:
Fatal error: Failed to connect to client "activeclient".
Fatal error: No Job status returned from FD.

This means the director is still initiating requests.
I have confirmed that the FD is running and the respective Bareos ports
are allowed on the firewall.
Has anyone successfully implemented the active client model? Please assist.

Thanks in advance.







Re: [bareos-users] Changing default ports

2019-08-04 Thread Spadajspadaj

On 04.08.2019 10:18, Roman Starun wrote:

But as soon as i change SDport to 8103, SD does not start.
Bareos ver 18.2.5, Centos 7.



Since you're using CentOS, there is a big chance that you have SELinux 
enabled, and SELinux is preventing the bind to a non-labeled port. You have 
to label port 8103 with the bacula_port_t type.
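
Something along these lines should do it (semanage comes with the policycoreutils-python package on CentOS 7):

# Check which ports carry the bacula_port_t label, then add 8103
semanage port -l | grep bacula_port_t
semanage port -a -t bacula_port_t -p tcp 8103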




Re: [bareos-users] failed to handle the OVA files using sparse option

2019-07-16 Thread Spadajspadaj

On 16.07.2019 08:30, Andreas Rogge wrote:

"By turning on the *sparse* option, Bareos will specifically look for
empty space in the file, and any empty space will not be written to the
Volume, nor will it be restored."
(https://docs.bareos.org/Configuration/Director.html#fileset-resource 
)Sorry. My bad.

The docs is correct. Bareos will puch a hole during restore if the
sparse option was set in the backup fileset.
The misleading behaviour was that this is done whether or not the
original file was actually sparse or just contained regions of 32K
contigous zeros.


You're getting me confused now :-)

First you wrote that the fd doesn't write the file as sparse when restoring. 
Now you say that the fd "punches a hole", which - IMHO - suggests creating a 
sparse file.

Maybe it's a matter of wording, but in this context I'd interpret "punch a 
hole" as creating a "holed" file, which strongly suggests a sparse file.


I understand of course that the sparseness logic just looks for contiguous 
regions of zeroes (do they need to be aligned to a 32k boundary, or do they 
just need to be contiguous 32k runs of zeros, BTW?).


But what I understood from reading the sparse option description 
(and I believe the original poster inferred the same) was that if the 
sparse option is set, the fd looks for blocks of zeroes (regardless of 
whether the file is indeed a sparse file on the underlying filesystem or 
just a plain, fully-allocated file filled with zeros), writes the file 
as sparse to the sd, and then on restore creates a "holed" file without 
those zeroed blocks. Which, for me, meant sparse.


It's a bit confusing, since indeed the word "sparse" can be understood 
differently on different levels, so maybe it's worth adding a word or two 
to the docs saying that the sparseness of the backup refers only to 
the method of storing the backup on the backup media, not to the sparseness 
of the file itself in terms of the underlying filesystem?






Re: [bareos-users] failed to handle the OVA files using sparse option

2019-07-16 Thread Spadajspadaj


On 15.07.2019 09:05, Andreas Rogge wrote:

[root@server export-domain]# ls -l ../restored/VM1.ova

-rw---. 1 root root 177794008064 Jul 12 15:07 ../restored/VM1.ova

[root@mgnt21 export-domain]# du ../restored/VM1.ova

166G    ../restored/VM1.ova <--- reverted to fully allocated file.

That's also correct. Bareos does not punch holes in files during restore.

The problem is the logic for sparse files: Bareos does detect contiguous
streams of zeroes, but it doesn't check whether the region was sparse or
not (I'm not aware of a portable way to do so right now).
So we don't know which regions of a backed-up file are sparse and
therefore cannot punch the correct holes into the file during restore.


The docs are misleading then.

"By turning on the *sparse* option, Bareos will specifically look for 
empty space in the file, and any empty space will not be written to the 
Volume, nor will it be restored." 
(https://docs.bareos.org/Configuration/Director.html#fileset-resource )




Re: [bareos-users] Re: Waiting for a client

2019-11-12 Thread Spadajspadaj

On 12.11.2019 09:11, Jörg Steffens wrote:

On 12.11.19 at 07:59, Spadajspadaj wrote:

Hi there.

I added a laptop client to my bareos setup and everything runs mostly
fine, except that if the job is scheduled for - let's say -
9pm, it tries to run, tries to connect to the fd, and since the laptop is
down, the job fails.

I was wondering how I can avoid those failed jobs and possibly have
bareos server wait for a client to appear.

I pondered two options:

1) Have a completely manual "schedule", so the job never runs on its own.
I'd need to manually run the job every time I have the laptop available in
my network. But I'm not sure I can specify a schedule which never runs
on its own. Is it even possible? Or

2) Have bareos server wait for the fd to appear on the network. Now
that's trickier. I thought about writing a script and running it as a
pre-job script effectively delaying start of the job itself until I can
reach the fd but:

a) I'm not sure if the pre-job script is run before the initial
connection to fd or after. (haven't tried it yet; just thought about it
today driving to work :-)) and

b) If I understand correctly, such a job with a pre-job script would
effectively block other jobs from running at the same time, right? So if
I have a schedule for all jobs to start at 9pm, then other jobs (for now
let's not complicate matters with priorities) would wait for my script
to time out within the Max Run Time limit?

Any other ideas?

What I would ideally want to achieve is to have bareos do a backup of my
laptop when it gets available on the network (when I come back home and
power on the laptop) but of course not more than once a day.

Of course I can (and probably will, if there are no other options) write an
external script which probes the network for the laptop and, if it is found,
runs the appropriate job, but I was wondering if I can do something like
that "from within" bareos.

You can solve this with a combination of
https://docs.bareos.org/TasksAndConcepts/NetworkSetup.html#client-initiated-connection
and the external script
https://github.com/bareos/bareos-contrib/blob/master/misc/triggerjob/triggerjob.py
run by cron.

Triggerjob will look for all clients connected to the Director.
If it finds a job named backup-{clientname} that did not successfully
run during the specified time period, it will trigger this job.

The upcoming Bareos 19.2.4 will also add an additional feature to the
Bareos core.
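
(For illustration, a cron entry for the script could look roughly like the line below, in /etc/cron.d style with a user field; the install path is hypothetical, and the script's own options control the time period:)

# Poll every 15 minutes for waiting clients and trigger their jobs
*/15 * * * * root /usr/local/bin/triggerjob.py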


Thanks for the suggestions. I'll look into it :-)

MK




Re: [bareos-users] Backing up a computer over the network

2019-11-21 Thread Spadajspadaj

This is not a good approach.

Firstly, if you mount a remote directory as a network drive (a letter 
like Z:), it is mounted for a particular user. The bareos client runs in a 
different user's context, so it doesn't see that mounted path.

Secondly, even if you managed to force bareos to see the network 
directory (by providing a UNC path and making sure bareos can access 
said directory), you don't have VSS support on the remote computer, so 
backing up open files may result in an inconsistent backup state and 
unrecoverable backups (i.e. damaged database files).

The proper approach would be to deploy a filedaemon on the host which 
you want to back up and back up the "local" drives from said client.


MK

On 21.11.2019 10:03, peter wrote:

Hello,

I am an apprentice with this software. I would like to know if there 
is any way to make a backup of a network computer (WorkGroup). Backups 
work for me if I back up drive C or any other path of the client 
computer, but with a network computer I am not successful.


Bildschirmfoto 2019-11-21 um 09.52.35.png 




With this configuration I cannot make a backup of the networked computer

Bildschirmfoto 2019-11-21 um 09.59.25.png



I am new to this; I would like to know if this is normal or if there is 
any way to do this type of backup.










Re: [bareos-users] Howto get used tapes in bareos

2019-12-03 Thread Spadajspadaj



On 03.12.2019 11:45, Adam Podstawka wrote:

Hi,

I have a little problem: we built a new backup system and wanted to use
our old tapes in it. But the tapes are all already labeled by the
old system.
A "label barcodes" doesn't add them to the pool, as they are already
labeled.
I can't get "echo 'add pool=Scratch volumename=01L6' |
bconsole" to work, but I don't want to mount all the tapes and delete the
labels/empty them to be able to use them in the new system.

Any hints? As this is like 300 tapes, doing "add" in bconsole by
hand will take too long.


Won't update slots (or update slots scan) help?
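
I.e., in bconsole, something like the following; the storage name is a placeholder for your autochanger, and "scan" reads the labels from the tapes themselves, which is slower but works without barcodes:

*update slots storage=Autochanger-1
*update slots scan storage=Autochanger-1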

Regards,

MK



[bareos-users] Windows backups fail

2019-10-16 Thread Spadajspadaj

Hi.

I have a setup where I back up a few linux machines and one Windows 
workstation.

All linux clients work fine; the Windows machine sometimes works OK, 
but sometimes the jobs fail. A typical failed job run:


15-Oct 21:00 backup1-dir JobId 1589: Start Backup JobId 1589, 
Job=Windows_system_backup.2019-10-15_21.00.00_26
15-Oct 21:00 backup1-dir JobId 1589: Connected Storage daemon at backup1:9103, 
encryption: PSK-AES256-CBC-SHA
15-Oct 21:00 backup1-dir JobId 1589: Using Device "vchanger-1-0" to write.
15-Oct 21:00 backup1-dir JobId 1589: Connected Client: dziura-fd at 
172.16.0.4:9102, encryption: None
15-Oct 21:00 backup1-dir JobId 1589:  Handshake: Cleartext
15-Oct 21:00 backup1-dir JobId 1589:  Encryption: None
15-Oct 21:00 dziura-fd JobId 1589: Created 27 wildcard excludes from 
FilesNotToBackup Registry key
15-Oct 21:00 dziura-fd JobId 1589: shell command: run ClientBeforeJob "wbadmin start 
backup -allCritical -backupTarget:d: -quiet"
15-Oct 21:00 dziura-fd JobId 1589: ClientBeforeJob: wbadmin 1.0 - Backup 
command-line tool
15-Oct 21:00 dziura-fd JobId 1589: ClientBeforeJob: (C) Copyright 2013 
Microsoft Corporation. All rights reserved.
15-Oct 21:00 dziura-fd JobId 1589: ClientBeforeJob:
15-Oct 21:00 dziura-fd JobId 1589: ClientBeforeJob: Retrieving volume 
information...
15-Oct 21:00 dziura-fd JobId 1589: ClientBeforeJob: This will back up (EFI 
System Partition),Recovery (450.00 MB),(C:) to d:.
15-Oct 21:00 dziura-fd JobId 1589: ClientBeforeJob: The backup operation to D: 
is starting.
15-Oct 21:00 dziura-fd JobId 1589: ClientBeforeJob: Creating a shadow copy of 
the volumes specified for backup...
15-Oct 21:00 dziura-fd JobId 1589: ClientBeforeJob: Creating a shadow copy of 
the volumes specified for backup...
15-Oct 21:00 dziura-fd JobId 1589: ClientBeforeJob: Creating a shadow copy of 
the volumes specified for backup...
15-Oct 21:00 dziura-fd JobId 1589: ClientBeforeJob: Creating a shadow copy of 
the volumes specified for backup...
15-Oct 21:00 dziura-fd JobId 1589: ClientBeforeJob: Creating a backup of volume 
(EFI System Partition) (100.00 MB), copied (0%).
15-Oct 21:01 dziura-fd JobId 1589: ClientBeforeJob: Creating a backup of volume 
(EFI System Partition) (100.00 MB), copied (0%).
15-Oct 21:01 dziura-fd JobId 1589: ClientBeforeJob: Creating a backup of volume 
(EFI System Partition) (100.00 MB), copied (100%).
15-Oct 21:01 dziura-fd JobId 1589: ClientBeforeJob: The backup of volume (EFI 
System Partition) (100.00 MB) completed successfully.
15-Oct 21:01 dziura-fd JobId 1589: ClientBeforeJob: Creating a backup of volume 
Recovery (450.00 MB), copied (99%).
15-Oct 21:01 dziura-fd JobId 1589: ClientBeforeJob: The backup of volume 
Recovery (450.00 MB) completed successfully.
15-Oct 21:01 dziura-fd JobId 1589: ClientBeforeJob: Creating a backup of volume 
(C:), copied (0%).
[cut for clarity]
15-Oct 21:51 dziura-fd JobId 1589: ClientBeforeJob: Creating a backup of volume 
(C:), copied (100%).
15-Oct 21:51 dziura-fd JobId 1589: ClientBeforeJob: The backup of volume (C:) 
completed successfully.
15-Oct 21:51 dziura-fd JobId 1589: ClientBeforeJob: Summary of the backup 
operation:
15-Oct 21:51 dziura-fd JobId 1589: ClientBeforeJob: --
15-Oct 21:51 dziura-fd JobId 1589: ClientBeforeJob:
15-Oct 21:51 dziura-fd JobId 1589: ClientBeforeJob: The backup operation 
successfully completed.
15-Oct 21:51 dziura-fd JobId 1589: ClientBeforeJob: The backup of volume (EFI 
System Partition) (100.00 MB) completed successfully.
15-Oct 21:51 dziura-fd JobId 1589: ClientBeforeJob: The backup of volume 
Recovery (450.00 MB) completed successfully.
15-Oct 21:51 dziura-fd JobId 1589: ClientBeforeJob: The backup of volume (C:) 
completed successfully.
15-Oct 21:51 dziura-fd JobId 1589: ClientBeforeJob: Log of files successfully 
backed up:
15-Oct 21:51 dziura-fd JobId 1589: ClientBeforeJob: 
C:\WINDOWS\Logs\WindowsBackup\Backup-15-10-2019_19-00-16.log
15-Oct 21:51 dziura-fd JobId 1589: ClientBeforeJob:
15-Oct 21:51 dziura-fd JobId 1589: Warning: XATTR support requested in fileset 
but not available on this platform. Disabling ...
15-Oct 21:00 bareos-sd JobId 1589: Connected File Daemon at 172.16.0.4:9102, 
encryption: None
15-Oct 21:51 bareos-sd JobId 1589: Volume "vchanger-1_1_0057" previously 
written, moving to end of data.
15-Oct 21:51 bareos-sd JobId 1589: Ready to append to end of Volume 
"vchanger-1_1_0057" size=24826826838
15-Oct 21:51 dziura-fd JobId 1589: Generate VSS snapshots. Driver="Win64 VSS", 
Drive(s)="D"
15-Oct 21:51 dziura-fd JobId 1589: VolumeMountpoints are not processed as onefs 
= yes.
15-Oct 23:52 backup1-dir JobId 1589: Fatal error: Network error with FD during 
Backup: ERR=Connection reset by peer
15-Oct 23:52 bareos-sd JobId 1589: Error: lib/bsock_tcp.cc:627 Read error from 
File Daemon:172.16.0.4:9102: ERR=Interrupted system call
15-Oct 23:52 backup1-dir JobId 1589: Fatal error: Director's comm line to SD 
dropped.
15-Oct 23:52 backup1-dir JobId 1589: Fatal error: 

Re: [bareos-users] Windows backups fail

2019-10-23 Thread Spadajspadaj

Firstly, sorry for replying personally to you, and not to the list, earlier.

I hit "Reply" instead of "Reply to the list".

On 16.10.2019 09:11, Spadajspadaj wrote:

On 16.10.2019 09:01, Andreas Rogge wrote:

On 16.10.19 at 08:17, Spadajspadaj wrote:
15-Oct 23:52 backup1-dir JobId 1589: Fatal error: Network error with 
FD during Backup: ERR=Connection reset by peer
15-Oct 23:52 bareos-sd JobId 1589: Error: lib/bsock_tcp.cc:627 Read 
error from File Daemon:172.16.0.4:9102: ERR=Interrupted system call

Do you have any kind of firewall between director and fd?
Looks like something kills the connection from director to fd. Maybe
configuring heartbeat would help.

See:
https://docs.bareos.org/Configuration/Director.html#config-Dir_Director_HeartbeatInterval 




Nope. Both hosts are in the same network segment. The server is 
running on a CentOS box with the firewall disabled. The client is on 
Windows 10 with the system firewall (with the bareos rules added by the 
installer in place).


So there should not be any non-standard solution interfering with the 
network. But I'll try the heartbeat. Thanks for the suggestion.




Anyway, setting the heartbeat didn't help. Still getting connection errors.

Log from tonight follows.

I was wondering whether it could be caused by the fact that the bareos 
server is connected via wi-fi to the rest of my network (galvanic 
isolation :-)), and maybe I'm hitting some wi-fi renegotiation periods, 
but then it would affect all my backup jobs, not just the ones for the windows box.


22-Oct 21:00 backup1-dir JobId 1643: Start Backup JobId 1643, 
Job=Windows_system_backup.2019-10-22_21.00.00_57
22-Oct 21:00 backup1-dir JobId 1643: Connected Storage daemon at backup1:9103, 
encryption: PSK-AES256-CBC-SHA
22-Oct 21:00 backup1-dir JobId 1643: Using Device "vchanger-1-0" to write.
22-Oct 21:00 backup1-dir JobId 1643: Connected Client: dziura-fd at 
172.16.0.4:9102, encryption: None
22-Oct 21:00 backup1-dir JobId 1643:  Handshake: Cleartext
22-Oct 21:00 backup1-dir JobId 1643:  Encryption: None
22-Oct 21:00 dziura-fd JobId 1643: Created 27 wildcard excludes from 
FilesNotToBackup Registry key
22-Oct 21:00 dziura-fd JobId 1643: shell command: run ClientBeforeJob "wbadmin start 
backup -allCritical -backupTarget:d: -quiet"
22-Oct 21:00 dziura-fd JobId 1643: ClientBeforeJob: wbadmin 1.0 - Backup 
command-line tool
22-Oct 21:00 dziura-fd JobId 1643: ClientBeforeJob: (C) Copyright 2013 
Microsoft Corporation. All rights reserved.
22-Oct 21:00 dziura-fd JobId 1643: ClientBeforeJob:
22-Oct 21:00 dziura-fd JobId 1643: ClientBeforeJob: Retrieving volume 
information...
22-Oct 21:00 dziura-fd JobId 1643: ClientBeforeJob: This will back up (EFI 
System Partition),Recovery (450.00 MB),(C:) to d:.
22-Oct 21:00 dziura-fd JobId 1643: ClientBeforeJob: The backup operation to D: 
is starting.
22-Oct 21:00 dziura-fd JobId 1643: ClientBeforeJob: Creating a shadow copy of 
the volumes specified for backup...
22-Oct 21:00 dziura-fd JobId 1643: ClientBeforeJob: Creating a shadow copy of 
the volumes specified for backup...
22-Oct 21:00 dziura-fd JobId 1643: ClientBeforeJob: Creating a shadow copy of 
the volumes specified for backup...
22-Oct 21:00 dziura-fd JobId 1643: ClientBeforeJob: Creating a shadow copy of 
the volumes specified for backup...
22-Oct 21:00 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
(EFI System Partition) (100.00 MB), copied (0%).
22-Oct 21:01 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
(EFI System Partition) (100.00 MB), copied (99%).
22-Oct 21:01 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
(EFI System Partition) (100.00 MB), copied (100%).
22-Oct 21:01 dziura-fd JobId 1643: ClientBeforeJob: The backup of volume (EFI 
System Partition) (100.00 MB) completed successfully.
22-Oct 21:01 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
Recovery (450.00 MB), copied (99%).
22-Oct 21:01 dziura-fd JobId 1643: ClientBeforeJob: The backup of volume 
Recovery (450.00 MB) completed successfully.
22-Oct 21:01 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
(C:), copied (0%).
22-Oct 21:01 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
(C:), copied (1%).
22-Oct 21:01 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
(C:), copied (1%).
22-Oct 21:02 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
(C:), copied (1%).
22-Oct 21:02 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
(C:), copied (1%).
22-Oct 21:02 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
(C:), copied (2%).
22-Oct 21:02 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
(C:), copied (3%).
22-Oct 21:02 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
(C:), copied (4%).
22-Oct 21:02 dziura-fd JobId 1643: ClientBeforeJob: Creating a backup of volume 
(C:), copied (5%).
22-Oct 21:

Re: [bareos-users] Segmentation fault (core dumped)

2019-11-27 Thread Spadajspadaj
On that note - is there any "blessed" way to migrate an existing 
installation from MySQL to Postgres? I can easily google some 
not-very-official recipes for bacula, but is there any advice for bareos?


(and any more reasonable way to migrate than "export everything to csv 
and pull that csv into Postgres"?)


Best regards,

MK

On 27.11.2019 12:05, Andreas Rogge wrote:

Hi,

first of all: please do not use MySQL. The backend is supported, but for
new installations PostgreSQL is strongly preferred, as it provides much
better performance and is more thoroughly tested.




Re: [bareos-users] Re: bareos-fd client limit access to files

2019-11-23 Thread Spadajspadaj

I'd add a thing or two to Jörg's answer.

Firstly, if you don't trust the backup provider, the whole backup setup 
is highly questionable. Remember that even though you can encrypt the 
file contents, you keep the filenames in clear text in the database, so 
there is at least a vector for enumerating the files on your system, which 
could potentially lead to abuse.


Secondly, you can make a formal agreement with the backup provider (which 
is outside the technical scope of bareos itself) that limits the backup job 
to specific files only. And to be able to verify whether the backup 
provider keeps its end of the deal, you can configure logging on the 
filedaemon so you have some kind of accounting.


Thirdly, running bareos-fd as a non-root user can have its drawbacks in 
terms of file access. As an alternative you could try using SELinux and 
creating a specific policy which allows backups of only selected files, but 
it will probably be complicated and error-prone.


MK

On 23.11.2019 17:57, Spiros Papageorgiou wrote:

Thanks for the clear answer!

In any case, it would be a nice feature to be able to control which 
files bareos-fd is allowed to back up.


Sp

On Saturday, 23 November 2019 18:23:34 UTC+2, Jörg Steffens wrote:

On 23.11.19 at 16:37, Spiros Papageorgiou wrote:
> Hi all,
>
> I have a linux machine that produces some data that I want to
backup. I
> want to use a centralized backup service (based on bareos) that
I have
> access to. So, they told me to install bareos-fd and tell them
which
> files, I want them to backup.
>
> My problem is that I would like to limit the files that
bareos-fd has
> access to, because the centralized backup service has potentialy
the
> capability of backing up all the files of my linux , which is
something
> i don't want.
>
> So, Can i limit the access of bareos-fd to a specific set of
files on my
> linux server?

Typically, this is solved in another way. If you use
https://docs.bareos.org/master/TasksAndConcepts/DataEncryption.html,
the Bareos Director can still retrieve all files, but all the backup data
will be encrypted before it is transferred to the server and only your
client can decrypt it. (The content of the files is encrypted;
meta-data like filenames and timestamps is still readable.)

Alternately, the bareos-fd normally runs as root to get access to all
files. You can run it as another user and therefore the bareos-fd can
only access the files accessible by that user.

In any case, you should also disable or at least limit run scripts, as
otherwise the admin can retrieve data with these scripts. Plugins
should also be disabled or restricted.
So take a look at
https://docs.bareos.org/master/Configuration/FileDaemon.html


  * Allowed Job Command
  * Allowed Script Dir
  * Plugin Directory
  * Plugin Names

Regards,
Jörg

-- 
 Jörg Steffens joerg@bareos.com 

 Bareos GmbH & Co. KG            Phone: +49 221 630693-91
http://www.bareos.com         Fax:   +49 221 630693-10

 Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646
 Komplementär: Bareos Verwaltungs-GmbH
 Geschäftsführer:
 S. Dühr, M. Außendorf, Jörg Steffens, P. Storz





Re: [bareos-users] show files of a job?

2019-11-04 Thread Spadajspadaj

Hi Sven.

I'd go for joining info from the File and Path tables in the bareos 
database, selecting by File.JobId. For the size you'd need to decode the 
LStat field of the File table (I'm pretty sure I've seen some decoders 
somewhere on the Internet).
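
A sketch of such a query against the catalog (PostgreSQL syntax; assumes a recent schema where the file name lives directly in the File table - older versions join a separate Filename table instead; the JobId is a placeholder):

-- List all files saved by job 1234
SELECT p.Path || f.Name AS filename, f.LStat
FROM File f
JOIN Path p ON p.PathId = f.PathId
WHERE f.JobId = 1234;

For a quick look without SQL, bconsole's "list files jobid=1234" prints the file names (though not the sizes).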



Best regards

MK

On 02.11.2019 08:47, Sven Gehr wrote:

hello everyone,

I have the problem that the daily backup (INC) of a client uses a 
lot of backup storage.

Now I want to see which files have been backed up within a job, to 
understand the behavior.


How can I do this?

best regards
sven




[bareos-users] Waiting for a client

2019-11-11 Thread Spadajspadaj

Hi there.

I added a laptop client to my bareos setup and everything runs mostly 
fine, except that if the job is scheduled for - let's say - 
9pm, it tries to run, tries to connect to the fd, and since the laptop is 
down, the job fails.


I was wondering how I can avoid those failed jobs and possibly have 
bareos server wait for a client to appear.


I pondered two options:

1) Have a completely manual "schedule", so the job never runs on its own. 
I'd need to manually run the job every time I have the laptop available in 
my network. But I'm not sure I can specify a schedule which never runs 
on its own. Is it even possible? Or


2) Have bareos server wait for the fd to appear on the network. Now 
that's trickier. I thought about writing a script and running it as a 
pre-job script effectively delaying start of the job itself until I can 
reach the fd but:


a) I'm not sure if the pre-job script is run before the initial 
connection to fd or after. (haven't tried it yet; just thought about it 
today driving to work :-)) and


b) If I understand correctly, such a job with a pre-job script would 
effectively block other jobs from running at the same time, right? So if 
I have a schedule for all jobs to start at 9pm, then other jobs (for now 
let's not complicate matters with priorities) would wait for my script 
to time out within the Max Run Time limit?


Any other ideas?

What I would ideally want to achieve is to have bareos do a backup of my 
laptop when it gets available on the network (when I come back home and 
power on the laptop) but of course not more than once a day.


Of course I can (and probably will, if there are no other options) write an 
external script which probes the network for the laptop and, if it is found, 
runs the appropriate job, but I was wondering if I can do something like 
that "from within" bareos.



Regards,

MK




Re: [bareos-users] Compression and Encryption on bareos Clients

2019-09-25 Thread Spadajspadaj
Firstly, it's perfectly normal that the hardware compression rate drops when 
dealing with encrypted data. The compression ratio depends heavily on the 
entropy of the input data, and good encryption ensures a uniform distribution 
of the encrypted data, so there's no point in compressing data _after_ 
encryption.


Secondly, in Bareos encryption is a property of the filedaemon, so it's 
applied to all the data from a configured client. Compression, however, is 
defined at the FileSet resource level, so you can mix compressed and 
uncompressed files within one job. So it only makes sense for encryption 
to happen after compression, as the data is pulled from the client.
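
For reference, a minimal example of enabling client-side compression per FileSet (GZIP here; the set name and path are placeholders):

FileSet {
    Name = "compressed-set"
    Include {
        Options {
            Signature = MD5
            Compression = GZIP
        }
        File = /srv/data
    }
}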


You can of course look into the sources (Bareos is open-source, after all 
;-)) to confirm it, but I think this reasoning is sound enough.



Best regards,

MK

On 23.09.2019 17:14, Steffen Knauf wrote:

Hello,

we enabled encryption on the clients. Hardware compression is enabled,
too. With encryption activated, the compression rate has fallen to nearly
2-5% (from 50%). I plan to enable compression on the bareos clients and
disable hardware compression. Can someone confirm that encryption is
done after compression on the bareos clients?

Thanks & greets

Steffen





Re: [bareos-users] restore is not restoring files

2020-02-12 Thread Spadajspadaj
The Apache log shows HTTP transactions, not the bareos logs. So the "client" 
in this context is the computer running your web browser. Hence the IP has 
nothing to do with the computer you want to restore.


Regardless of the underlying cause, which I don't know, the code you 
provided shows that PHP wants to assign a variable based on data 
provided in the request, whereas your request (cited in the apache log) 
lacked said parameter ('type').



Best Regards

MK

On 11.02.2020 23:08, aeronex...@gmail.com wrote:

I decided to run bareos-dbcheck and found a few issues:

over 41000 orphaned path records,

8 orphaned Fileset records,

and 39 Restore records.

I then fixed the database using bareos-dbcheck.

I still cannot restore the file. I also checked a different file with no 
spaces in its name, but still no luck; same bconsole output as before.


I looked at /var/log/apache2/error.log and have a slightly different 
error message than before, as follows:


[Tue Feb 11 16:40:34.942033 2020] [php7:notice] [pid 8905] [client 
192.168.1.151:46774] PHP Notice:  Undefined index: type in 
/usr/share/bareos-webui/module/Restore/src/Restore/Form/RestoreForm.php 
on line 91, referer: 
https://linux-server/bareos-webui/restore/?jobid=7204=BEE-XPS15-fd=0=0=2000


The difference from before is that the IP address specified is now the 
one from which I am initiating the restore function. Also, jobid 7204 
is where that file is located. Given it is line 91, does that mean the 
info relevant to BEE-XPS15-fd is somehow messed up? Admittedly, I 
really do not understand the program.



On 2/11/20 3:59 PM, aeronex...@gmail.com wrote:
I cannot get bconsole to restore the file either (I have tried other 
files, but no success there either). Bconsole will list it, but it will 
not select it for download. Below is my final attempt to get bconsole 
to restore the file.



To select the JobIds, you have the following choices:
 1: List last 20 Jobs run
 2: List Jobs where a given File is saved
 3: Enter list of comma separated JobIds to select
 4: Enter SQL list command
 5: Select the most recent backup for a client
 6: Select backup for a client before a specified time
 7: Enter a list of files to restore
 8: Enter a list of files to restore before a specified time
 9: Find the JobIds of the most recent backup for a client
    10: Find the JobIds for a backup for a client before a specified 
time

    11: Enter a list of directories to restore for found JobIds
    12: Select full restore to a specified Job date
    13: Cancel
Select item:  (1-13): 2
Enter Filename (no path):CSMART Integration v03.docx
+-------+------------------------------------------------------------------------------------------------+---------------------+---------+-----------+----------+------------+
| jobid | name                                                                                           | starttime           | jobtype | jobstatus | jobfiles | jobbytes   |
+-------+------------------------------------------------------------------------------------------------+---------------------+---------+-----------+----------+------------+
| 7218  | /media/windows/Users/My Name/Documents/Projects/VPL-NASA/2020/SOA/CSMART Integration v03.docx | 2020-02-07 23:05:01 | B       | T         | 8578     | 904808687  |
| 7204  | /media/windows/Users/My Name/Documents/Projects/VPL-NASA/2020/SOA/CSMART Integration v03.docx | 2020-02-05 23:05:02 | B       | T         | 6929     | 2004146945 |
| 7197  | /media/windows/Users/My Name/Documents/Projects/VPL-NASA/2020/SOA/CSMART Integration v03.docx | 2020-02-04 23:05:11 | B       | T         | 5665     | 1648408939 |
| 7193  | /media/windows/Users/My Name/Documents/Projects/VPL-NASA/2020/SOA/CSMART Integration v03.docx | 2020-02-03 23:05:02 | B       | T         | 9602     | 2342841806 |
+-------+------------------------------------------------------------------------------------------------+---------------------+---------+-----------+----------+------------+


To select the JobIds, you have the following choices:
 1: List last 20 Jobs run
 2: List Jobs where a given File is saved
 3: Enter list of comma separated JobIds to select
 4: Enter SQL list command
 5: Select the most recent backup for a client
 6: Select backup for a client before a specified time
 7: Enter a list of files to restore
 8: Enter a list of files to restore before a specified time
 9: Find the JobIds of the most recent backup for a client
    10: Find the JobIds for a backup for a client before a specified 
time

    11: Enter a list of directories to restore for found JobIds
    12: Select full restore to a specified Job date
    13: Cancel
Select item:  (1-13): 7
Enter file names with paths, or < to enter a filename
containing a list of file names with paths, and Terminate
them with a blank line.
Enter full filename: CSMART Integration v03.docx
No database record found for: CSMART Integration v03.docx

Also looking at web-ui, it shows a RestoreFiles - 

Re: [bareos-users] Backing up WwebDAV

2020-02-28 Thread Spadajspadaj
With Bareos it's usually not a question of whether it is possible, but of 
how to do it ;-)


But seriously - since WebDAV is not storage as such, just a method of 
access, you have two options. Either you have access to the server from 
which the DAV share is served and you back it up locally - but I suspect 
that's not the case, since you're asking here. The second option is to use 
a plugin to emulate a file for the bareos fd (you always need an fd from 
which you're performing the backup). Since there is no dedicated plugin 
for DAV connectivity, you'd need to use either the bpipe or the python 
plugin to back up/restore a data stream, which you'd have to provide from 
a custom script retrieving data from the DAV share. You do need a 
"helper" host, though: you'd connect to the fd installed on host A, from 
which you launch a script connecting to host B, which hosts the DAV share, 
and download the data from it.
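
A rough sketch of the bpipe variant (the reader/writer scripts are hypothetical placeholders you would have to write yourself; the bpipe plugin itself ships with Bareos and must be loadable from the fd's Plugin Directory):

FileSet {
    Name = "webdav-via-bpipe"
    Include {
        Options {
            Signature = MD5
        }
        # bpipe syntax: file=<pseudo path in the catalog>:reader=<backup cmd>:writer=<restore cmd>
        Plugin = "bpipe:file=/WEBDAV/voip-data.tar:reader=/usr/local/bin/dav-fetch.sh:writer=/usr/local/bin/dav-push.sh"
    }
}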


Best Regards,

MK

On 28.02.2020 07:48, tklassen wrote:

Hello list,
I haven't found a way to back up WebDAV storage; is it even possible 
with Bareos?

I'm trying to back up my VoIP system, if possible.

Greetings!





Re: [bareos-users] How to get a warning of an unmounted shared folder (without any files or folders)?

2020-02-24 Thread Spadajspadaj
Bareos is very flexible in terms of preparing a job. You can run a 
"pre-job" script, either on the server's side or on the client's side; I 
suppose you'd prefer the client's side in this case.


https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_RunScript

And you can fail the backup job if the script returns an error (non-zero 
exit status).
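
A sketch of such a job fragment (the check script path is hypothetical; mountpoint -q from util-linux would be one way to implement the test):

Job {
    # ... usual Job directives ...
    RunScript {
        RunsWhen = Before
        RunsOnClient = Yes
        FailJobOnError = Yes
        Command = "/usr/local/bin/check-mount.sh /mnt/external-share"
    }
}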



Best regards,

MK

On 24.02.2020 15:52, 'DUCARROZ Birgit' via bareos-users wrote:

Hi list,

I am trying to figure out how to configure bareos to send me a warning or 
an error message in case of an empty directory.

E.g. I mount a share from an external server on the bareos server, and 
this share gets backed up. Let's say this mount suddenly becomes unmounted.

Bareos will now back up an empty folder and return an OK.

Is it possible to make it detect that this folder is empty and return 
a warning?


Thank you so much for any help.

Kind regards,
Birgit





Re: [bareos-users] EXTERNAL CUSTOMER BACKUP - BACKUP DE CLIENTE EXTERNO

2020-01-13 Thread Spadajspadaj
Remember that in a default installation the bareos director reaches out to 
the fd to initiate the backup job, but then the fd connects to the sd to 
send the backup data. If you don't allow incoming connections (which is 
understandable in the case of, e.g., DMZ-located clients), you need to use 
passive clients. 
https://docs.bareos.org/TasksAndConcepts/NetworkSetup.html#passive-clients
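
In the Director's client resource that amounts to something like (name, address and password are placeholders):

Client {
    Name = external-client-fd
    Address = client.example.com
    Password = "secret"
    Passive = yes
}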


On 13.01.2020 19:37, Milton Alves wrote:

Good evening guys, I'm new to the group and would like some help.

I have a backup project for users who have a database, but the client is 
external. I've already opened everything needed through the firewall and 
I can connect to the client normally, but the backup takes a very long 
time and then errors out.

Has anyone had this problem?








Re: [bareos-users] "Volume Retention" changes does not affect purged volumes

2020-01-20 Thread Spadajspadaj
If you change settings in the config file, they will be applied to new 
volumes only, as you already noticed. When old volumes are purged, only 
their status is changed; they are not deleted and created anew. So you 
have to manually update the volumes using the bconsole "update" command.
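
I.e. something like the following in bconsole (the volume name is a placeholder; "help update" shows the exact keywords for your version):

*update volume=DataIncremental-0001 volretention=30days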


On 20.01.2020 13:23, Micha Ballmann wrote:

Hello,

I have a strange problem. I changed the days of "Volume Retention".

For example:

Pool {
  Name                  = DataIncremental
  Pool Type             = Backup
  Recycle               = yes
  AutoPrune             = yes
  Volume Retention      = 15 days
  Maximum Volume Bytes  = 50G
  Label Format          = "DataIncremental-"
 }

changed to:

Pool {
  Name                  = DataIncremental
  Pool Type             = Backup
  Recycle               = yes
  AutoPrune             = yes
  Volume Retention      = 30 days
  Maximum Volume Bytes  = 50G
  Label Format          = "DataIncremental-"
 }

My problem now: new volumes were created correctly, but purged volumes 
weren't. The purged volumes keep the old "Volume Retention" value they 
were originally created with. In my case:


New Volumes = 30 days retention

Purged Volumes = 15 days retention

Why do the purged volumes inherit the retention time from their old 
settings instead of getting the new one?


Server:
-Ubuntu 18.04
-Postgresql 12
-Bareos 18.2.5

Best regards


Re: [bareos-users] File compression

2020-01-21 Thread Spadajspadaj
The question is whether the job output shows Compression. A volume is a 
storage unit. It may be a fixed file, it may be a tape. We don't know 
your configuration here. I, for example, have fixed-size 40G file-based 
volumes, so the volume size doesn't change, but the jobs can be bigger or 
smaller depending on the compression used.


Sample output from a job utilizing compression from my installation:

*list joblog jobid=2403
 2020-01-21 02:07:05 backup1-dir JobId 2403: Start Backup JobId 2403, 
Job=backup_srv2_MySQL.2020-01-21_01.00.00_11
 2020-01-21 02:07:05 backup1-dir JobId 2403: Connected Storage daemon 
at backup1:9103, encryption: PSK-AES256-CBC-SHA
 2020-01-21 02:07:05 backup1-dir JobId 2403: Using Device 
"vchanger-1-0" to write.
 2020-01-21 02:07:05 backup1-dir JobId 2403: Connected Client: srv2-fd 
at 172.16.2.193:9102, encryption: PSK-AES256-CBC-SHA
 2020-01-21 02:07:05 backup1-dir JobId 2403:  Handshake: Immediate TLS  
2020-01-21 02:07:05 backup1-dir JobId 2403:  Encryption: PSK-AES256-CBC-SHA
 2020-01-21 02:07:08 srv2-fd JobId 2403: Extended attribute support is 
enabled

 2020-01-21 02:07:08 srv2-fd JobId 2403: ACL support is enabled
 2020-01-21 02:07:06 bareos-sd JobId 2403: Connected File Daemon at 
172.16.2.193:9102, encryption: PSK-AES256-CBC-SHA
 2020-01-21 02:07:08 bareos-sd JobId 2403: Volume "vchanger-1_2_0076" 
previously written, moving to end of data.
 2020-01-21 02:07:08 bareos-sd JobId 2403: Ready to append to end of 
Volume "vchanger-1_2_0076" size=19859852960
 2020-01-21 02:07:08 srv2-fd JobId 2403: python-fd: Starting backup of 
/_mysqlbackups_/mysql.sql
 2020-01-21 02:07:09 srv2-fd JobId 2403: python-fd: Starting backup of 
/_mysqlbackups_/mail.sql
 2020-01-21 02:07:09 srv2-fd JobId 2403: python-fd: Starting backup of 
/_mysqlbackups_/gts.sql
 2020-01-21 02:07:19 srv2-fd JobId 2403: python-fd: Starting backup of 
/_mysqlbackups_/epsilone_rcube.sql
 2020-01-21 02:07:20 bareos-sd JobId 2403: Releasing device 
"vchanger-1-0" (/var/spool/vchanger/vchanger-1/0).
 2020-01-21 02:07:20 bareos-sd JobId 2403: Elapsed time=00:00:12, 
Transfer rate=1.065 M Bytes/second
 2020-01-21 02:07:20 backup1-dir JobId 2403: Insert of attributes batch 
table with 4 entries start
 2020-01-21 02:07:20 backup1-dir JobId 2403: Insert of attributes batch 
table done
 2020-01-21 02:07:20 backup1-dir JobId 2403: Bareos backup1-dir 
19.2.4~rc1 (19Dec19):
  Build OS:   Linux-5.3.14-200.fc30.x86_64 redhat CentOS 
Linux release 7.7.1908 (Core)

  JobId:  2403
  Job:    backup_srv2_MySQL.2020-01-21_01.00.00_11
  Backup Level:   Incremental, since=2020-01-20 01:00:05
  Client: "srv2-fd" 18.2.5 (30Jan19) 
Linux-4.4.92-6.18-default,redhat,CentOS Linux release 7.6.1810 (Core) 
,CentOS_7,x86_64

  FileSet:    "MySQL - all databases" 2019-04-10 01:00:00
  Pool:   "Offsite-eSATA" (From Job resource)
  Catalog:    "MyCatalog" (From Client resource)
  Storage:    "vchanger-1-changer" (From Pool resource)
  Scheduled time: 21-Jan-2020 01:00:00
  Start time: 21-Jan-2020 02:07:08
  End time:   21-Jan-2020 02:07:20
  Elapsed time:   12 secs
  Priority:   10
  FD Files Written:   4
  SD Files Written:   4
  FD Bytes Written:   12,790,416 (12.79 MB)
  SD Bytes Written:   12,791,390 (12.79 MB)
  Rate:   1065.9 KB/s
  Software Compression:   82.7 % (gzip)
  VSS:    no
  Encryption: no
  Accurate:   no
  Volume name(s): vchanger-1_2_0076
  Volume Session Id:  35
  Volume Session Time:    1579171330
  Last Volume Bytes:  19,872,665,728 (19.87 GB)
  Non-fatal FD errors:    0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Bareos binary info: pre-release version: Get official binaries 
and vendor support on bareos.com

  Termination:    Backup OK

Remember that compression takes place on the FD on a per-file basis. So, 
to verify that a job is indeed compressed, apart from checking the 
Compression field of the job log, you can just create a file of a known 
size (let's say 1GB), fill it with zeros, and then create a job to back 
up just this one file with compression. If it works as it should, you 
should see a job with very low "FD Bytes Written" and "SD Bytes Written" 
values, since zero-filled files compress very well.
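
For example, something along these lines (path and size are arbitrary):

dd if=/dev/zero of=/tmp/zerotest.bin bs=1M count=1024   # 1 GB of zeros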


And about the compression ratio - it all depends on the entropy of the 
input data. It's impossible to tell how much said data will compress 
without making some assumptions about it. As you can see, my job has 
about an 83% compression ratio, and that's pretty typical for text data. 
Other types of data may compress much worse or even grow a bit (already 
compressed data).


But the most important thing is that the job size doesn't have to match 
the volume size (unless you're creating volume-per-job media). 

Re: [bareos-users] Fileset update

2020-01-04 Thread Spadajspadaj
Firstly, try to respond to the group, not to me personally. (It's a 
common mistake :-))


Secondly, taking the docs literally, it should just mean that the job 
won't get upgraded to a full backup because the fileset changed, but the 
fileset itself should work as defined. But here someone more experienced 
with the Bareos internals should shed some light.


On 01.01.2020 20:31, aeronex...@gmail.com wrote:


So I found nothing wrong, or at least nothing obvious to the amateur, but

The changes I made were in the options section.

I get a hint from the manual that this may not cause an updated fileset 
to be used (under the fileset resource section). It says


"Any change to the list of the included files will cause Bareos to 
automatically create a new FileSet (defined by the name and an MD5 
checksum of the Include/Exclude File directives contents). Each time a 
new FileSet is created Bareos will ensure that the next backup is 
always a full backup. However, this does only apply to changes in 
directives |File (Dir->Fileset->Include)| and |File 
(Dir->Fileset->Exclude)|. Changes in other directives or the FileSet 
Options Ressource 
<https://docs.bareos.org/Configuration/Director.html?highlight=fileset#fileset-options> 
do not result in upgrade to a full backup."


Does this also mean that changes in the options sections of the 
fileset do not trigger usage of the updated fileset? I do not see a 
parameter for this in the test output, but I do find


bareos-dir (900): lib/parse_conf.cc:687-0 Item=IgnoreFileSetChanges 
def=yes defval=false


IgnoreFileSetChanges is something I have never played with in my configuration files.

bee

On 1/1/20 12:41 PM, spadaj wrote:

Confusing.
I'd try "bareos-dir -t" with reasonably high debug level (-d900, 
probably) to see which files are being read and which directives are 
being parsed. I know, it dumps a lot of stuff (you definitely want to 
redirect it to a file for reading with your favourite editor) but it 
should show what's going on.


Regards,
MK

On 01.01.2020 at 18:11, aeronex...@gmail.com wrote:
Thanks MK, but I did do the show fileset="Full Laptop Set" and it 
displayed the correct information.

I used the webui to see the fileset used in the backup, but looking at 
the actual backups, it used a previous version of the file (one 
specifically dated 8 Jan 2019).


The only way I could get it to update (shown below) was to change 
the resource name in the Job definition to "Full_Laptop_Set" and of 
course change the name in the fileset definition in the fileset 
directory.


before

Job {
   Name = "BEE-XPS15_1bkup"
   Level = Incremental
   Storage = "BEE-XPS15-storage"
   Pool = "BEE-XPS15"
   Client = "BEE-XPS15-fd"
   FileSet = "Full Laptop Set"
   JobDefs = "Common_Attributes"
}

After

Job {
   Name = "BEE-XPS15_1bkup"
   Level = Incremental
   Storage = "BEE-XPS15-storage"
   Pool = "BEE-XPS15"
   Client = "BEE-XPS15-fd"
   FileSet = "Full_Laptop_Set"
   JobDefs = "Common_Attributes"
}

and in fileset

from

FileSet {
   Name = "Full Laptop Set"
   Include {
 Options {
   Exclude = Yes

...

    }

 Options {
   Signature = SHA1
   Compression = GZIP6
 }
 File = "/home/bee"
 File = "/media/windows/Users/bee"
   }
}

to

FileSet {
   Name = "Full_Laptop_Set"
...

bee

On 1/1/20 11:35 AM, Spadajspadaj wrote:

On 01.01.2020 17:24, aeronex...@gmail.com wrote:
I have updated my fileset to include a new exclude statement. I 
have restarted Bareos (including rebooting the server). 
Unfortunately, Bareos continues to use the old version of the 
fileset definition for the backup. I do not find an update 
statement in the manual to force Bareos to use the new fileset 
definition. bconsole does show the updated fileset I have created. 
So what is the correct way to get Bareos to use the updated 
fileset in the backups?


I am on Ubuntu 18.04 server using Bareos 18.2.5-131.1 according to 
webui


tia



The question is how you "updated" the fileset and whether it's really 
used in the job definition.


First things to check - do a "show job=<job name>" and see 
what the fileset name is and if it matches the fileset you were 
editing.


Example from my installation:

*show job=srv2-linux
Job {
  Name = "srv2-linux"
  Client = "srv2-fd"
  FileSet = "Linux-minimal-fileset"
  JobDefs = "DefaultJob"
}

(to be on the safe side you can also do the "show client" on the 
client name to be sure you got the right machine configured for 
backup with this job)



Now if you have a FileSet (in my case it's called 
"Linux-minimal-fileset"), you do a "show fileset=<fileset name>" 
command. Like this:


*show fileset=Linux-minimal-fileset
FileSet {
  Name = "Linux-minimal-fileset"
  Include {
    Optio

Re: [bareos-users] Fileset update

2020-01-01 Thread Spadajspadaj

On 01.01.2020 17:24, aeronex...@gmail.com wrote:
I have updated my fileset to include a new exclude statement. I have 
restarted Bareos (including rebooting the server). Unfortunately, Bareos 
continues to use the old version of the fileset definition for the 
backup. I do not find an update statement in the manual to force 
Bareos to use the new fileset definition. bconsole does show the 
updated fileset I have created. So what is the correct way to get 
Bareos to use the updated fileset in the backups?


I am on Ubuntu 18.04 server using Bareos 18.2.5-131.1 according to webui

tia



The question is how you "updated" the fileset and whether it's really used in 
the job definition.


First things to check - do a "show job=<job name>" and see what 
the fileset name is and if it matches the fileset you were editing.


Example from my installation:

*show job=srv2-linux
Job {
  Name = "srv2-linux"
  Client = "srv2-fd"
  FileSet = "Linux-minimal-fileset"
  JobDefs = "DefaultJob"
}

(to be on the safe side you can also do the "show client" on the client 
name to be sure you got the right machine configured for backup with 
this job)



Now if you have a FileSet (in my case it's called 
"Linux-minimal-fileset"), you do a "show fileset=<fileset name>" 
command. Like this:


*show fileset=Linux-minimal-fileset
FileSet {
  Name = "Linux-minimal-fileset"
  Include {
    Options {
  Signature = MD5
  Compression = GZIP6
  OneFS = No
  AclSupport = Yes
  XattrSupport = Yes
  Fs Type = "btrfs"
  Fs Type = "ext2"
  Fs Type = "ext3"
  Fs Type = "ext4"
  Fs Type = "reiserfs"
  Fs Type = "jfs"
  Fs Type = "xfs"
  Fs Type = "zfs"
    }
    File = "/root"
    File = "/etc"
    File = "/home"
    File = "/var"
    File = "/srv"
    Plugin = "bpipe:file=/_rpmlist_/rpm.lst:reader=/usr/bin/rpm 
-qa:writer=/bin/bash -c ''xargs yum install -y''"

  }
  Exclude {
    File = "/var/lib/bareos"
    File = "/var/lib/bareos/storage"
    File = "/proc"
    File = "/tmp"
    File = "/var/tmp"
    File = "/.journal"
    File = "/.fsck"
    File = "/srv/backups"
  }
}

If the name of the fileset matches what you edited, but the contents 
don't, it probably means that you edited the wrong file (try grepping for 
the fileset name in your /etc, /opt, or /usr/local, depending on how you 
installed and configured Bareos).


BTW, it's not necessary to restart the director after a configuration 
change. It's enough to do a "reload" from bconsole.
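
For example, a quick sanity-check sequence (adjust the fileset name and 
config path to your installation):

grep -ri "Full Laptop Set" /etc/bareos/   # find which file defines the fileset
bareos-dir -t                             # syntax-check the configuration
echo reload | bconsole                    # make the running director pick it up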


Regards,

MK



[bareos-users] Re: MySQL->Postgres database converter strange behaviour

2020-03-12 Thread Spadajspadaj
BTW, if I'm seeing correctly, the dbcopy tool is inserting entries with 
INSERT INTO even though the help and the docs say that, unless I use the 
-i option, it should be using COPY FROM STDIN.


It seems that when I force some absurdly high limit on rows with the -l 
option (which is undocumented at the moment and not shown in the help; I 
found it in the sources) it tries to copy the rows.


We'll see how it goes.

On 11.03.2020 20:26, Spadajspadaj wrote:

Hello.

I've been trying to migrate my setup from MySQL to Postgres using the 
bareos-dbcopy utility. It is almost working. Almost, because it copies 
only one record from each table.


I ran it with strace and it seems that it's not me, it's him ;-)

Strace excerpt from the File table conversion:

[strace excerpt snipped - it is quoted in full in the original message below]

As you can see, it does SELECT with LIMIT 1 so there's no way it's 
gonna migrate more entries than one. Or I'm missing something here.


Anyone encountered something similar?

I'm using the 19.2.6 release from Centos 7 RPM packets.

Best regards,

MK




Re: [bareos-users] Re: MySQL->Postgres database converter strange behaviour

2020-03-12 Thread Spadajspadaj
It seems that with -l 1 (I have only some 5 million entries in the File 
table) the migration completed, and now my instance is running OK on 
Postgres.


On 12.03.2020 08:25, Spadajspadaj wrote:
BTW, if I'm seeing correctly, the dbcopy tool is inserting entries 
with INSERT INTO even though the help and the docs say that, unless I 
use the -i option, it should be using COPY FROM STDIN.

It seems that when I force some absurdly high limit on rows with the -l 
option (which is undocumented at the moment and not shown in the help; I 
found it in the sources) it tries to copy the rows.


We'll see how it goes.

On 11.03.2020 20:26, Spadajspadaj wrote:

Hello.

I've been trying to migrate my setup from MySQL to Postgres using the 
bareos-dbcopy utility. It is almost working. Almost, because it 
copies only one record from each table.


I ran it with strace and it seems that it's not me, it's him ;-)

Strace excerpt from the File table conversion:

[strace excerpt snipped - it is quoted in full in the original message below]

As you can see, it does SELECT with LIMIT 1 so there's no way it's gonna 
migrate more entries than one. Or I'm missing something here.

Re: [bareos-users] Howto move postgresql to another share?

2020-03-09 Thread Spadajspadaj

Every way is safe as long as you prepare for it :-)

But seriously, you have two main options

1) Do a database dump and restore to a bigger server. (the "logical 
migration")


2) Stop the postgresql service, make a new filesystem on a bigger 
device, move the database files there and mount the device under 
/var/lib/postgresql. (the "physical migration").


I'm sure there are tons of howtos around the web since that's not a 
Bareos-specific topic.


Of course, if you move your database to a different server, you'll have to 
point the Bareos director to the new server (update the config).
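
A rough sketch of both options (paths and the "bareos" database name are 
the usual defaults - adjust to your setup, and keep the old copy until 
the new one is verified):

# 1) logical migration - dump on the old server, restore on the new one
sudo -u postgres pg_dump bareos > bareos.sql
sudo -u postgres createdb -O bareos bareos      # on the target server
sudo -u postgres psql bareos < bareos.sql

# 2) physical migration - move the data directory to a bigger disk
systemctl stop bareos-dir postgresql
rsync -a /var/lib/postgresql/ /mnt/bigdisk/postgresql/
# mount the new filesystem at /var/lib/postgresql (fstab), then
systemctl start postgresql bareos-dir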


Best regards

MK

On 09.03.2020 11:59, 'DUCARROZ Birgit' via bareos-users wrote:

Hi list,

I wonder what the safest way is to move my database from 
/var/lib/postgresql to a bigger disk.


Actually my local disk /dev/sdb1 has a size of 157G and the database 
already consumes 80G. In 2 months I will be out of space.


Please can you help me? I'm a total newbie with PostgreSQL.

Regards,
Birgit





[bareos-users] MySQL->Postgres database converter strange behaviour

2020-03-11 Thread Spadajspadaj

Hello.

I've been trying to migrate my setup from MySQL to Postgres using the 
bareos-dbcopy utility. It is almost working. Almost, because it copies 
only one record from each table.


I ran it with strace and it seems that it's not me, it's him ;-)

Strace excerpt from the File table conversion:

write(1, "== table File ==\n", 25== table File ==
) = 25
write(1, "--> checking destination table...\n", 34--> checking 
destination table...

) = 34
sendto(4, "Q\0\0\0\37SELECT * FROM File LIMIT 1\0", 32, MSG_NOSIGNAL, 
NULL, 0) = 32

poll([{fd=4, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=4, revents=POLLIN}])
recvfrom(4, 
"T\0\0\1\27\0\vfileid\0\0\0C\34\0\1\0\0\0\24\0\10\377\377\377\377\0\0fileindex\0\0\0C\34\0\2\

0\0\0\27\0\4\377\377\377\377\0\0jobid\0\0\0C\34\0\3\0\0\0\27\0\4\377\377\377\377\0\0pathid\0\0\0C\34\0\4\
0\0\0\27\0\4\377\377\377\377\0\0deltaseq\0\0\0C\34\0\5\0\0\0\25\0\2\377\377\377\377\0\0markid\0\0\0C\34\0
\6\0\0\0\27\0\4\377\377\377\377\0\0fhinfo\0\0\0C\34\0\7\0\0\6\244\377\377\0\24\0\4\0\0fhnode\0\0\0C\34\0\
10\0\0\6\244\377\377\0\24\0\4\0\0lstat\0\0\0C\34\0\t\0\0\0\31\377\377\377\377\377\377\0\0md5\0\0\0C\34\0\
n\0\0\0\31\377\377\377\377\377\377\0\0name\0\0\0C\34\0\v\0\0\0\31\377\377\377\377\377\377\0\0C\0\0\0\rSEL
ECT 0\0Z\0\0\0\5I", 16384, 0, NULL, NULL) = 300
sendto(4, "Q\0\0\0\nBEGIN\0", 11, MSG_NOSIGNAL, NULL, 0) = 11
poll([{fd=4, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=4, revents=POLLIN}])
recvfrom(4, "C\0\0\0\nBEGIN\0Z\0\0\0\5T", 16384, 0, NULL, NULL) = 17
write(1, "--> copying...\n", 15--> copying...
)    = 15
poll([{fd=3, events=POLLIN|POLLPRI}], 1, 0) = 0 (Timeout)
write(3, "\204\0\0\0\3SELECT `DeltaSeq`, `Fhinfo`, `Fhnode`, `FileId`, 
`FileIndex`, `JobId`, `LStat`, `MD

5`, `MarkId`, `Name`, `PathId` FROM File LIMIT 1", 136) = 136
read(3, 
"\1\0\0\1\v4\0\0\2\3def\6bareos\4File\4File\10DeltaSeq\10DeltaSeq\f?\0\5\0\0\0\2 
\0\0\0\\0\0\

3\3def\6bareos\4File\4File\6Fhinfo\6Fhinfo\f?\0\25\0\0\0\366\0\0\0\0\\0\0\4\3def\6bareos\4File\4File\
6Fhnode\6Fhnode\f?\0\25\0\0\0\366\0\0\0\0\\0\0\5\3def\6bareos\4File\4File\6FileId\6FileId\f?\0\24\0\0
\0\10#B\0\0\0006\0\0\6\3def\6bareos\4File\4File\tFileIndex\tFileIndex\f?\0\n\0\0\0\3 
@\0\0\0.\0\0\7\3def\

6bareos\4File\4File\5JobId\5JobId\f?\0\n\0\0\0\3)P\0\0\0.\0\0\10\3def\6bareos\4File\4File\5LStat\5LStat\f
?\0\377\0\0\0\374\221\20\0\0\0*\0\0\t\3def\6bareos\4File\4File\3MD5\3MD5\f?\0\377\0\0\0\374\221\20\0\0\00
00\0\0\n\3def\6bareos\4File\4File\6MarkId\6MarkId\f?\0\n\0\0\0\3 
\0\0\0\0,\0\0\v\3def\6bareos\4File\4File

\4Name\4Name\f?\0\377\377\0\0\374\221P\0\0\\0\0\f\3def\6bareos\4File\4File\6PathId\6PathId\f?\0\n\0\0
\0\3)P\0\0\0\5\0\0\r\376\0\0\"\0t\0\0\16\0010\0010\0010\01020815008\0011\00423492A 
A IH/ B A A CA eK A A
BXgj8z Bx9k46 Bbgeq5 A A 
L\26oFI6qIvho1W8LctK9rWsqQ\0010\fkarnawal.tex\0043097\5\0\0\17\376\0\0\"\0", 
163

84) = 711
sendto(4, "Q\0\0\0\377INSERT INTO File (deltaseq, fhinfo, fhnode, 
fileid, fileindex, jobid, lstat, md5, m
arkid, name, pathid) VALUES ('0','0','0','20815008','1','2349','A A IH/ 
B A A CA eK A A BXgj8z Bx9k46 Bbg
eq5 A A L','oFI6qIvho1W8LctK9rWsqQ','0','karnawal.tex','3097')\0", 256, 
MSG_NOSIGNAL, NULL, 0) = 256

poll([{fd=4, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=4, revents=POLLIN}])
recvfrom(4, "C\0\0\0\17INSERT 0 1\0Z\0\0\0\5T", 16384, 0, NULL, NULL) = 22
write(1, "--> updating sequence\n", 22--> updating sequence
) = 22
sendto(4, "Q\0\0\0Fselect setval(' file_fileid_seq', (select max(fileid) 
from file))\0", 71, MSG_NOSIGNAL

, NULL, 0) = 71
poll([{fd=4, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=4, revents=POLLIN}])
recvfrom(4, 
"T\0\0\0\37\0\1setval\0\0\0\0\0\0\0\0\0\0\24\0\10\377\377\377\377\0\0D\0\0\0\22\0\1\0\0\0\010

20815008C\0\0\0\rSELECT 1\0Z\0\0\0\5T", 16384, 0, NULL, NULL) = 71
sendto(4, "Q\0\0\0\vCOMMIT\0", 12, MSG_NOSIGNAL, NULL, 0) = 12
poll([{fd=4, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=4, revents=POLLIN}])
recvfrom(4, "C\0\0\0\vCOMMIT\0Z\0\0\0\5I", 16384, 0, NULL, NULL) = 18
write(1, "--> success\n", 12--> success
)   = 12

As you can see, it does SELECT with LIMIT 1 so there's no way it's gonna 
migrate more entries than one. Or I'm missing something here.


Anyone encountered something similar?

I'm using the 19.2.6 release from Centos 7 RPM packets.

Best regards,

MK



Re: [bareos-users] backup does not move to next storage

2020-04-21 Thread Spadajspadaj
Unless you really need the automatic fail-over to the next storage, you 
can also set up a vchanger. That way you can also control a fixed number 
of fixed-size volumes. I prefer this approach to dynamically created 
media files, but YMMV.



On 21.04.2020 16:34, Brock Palen wrote:

It is possible to move volumes between devices; see:
https://docs.bareos.org/TasksAndConcepts/HowToManuallyTransferDataVolumes.html

That said, if your volumes and devices are all interchangeable, I would configure 
the system to work with them as a single entity.

Generally this means all your disk-based volumes are in a single folder (really 
not an issue), so the device paths all point to the same location.

For disk volumes, devices are 'virtual' and almost not real - a holdover from 
scheduling tape drives and other external media.


Brock Palen
1 (989) 277-6075
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting




On Apr 20, 2020, at 4:01 PM, Erich Eckner  wrote:


On Mon, 20 Apr 2020, Erich Eckner wrote:


Hi,

I have a file-based backup with bareos, with multiple disks to store the files 
on:

- 8<-
Device {
Name = mnt
Media Type = File
Archive Device = /var/lib/bareos/storage/mnt
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = yes;
Description = "File device. A connecting Director must have the same Name and 
MediaType."
Maximum Concurrent Jobs = 10
}
- >8-
and
- 8<-
Device {
Name = mnt2
Media Type = File
Archive Device = /var/lib/bareos/storage/mnt2
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = yes;
Description = "File device. A connecting Director must have the same Name and 
MediaType."
Maximum Concurrent Jobs = 10
}
- >8
upto mnt5.
It has used other stores than mnt in the past:

# df -h /var/lib/bareos/storage/mnt*
Filesystem   Size  Used Avail Use% Mounted on
/dev/mapper/bcrypt   1.4T  1.3T 0 100% /var/lib/bareos/storage/mnt
/dev/mapper/bcrypt2  1.4T  973G  332G  75% /var/lib/bareos/storage/mnt2
/dev/mapper/bcrypt3  1.4T  299G 1006G  23% /var/lib/bareos/storage/mnt3
/dev/mapper/bcrypt4  916G   77M  870G   1% /var/lib/bareos/storage/mnt4
/dev/mapper/bcrypt5  916G   77M  870G   1% /var/lib/bareos/storage/mnt5

However, now it is stuck, trying to put "Differential-0464" onto "mnt", which 
is full (as you can see above):

20-Apr 08:02 bareos-sd JobId 869: Warning: stored/mount.cc:274 Open device "mnt" 
(/var/lib/bareos/storage/mnt) Volume "Differential-0464" failed: ERR=stored/dev.cc:746 
Could not open: /var/lib/bareos/storage/mnt/Differential-0464, ERR=No such file or directory

Where is my error?

regards,
Erich


Hi,

I restarted the machine which runs the storage daemon and the director (which 
aborted all the differential backups). This evening, it scheduled the usual 
incremental backups, reused some old volumes and now runs out of incremental 
volume space:

20-Apr 21:50 bareos-sd JobId 896: Warning: stored/mount.cc:274 Open device "mnt" 
(/var/lib/bareos/storage/mnt) Volume "Incremental-0471" failed: ERR=stored/dev.cc:746 
Could not open: /var/lib/bareos/storage/mnt/Incremental-0471, ERR=No such file or directory

I fear I have done something really simple wrong, because this looks like it 
should be a simple setup :-(

Just for sake of completeness, here my 
/etc/bareos/bareos-dir.d/storage/File.conf:

- 8<
Storage {
  Name = File
  Address = sd.example.com
  Password = "secret"
  Device = mnt
  Device = mnt2
  Device = mnt3
  Device = mnt4
  Device = mnt5
  Media Type = File
  Maximum Concurrent Jobs = 20
}
- >8

Another question as a workaround: Is it possible to move volumes from one 
device to another? If so: how? Can I simply move the file and bareos will 
correctly identify it again? Or is there a command in bconsole to do that (I 
could not find one)?

regards,
Erich



[bareos-users] Modulo schedules with weekdays

2020-03-31 Thread Spadajspadaj

Hi.

Just to make sure that I understand my config correctly, because the manual 
is a bit unclear on this.


What I wanted was to make a schedule that runs every other week on a 
given day of the week (e.g. every second Saturday, or every third Thursday).


The examples of the modulo scheduler use just day numbers or week 
numbers so it's not clear how to achieve what I wanted. But I tried


Schedule {
  Name = "BiCycle"
  Run = Full w01/w02 sat at 21:00
  Run = Differential w02/w02 sat at 21:00
  Run = Incremental mon-fri at 21:00
}

And after reloading the config got

*show schedule=BiCycle
Schedule {
  Name = "BiCycle"
  run = Full Sat 
w00,w02,w04,w06,w08,w10,w12,w14,w16,w18,w20,w22,w24,w26,w28,w30,w32,w34,w36,w38,w40,w42,w44,w46,w48,w50,w52 
at 21:00
  run = Differential Sat 
w01,w03,w05,w07,w09,w11,w13,w15,w17,w19,w21,w23,w25,w27,w29,w31,w33,w35,w37,w39,w41,w43,w45,w47,w49,w51,w53 
at 21:00

  run = Incremental Mon-Fri at 21:00
}

So I assume that's pretty much what I wanted to do, right? It could use 
some more straightforward description in the manual though. :-)



Best regards,

MK



Re: [bareos-users] job.conf - disable job

2020-03-26 Thread Spadajspadaj

https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_Enabled
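
That directive goes straight into the job resource, e.g.:

Job {
  Name = "some-job"
  Enabled = no   # stays disabled across director reloads and restarts
  ...
}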

On 26.03.2020 10:07, Martin Krämer wrote:

Hi All,

via bareos-webui -> "Jobs" -> "Actions" I can disable individual jobs.
As said in the action button comment, "Disabling is a temporary 
operation until the director reloads".

Is there a way I can disable a job by default via the job.conf file so 
that this stays active even after reloading the director?


I am using Bareos Version: 19.2.6 (11 February 2020) 
Linux-5.4.7-100.fc30.x86_64 debian Debian GNU/Linux 10 (buster)


Thanks


Re: [bareos-users] Deleting old volumes

2020-04-25 Thread Spadajspadaj
Firstly, you cannot have Bareos delete files without dirty tricks. It 
can truncate volumes on purge, as someone already pointed out.


If I were you and wanted to have a fixed number of backups regardless of 
any other parameters, I'd go for Maximum Volume Jobs = 1 and appropriate 
retention and Maximum Volumes settings. Then I'd go for a separate pool 
for each client, as sketched below.


This way you'd have a fixed number of volumes in rotation, you'd have a 
separate media file for each job, and with appropriate retention settings 
you'd recycle the oldest volume each time.


One caveat - if you happen to have a job return an error state and want 
to rerun it earlier than it's normally scheduled, you'd have more 
volumes used than you planned, so you'd have to manually purge the volume 
containing the errored job.
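
A sketch for the 7-backups scenario (pool name, label and counts made up):

Pool {
  Name = client1-pool
  Pool Type = Backup
  Maximum Volume Jobs = 1     # one backup job per volume file
  Maximum Volumes = 7         # never more than 7 volumes in rotation
  Volume Retention = 6 days   # the oldest volume is recyclable by day 7
  Recycle = yes
  AutoPrune = yes
  Recycle Oldest Volume = yes
  Label Format = "client1-"
}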


On 24.04.2020 22:37, Valentin Dzhorov wrote:
I understand the core philosophy of Bareos and the fact that, like its 
predecessor Bacula, it was meant to back up to tapes. My storage however 
consists of HDDs only and there is no need to change tape drives. I 
am reading through the documentation and I thought I got it right, but 
apparently I didn't. My goal is the following: I want to have a full 7 
days of backups at any given time for each client that is backed up. A 
job is configured like so:


[root@directorbareos ~]# cat 
/etc/bareos/bareos-dir.d/job/ufo1.delta.bg.conf

Job {
  Name = example.com
  Type = Backup
  Level = Full
  Client = example.com
  FileSet= LinuxAll
  Messages = Standard
  Storage = examplestorage
  Pool = LinuxAll
  Schedule = Weekly_10pm
  Priority = 20
  Allow Mixed Priority = yes
  Reschedule On Error = yes
  Reschedule Interval = 180
  Reschedule Times = 2
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
}


And my volume is configured like so:


Pool {
  Name = example.com
  Pool Type = Backup
  Recycle = yes
  Auto Prune = yes
  Volume Retention = 7 days
  Label Format = "example.com-"
  Volume Use Duration = 14d
  Recycle Oldest Volume = yes
  Storage = examplestorage
}


So, how can I achieve having a full 7 days of backups in the most 
efficient way, without wasting space, and with the old files automatically 
deleted from the volume, not just from the catalog?



Re: [bareos-users] backup does not move to next storage

2020-04-22 Thread Spadajspadaj


On 21.04.2020 17:44, Erich Eckner wrote:

On Tue, 21 Apr 2020, Spadajspadaj wrote:

Unless you really need the automatic fall-over to the next storage, 
you can also set up a vchanger. That way you can also control fixed 
number of fixed-size volumes. I prefer this approach to dynamicaly 
created media files but YMMV.


Yes, I thought about using something like this in the beginning, too - but 
I thought it would be more complex, and thus decided on the current 
variant. Can you share the relevant vchanger config with me, please?


Firstly I must say that for me it was essential to have removable 
storage. This is my home backup setup, so I'm backing up to 
USB-attached disks. I have a single USB-SATA cradle in which I'm simply 
changing disks, so I needed a low-effort solution with which I could 
easily keep some backups offline. I'm not sure how it would work with 
permanently attached storage.


I used a howto as a reference, but I see that it's no longer available. 
Maybe that's permanent, maybe not. Anyway, for reference, the howto used 
to be here: http://www.revpol.com/offsitebackups


Having said that, here goes my setup (on CentOS if it has any significance).

First element of the config is the automount setup.

/etc/auto.master:
+dir:/etc/auto.master.d
+auto.master
/srv/backupstor /etc/auto.master.d/auto.vchanger --timeout=30

/etc/auto.master.d/auto.vchanger:
* -fstype=auto,rw :/dev/disk/by-uuid/&

With this setup I can just do a cd /srv/backupstor/UUID and the 
automounter should mount the drive if I know the UUID (easily obtainable 
by blkid).


So in case of:

blkid /dev/sda1

/dev/sda1: UUID="7B06F568090A6704" TYPE="ntfs" PTTYPE="dos" 
PARTUUID="aae626c7-da6c-4707-8c91-0cb52703893c"


I just do cd /srv/backupstor/7B06F568090A6704 and am in the partition's 
root directory (regardless of the filesystem type - in this case it's NTFS).



Second part is the vchanger setup

/etc/vchanger/vchanger.conf:

Storage Resource = vchanger-1
User = bareos
Group = bareos
Logfile = /var/log/vchanger/vchanger-1.log
Work Dir = /var/spool/vchanger/vchanger-1
Log Level = 7
Magazine = /srv/backupstor/bde48cec-03db-4f36-bb68-dbd14455b700
Magazine = /srv/backupstor/7B06F568090A6704
Magazine = /srv/backupstor/46786fd5-4bc4-4799-aaa4-a7459bc5b603
Magazine = /srv/backupstor/881564a0-63db-4e49-8f31-9a4ccd7d5d22
bconsole config = /etc/bareos/bconsole.conf

Here you can see that I'm using four different disks in rotation.

If I remember correctly, I had to do some file rights sanitization on 
logfile and work dir.



And the Bareos part looks like this:

Storage daemon:

/etc/bareos/bareos-sd.d/autochanger/vchanger1.conf
Autochanger {
    Name = vchanger1
    Device = vchanger-1-0
    Changer Command = "/usr/local/bin/vchanger %c %o %S %a %d"
    Changer Device = /etc/vchanger/vchanger.conf
}

/etc/bareos/bareos-sd.d/device/vchanger1.conf

Device {
    Name = vchanger-1-0
    DriveIndex = 0
    Autochanger = yes
    Device Type = File
    Media Type = Offsite-File
    Label Media = no
    Random Access = yes
    Removable Media = yes
    Automatic Mount = yes
    Archive Device = /var/spool/vchanger/vchanger-1/0
}

Director:

/etc/bareos/bareos-dir.d/storage/vchanger-1.conf:
Storage {
  Name = vchanger-1-changer
  Address = backup1    # N.B. Use a fully qualified name 
here (do not use "localhost" here).

  Password = "password"
  Device = vchanger1
  Media Type = Offsite-File
  Autochanger = yes
}

/etc/bareos/bareos-dir.d/pool/Offsite-eSATA.conf:
Pool {
  Name = Offsite-eSATA
  Storage = vchanger-1-changer
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 3 weeks
  Recycle Oldest Volume = yes
  Maximum Volume Bytes = 42949672960  # Small (40GB) easy to 
move/transfer volume sizes

}

In this setup, after initially plugging in a disk, I create volumes like this:

/usr/local/bin/vchanger -u bareos -g bareos /etc/vchanger/vchanger.conf 
createvols 3 104


Where 3 is the disk number (in this case, if we look it up in vchanger.conf 
we'll see that it corresponds to 
/srv/backupstor/881564a0-63db-4e49-8f31-9a4ccd7d5d22 (units are numbered 
from 0)) and 104 is the number of volumes to create.


Initially the volumes are created empty and can grow up to Maximum Volume 
Bytes, so in this case they fill a 4TB drive.


It's supposed to perform a label command automatically, but for some 
reason it wasn't working too well for me and I didn't have much time to 
debug it, so after creating volumes I run bconsole and do "label barcodes".


After that, the only commands I use after the initial creation of volumes 
are mount and update slots. Just keep in mind that the volume name is 
one off vs. the slot number in vchanger. So if the console message wants 
you to, for example, mount volume vchanger-1_1_0030, you have to mount slot 29.


Hope this makes some sense :-)

If you have any more questions don't hesitate to ask. I might be able to 
recall 

Re: [bareos-users] Re: How to update from 17.2 to 18.2

2020-04-29 Thread Spadajspadaj
In short, yum history rollback is not a good way to do anything other 
than very small package changes, e.g. downgrading some package from 
version 1.4.27 to 1.4.26, or completely removing a package you just 
installed for testing, along with all its dependencies.


Longer explanation:

Yum history rollback just finds out which packages are to be installed 
(and in which versions) and which are to be removed. After that it 
simply downloads the needed package files and runs the appropriate 
install/remove actions. It's in no way similar to any "restore snapshot" 
operation. In the case of files contained in the packages, the situation 
is easy - the "new" package installs its files, then the files belonging 
to the "old" package but not belonging to the "new" one are removed. Simple.


It gets more tricky with package scripts. RPM runs like this:

1. Runs pre-install script

2. Installs files

3. Runs post-install script

4. Runs pre-uninstall script

5. Removes files

6. Runs post-uninstall script


Steps 1-3 are run if there's any package installation or update 
(upgrade/downgrade) and steps 4-6 are run if there's any package removal 
(uninstall/upgrade/downgrade).


Bear with me, it's getting more magical :-)

The scripts executed during these operations are run almost completely 
without any additional environment or context. The only parameter is "a 
number representing the number of instances of the package currently 
installed on the system, /after/ the current package has been installed 
or erased" (http://ftp.rpm.org/max-rpm/s1-rpm-inside-scripts.html). This 
means that install scripts might be run with an argument of 1 (initial 
install) or 2 (upgrade/downgrade), since after the install stage the 
number of package instances on the system will be equal to those numbers, 
and the uninstall scripts might be run with an argument of 0 (complete 
removal) or 1 (upgrade/downgrade), since that's how many instances of the 
package will be left after the uninstall phase. There is probably an 
option for


You can see how this is used by querying a package with rpm -q --scripts. A 
good example here is the openssh-server package, which runs some actions 
only on complete removal.
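
The pattern inside a spec file looks roughly like this (a hypothetical 
%preun scriptlet, not the actual openssh-server one):

%preun
# $1 = number of package instances left after this transaction step:
# 0 means complete removal, 1 means we're mid-upgrade/downgrade
if [ $1 -eq 0 ]; then
    /usr/bin/systemctl --no-reload disable sshd.service >/dev/null 2>&1 || :
fi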


Unfortunately, the rpm scripts have no more context about the package 
versions (and don't have any reasonable way of querying the state of 
the rpm database in the middle of a transaction), so there is no way of 
knowing whether the update operation is an upgrade or a downgrade. And we 
also need to remember that "big upgrades" usually involve some "external" 
changes, like database schema upgrades, which are not easily 
rollbackable (as we said before - rpm rollback doesn't work on a 
system-level snapshot). Therefore it's usually assumed that the update 
operation is indeed an upgrade, not a downgrade. And it's a 
good practice to check prerequisites before executing the needed "upgrade 
scripts" (like checking the database schema version beforehand so as not 
to apply an upgrade script twice). Sometimes packagers just create a 
package with new files and tell you to upgrade the dependencies (like the 
database schema) on your own.


So, long story short - no, there is no simple way to perform a yum 
history rollback and have a usable system if the upgrade had been "deep".


Therefore, if you upgrade a Bareos installation, I'd do a full 
configuration backup and a database dump first, to have a working state 
to which you can roll back.



Hope this wasn't too complicated :-)


On 28.04.2020 14:37, Goncalo Sousa wrote:

yum history rollback [id] doesn't do the job?

On Tuesday, 28 April 2020 at 13:12:40 UTC+1, Oleg Volkov wrote:

No, I am talking about rollback, reverting back to the previous version.
Upgrading a production server suggests that you have a rollback
plan.
Anyway, it is your choice, not mine.



Re: [bareos-users] Bareos data encryption

2020-04-29 Thread Spadajspadaj



On 29.04.2020 14:09, Andreas Rogge wrote:

On 29.04.20 at 13:22, Valentin Dzhorov wrote:

Can anyone let me know what I am doing wrong here? Thank you all in advance!

That really depends on where you see the "Encryption: None" message.
In Bareos' context encryption can mean three different things:
- the PKI-based encryption of the backed up data (which is what you're
trying)
- the transport encryption (SSL) between dir, fd and sd
- hardware-assisted tape-encryption

I think you're seeing the message because the director cannot establish
a secure connection to the FD. However, PKI-based content-encryption may
still take place.

To test content-encryption you can try a restore of the data to another
client that has no access to the required key material.
It's also useful to check the job status. The Encryption field of the 
job status should indicate whether the job has been encrypted or not.




Re: [bareos-users] Confused about some features, who can explain me what happens

2020-05-12 Thread Spadajspadaj



On 12.05.2020 11:34, 'DUCARROZ Birgit' via bareos-users wrote:
2) I thought that the incremental level writes into an incremental 
volume, the full level writes into a full volume, etc. But it does not. 
Why?




Depends how your job is configured.

You might just use the Pool directive to specify a pool into which all 
backups will be saved, but you can also specify Differential Backup Pool, 
Incremental Backup Pool and Full Backup Pool, which will cause the 
different backup types to be split into separate pools.


The defined pool types have nothing to do with pool names!

You can easily have a pool named "Incremental" used for full backups. 
But that's a bit silly ;-)
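
A sketch of the per-level split (pool names made up):

Job {
  Name = "client1-backup"
  Pool = Full-Pool                       # default/fallback pool
  Full Backup Pool = Full-Pool
  Differential Backup Pool = Diff-Pool
  Incremental Backup Pool = Inc-Pool
  ...
}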



Best regards,

MK

PS: https://docs.bareos.org/Configuration/Director.html#job-resource



Re: [bareos-users] backup does not move to next storage

2020-05-10 Thread Spadajspadaj
Since vchanger emulates the mtx-changer interface, theoretically it 
should be possible to have more than one "device".


I've never tried that myself but you could try defining another device 
as per 
https://docs.bareos.org/TasksAndConcepts/AutochangerSupport.html#multiple-devices
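
Untested, but based on my config from earlier in this thread it would 
probably mean adding a second drive to the same changer, something like:

Autochanger {
    Name = vchanger1
    Device = vchanger-1-0, vchanger-1-1
    Changer Command = "/usr/local/bin/vchanger %c %o %S %a %d"
    Changer Device = /etc/vchanger/vchanger.conf
}

Device {
    Name = vchanger-1-1
    DriveIndex = 1
    Autochanger = yes
    Device Type = File
    Media Type = Offsite-File
    Label Media = no
    Random Access = yes
    Removable Media = yes
    Automatic Mount = yes
    Archive Device = /var/spool/vchanger/vchanger-1/1
}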


And you can of course allow interleaving multiple jobs on a single tape 
by allowing multiple concurrent jobs (I don't do it myself, but I read 
that it can cause some problems if you lose the catalog and try to 
restore from the media only).



Best regards,

MK

On 08.05.2020 15:51, Erich Eckner wrote:


Hi,

I have a follow-up to my original problem: Is it possible to make 
vchanger load more than one virtual tape simultaneously - preferably 
one for each pool? Because I now have a bunch of incremental backups 
waiting for a drive, because there are some long-running full backups 
still in progress.


Currently, I can only think of the possibility to run multiple 
vchangers, but then I would need to divide my volumes into "Full", 
"Differential" and "Incremental" manually - and I'd like to avoid this.


Cheers,
Erich

On Sun, 26 Apr 2020, spadaj wrote:


No problem, mate.
Hope this is of some help. If you have any questions, don't hesitate 
to ask. I can't guarantee I'll be able to give reasonable advice but 
I'll try :-)


Cheers.

On 25.04.2020 at 21:53, Erich Eckner wrote:


Hi spadajspadaj,

I never came back to say "thanks", so here you go:
Thank you!








Re: [bareos-users] show volumes, that only have data from specific client. cleanup database and volumes

2020-05-19 Thread Spadajspadaj

On 19.05.2020 10:57, Miguel da Silva wrote:

Hello,

I have a massive Bareos setup and one of my clients (let's name him 
"client1") has had backup errors or slow backups.

So I investigated and found the problem on client1.

Now I want to remove all old traces of client1's data from the 
database and from the actual volumes on the storage.
Since the first storage (600 TB) is almost full, I want to regain some 
space.


Is there a neat way to get volume names that only have data from 
"client1"?


I guess after that I would have to
1. delete the volumes from the storage,
2. bconsole > delete volume=volname1,
3. somehow clean the database, which has also grown to around 400 GB 
when dumped.


Help would be much appreciated.

I don't want to make any errors or delete the wrong volumes, since 200 
clients are now being backed up and 200 more are coming when the next 
storage arrives.


There are two approaches:

1) The Bareos way - purge the jobs associated with the given client and 
let Bareos do its job. And that's the approach I'd recommend. Purge the 
jobs from the given client, make sure you have the purge action set to 
truncate ("Action On Purge = Truncate" in the pool resource) and then 
prune all volumes. If volumes are no longer associated with any jobs 
(because you purged the jobs from a particular client) they get purged. 
When they get purged, they are truncated. Et voilà.
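
A minimal bconsole sketch of that sequence - the client, pool and volume 
names below are placeholders, not taken from your setup:

# purge all jobs of the problematic client (ignores retention!)
*purge jobs client=client1-fd
# prune a volume; volumes left without any jobs get purged and truncated
*prune volume=Full-0001 yes
# truncate all already-purged volumes of a pool in one go
*purge volume action=truncate pool=Full storage=File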


2) The manual fiddling way - you get a list of volumes containing only 
jobs from the given client with a clever SQL query (like an inner join on 
jobmedia and job with a where on the particular client, and a group by 
media id with a count to select only volumes containing this single 
client's jobs), then you manually delete the volumes and manually delete 
the files from storage. It can be done but it's not something that I'd 
recommend since the previous approach is IMO much safer.
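
For illustration, a sketch of such a query against the standard Bareos 
catalog schema (PostgreSQL) - the client name is a placeholder and the 
statement is untested, so treat it as a starting point:

-- volumes whose jobs all belong to one given client
SELECT m.volumename
FROM media m
JOIN jobmedia jm ON jm.mediaid = m.mediaid
JOIN job j ON j.jobid = jm.jobid
GROUP BY m.mediaid, m.volumename
HAVING COUNT(DISTINCT j.clientid) = 1
   AND MIN(j.clientid) = (SELECT clientid FROM client
                          WHERE name = 'client1-fd');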


You didn't tell us whether you're using a one-job-per-volume setup or any 
other particular settings, because that could make your task a little bit 
easier.


The database issue is another story. Depending on the database type 
(MySQL? Postgres? I don't suppose you're running this installation on 
SQLite) and even the database configuration (MySQL table engine and - for 
example - the file-per-table setting in case of InnoDB) you might need to 
do different things. Of course a dump/restore will give you a "shrunk 
database" but that's a very radical approach. You might get away with 
vacuuming the Postgres database, but keep in mind that for the vacuuming 
process you temporarily need additional storage (my database shrunk from 
3.3GB to 2.6GB after vacuuming but needed over 5GB during vacuuming) and 
the process itself is time-consuming.



Best regards

MK

--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/ff23cb1e-b735-cd4c-87bc-7391033b865f%40gmail.com.


Re: [bareos-users] show volumes, that only have data from specific client. cleanup database and volumes

2020-05-19 Thread Spadajspadaj

On 19.05.2020 13:08, Miguel da Silva wrote:


There are two approaches:

1) The Bareos way - purge the jobs associated with the given client
and let Bareos do its job. And that's the approach I'd recommend.
Purge the jobs from the given client, make sure you have the purge
action set to truncate ("Action On Purge = Truncate" in the pool
resource) and then prune all volumes. If volumes are no longer
associated with any jobs (because you purged the jobs from a
particular client) they get purged. When they get purged, they are
truncated. Et voilà.

I have an always incremental setup; would it prune/truncate the 
volumes anyway? The retention time for the last possible backup is set 
to 1 year, would I have to wait that long?



There are two operations. One is prune, which means "clear the 
information about volumes/jobs/files/whatever as long as it's safe" 
(i.e. there are no more jobs on a volume, or the retention period 
has already expired).


The other one is purge, which means "do as I say, don't mind the volume 
contents, retention periods and so on".


That's why I'm suggesting _purging_ jobs, because that's what you 
specified as the thing you wanted to do - just remove the jobs without 
waiting for retention periods.


But all other operations should be prune. If you prune the tape, Bareos 
checks retention periods and only "clears" the media if it can safely do 
so. That's why, if you purge the jobs first, when pruning media you'll 
only "clear" the media that have no more non-expired jobs associated 
with them.


And as the empty volumes get purged (pruning causes Bareos to purge 
volumes if it's safe to do so), the action defined in the pool's "Action 
On Purge" setting is called. If it is set to "truncate", the volume file 
is truncated to a bare header.
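
As a sketch, the relevant bit of a pool resource - the name and the 
retention value are placeholders:

Pool {
  Name = AI-Incremental
  Pool Type = Backup
  AutoPrune = yes
  Volume Retention = 1 year
  # shrink the volume file to a bare label once it is purged
  Action On Purge = Truncate
}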


Hope that's clearer now.


2) The manual fiddling way - you get a list of volumes containing only
jobs from the given client with a clever SQL query (like an inner join
on jobmedia and job with a where on the particular client, and a group
by media id with a count to select only volumes containing this single
client's jobs), then you manually delete the volumes and manually
delete the files from storage.
It can be done but it's not something that I'd recommend since the
previous approach is IMO much safer.

You are right, it would be safer to let Bareos do its thing.
But I don't really want to wait for 1 year. I am not really fluent in 
Postgres, I mostly work with MySQL.


You didn't tell us whether you're using one job per volume setup
or any
other settings because that could make your task a little bit easier.

It's a pretty standard always incremental setup from the documentation 
with 4 pools: 1. Full, 2. AI-Incremental, 3. AI-Consolidated and 4. 
AI-LongTerm. The jobs can get mixed in the volumes.

It'll indeed be easier to let Bareos decide which volumes it can scrap then.


The database issue is another story. Depending on the database type
(MySQL? Postgres? I don't suppose you're running this installation on
SQLite) and even the database configuration (MySQL table engine and -
for example - the file-per-table setting in case of InnoDB) you might
need to do different things. Of course a dump/restore will give you a
"shrunk database" but that's a very radical approach. You might get
away with vacuuming the Postgres database, but keep in mind that for
the vacuuming process you temporarily need additional storage (my
database shrunk from 3.3GB to 2.6GB after vacuuming but needed over
5GB during vacuuming) and the process itself is time-consuming.


I use PostgreSQL; right now I have 64TB remaining on the storage and 
12TB on the Director, where the database lives.

Can I vacuum while the Bareos services are running or should I stop them?


https://www.postgresql.org/docs/9.1/sql-vacuum.html

"Normal" vacuuming only frees up space within existing tables for future 
reuse. It doesn't free up disk space but gives you more space within 
existing database installation so you can add data without growing the 
database on disk. And it can be run on fully operating database.


But VACUUM FULL, which rebuilds the database files and reclaims space, 
needs an exclusive lock on the tables, so it might interfere with the 
operation of clients (in our case - Bareos).


Best regards

MK

--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/36caf7ee-8825-0efc-afd8-a300c114030f%40gmail.com.


Re: [bareos-users] Client Run Job Before Script

2020-03-20 Thread Spadajspadaj
The bareos-fd runs as root by default, so it should have access, 
but there can be many different issues with the script itself.


One thing is SELinux - is it on? It might mess things up.

The second one is the PATH variable. It might not be what you think when 
the script is executed, so some of your scripted commands might not be found.


Those are two obvious things to check but there are many more. I'd run 
bareos-fd with a high debug level and see what gets reported, because for 
now we don't even know whether there is a problem with launching the 
script at all or just some problem within the script itself.
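
A quick sketch of that (the systemd unit name may differ on your distro):

# stop the service, then run the file daemon in the foreground
systemctl stop bareos-fd
bareos-fd -f -d 200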



Best regards,

MK

On 20.03.2020 18:37, aeronex...@gmail.com wrote:


Just noticed the file is in /root. Does Bareos have access to both the 
directory and the file?


On 3/20/20 1:14 PM, Goncalo Sousa wrote:

In the documentation those are 2 different things, but I will try.

On Friday, 20 March 2020 at 16:58:25 UTC, aeron...@gmail.com 
wrote:


suggest "Client Run Before Job = /root/pre_backup.sh" should be "
Run Before Job = /root/pre_backup.sh" i.e. remove the word client

On 3/20/20 11:22 AM, Goncalo Sousa wrote:

Hello,

I am having a issue with having a script run before the job.

In the bareos server I,
/etc/bareos/bareos-dir.d/job/tslxprdapp0001.conf has the
following configuration:

Job {
   Name = "tslxprdapp0001-job"
   Client = tslxprdapp0001-fd
   Type = Backup
   Level = Incremental
   FileSet = "tslxprdapp0001-fs"
   Schedule = "schedule"
   Storage = tslxprdapp0001-sd
   Messages = Standard
   Pool = pool-tslxprdapp0001
   Priority = 10
   Write Bootstrap = "/var/lib/bareos/%c.bsr"
   Client Run Before Job = /root/pre_backup.sh
}

The script is located on the client tslxprdapp0001.


[Screenshot: Job details, localhost-dir]



Why is it giving me this error?




--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/5fdee6e3-9805-2914-8ec5-1a466f913de1%40gmail.com.


Re: [bareos-users] Backup, here automatic deletion of the storage medium when the expiry time has expired

2020-09-03 Thread Spadajspadaj
Deleted, as such - no. You can use ActionOnPurge = Truncate to make 
Bareos shrink the media files to 0 bytes on purge, but the file will 
still be there.


You'd have to use some external script to delete media and delete 
related files from disk.
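
For example, a hypothetical cleanup sketch - the volume name and the 
storage path are placeholders:

#!/bin/sh
# remove the volume from the catalog, then its file from disk
VOLUME="Incremental-0014"
echo "delete volume=${VOLUME} yes" | bconsole
rm -f "/var/lib/bareos/storage/${VOLUME}"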


On 03.09.2020 20:17, stefan.harb...@gmail.com wrote:

Hello,

my backup is being saved on a USB hard drive. Can I somehow check on 
the hard drive whether the files are also deleted there once the 
retention time / expiry time of, for example, Incremental-0014 has 
expired?


Greetings from Stefan Harbich


--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/13df4c01-2c96-7203-f9a7-800096f025fe%40gmail.com.


Re: [bareos-users] Re: some mount messages not sent by e-mail

2020-09-23 Thread Spadajspadaj
You have the messages configuration but do you have it attached to the 
appropriate job resource?



On 23.09.2020 12:19, 'birgit.ducarroz' via bareos-users wrote:

Hi, no one can help me?

birgit.ducarroz wrote on Monday, 14 September 2020 at 11:32:42 UTC+2:

Hi,

I would like to receive an e-mail when a cartridge is full and I
need to
insert a new one, but I wonder why I do not get the following
messages
by mail:

"bareos-sd JobId 12: Please mount append Volume "users-0005" or
label a
new one for:
Job: users.2020-09-11_13.11.37_17
Storage: "LTO7-Tape-Drive"
(/dev/tape/by-id/scsi-350050763121460f3-nst)"


When restoring, I got the following e-mail, so this seems to work...
"10-Sep 15:33 bareos-sd JobId 9: Please mount read Volume
"WORM-0003" for:
Job: RestoreFiles.2020-09-10_15.33.47_55"


I re-checked the doc here, but I cannot figure out what is going
wrong.
https://docs.bareos.org/bareos-18.2/Configuration/Messages.html


This is my Standard Mail Config:

Messages {
Name = Standard
Description = "Reasonable message delivery -- send most everything to
email address and to the console."

operatorcommand = "/usr/bin/bsmtp -h root@localhost -f\"\(Bareos
lx85\) \<%r\>\" -s \"Bareos lx85: Intervention needed for %j\" %r"

mailcommand = "/usr/bin/bsmtp -h root@localhost -f \"\(Bareos lx85\)
\<%r\>\" -s \"Bareos lx85: %t %e of %c %l\" %r"

operator = root@localhost = mount #


mail = root@localhost = warning, error, fatal, mount, notsaved,
restored, security, alert, volmgmt
console = all, !skipped, !saved, !audit
append = "/var/log/bareos/bareos.log" = all, !skipped, !saved, !audit
catalog = all, !skipped, !saved, !audit
}

Can someone give me a hint please?

Thank you in advance,
Birgit



--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/8695befb-a390-fbed-ba11-ef545f1f88c7%40gmail.com.


Re: [bareos-users] Re: some mount messages not sent by e-mail

2020-09-24 Thread Spadajspadaj

Hmm...

At first glance it looks as if it should work.

I'd try setting a "wider" mail configuration (similar to the logfile 
one - "all, !skipped, !saved, !audit") and see if I get all the emails.


I assume you're perfectly positive there's no problem with emails 
themselves (like some overzealous spam filter or something like that), 
right?



On 24.09.2020 09:06, DUCARROZ Birgit wrote:

Hi, this is how my job resource is configured. Am I missing something?

JobDefs {
  Name = "users"
  Type = Backup
  Level = Full
  Client = lx85
  FileSet = "users" (#13)
  Schedule = "03-09"
  Storage = Tape
  Messages = Standard
  Pool = users
  Priority = 1
  Write Bootstrap = "/euronas/data/bareos-tape-srv-db-and-bsr/%c.bsr"
  Full Backup Pool = users
  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = yes
  Cancel Queued Duplicates = yes
  RunAfterJob = "/bin/mt -f /dev/tape/by-id/scsi-350050763121460f3-nst 
eject"

}


On 23/09/20 19:54, Spadajspadaj wrote:
You have the messages configuration but do you have it attached to 
the appropriate job resource?



On 23.09.2020 12:19, 'birgit.ducarroz' via bareos-users wrote:

Hi, no one can help me?

birgit.ducarroz wrote on Monday, 14 September 2020 at 11:32:42 
UTC+2:


    Hi,

    I would like to receive an e-mail when a cartridge is full and I
    need to
    insert a new one, but I wonder why I do not get the following
    messages
    by mail:

    "bareos-sd JobId 12: Please mount append Volume "users-0005" or
    label a
    new one for:
    Job: users.2020-09-11_13.11.37_17
    Storage: "LTO7-Tape-Drive"
    (/dev/tape/by-id/scsi-350050763121460f3-nst)"


    When restoring, I got the following e-mail, so this seems to 
work...

    "10-Sep 15:33 bareos-sd JobId 9: Please mount read Volume
    "WORM-0003" for:
    Job: RestoreFiles.2020-09-10_15.33.47_55"


    I re-checked the doc here, but I cannot figure out what is going
    wrong.
https://docs.bareos.org/bareos-18.2/Configuration/Messages.html


    This is my Standard Mail Config:

    Messages {
    Name = Standard
    Description = "Reasonable message delivery -- send most 
everything to

    email address and to the console."

    operatorcommand = "/usr/bin/bsmtp -h root@localhost -f\"\(Bareos
    lx85\) \<%r\>\" -s \"Bareos lx85: Intervention needed for %j\" %r"

    mailcommand = "/usr/bin/bsmtp -h root@localhost -f \"\(Bareos 
lx85\)

    \<%r\>\" -s \"Bareos lx85: %t %e of %c %l\" %r"

    operator = root@localhost = mount #


    mail = root@localhost = warning, error, fatal, mount, notsaved,
    restored, security, alert, volmgmt
    console = all, !skipped, !saved, !audit
    append = "/var/log/bareos/bareos.log" = all, !skipped, !saved, 
!audit

    catalog = all, !skipped, !saved, !audit
    }

    Can someone give me a hint please?

    Thank you in advance,
    Birgit




--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/e6144174-6894-0fa4-81fa-baf1e0b1e72e%40gmail.com.


Re: [bareos-users] How can I keep jobs/volumes beyond retention?

2020-08-01 Thread Spadajspadaj
Damn, my bad. I looked hastily into update and was pretty sure it worked 
for jobs the same way it does for volumes.


Apparently it does not. So you'd have to set the retention period on whole 
volumes (here I'm pretty sure you can do that; I did it myself ;->).
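
Something like this in bconsole (the volume name and the period are 
placeholders; check the docs for the exact duration syntax):

*update volume=Full-0001 volretention=10years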


Sorry for the confusion.

On 01.08.2020 19:31, Ariel Esteban Salvo wrote:

Thanks!

I'm looking into "update" to do it but I only seem to be able to 
change volretention of volumes.

Will that keep my job records as well?
What about file records?

On Saturday, August 1, 2020 at 6:10:53 AM UTC-3 spadaj...@gmail.com wrote:


On 31.07.2020 21:18, Ariel Esteban Salvo wrote:
> Hi!
>
> One of our clients was hit by a ransomware attack, Bareos did
its job
> and we were able to rebuild most of what was lost.
>
> I'd like to keep the jobs I used to restore for a while longer
(just
> in case)
> What are my options?
>
> I've seen migration and copy jobs in the docs but I've never
used them.
> Are there any other options?


You can use the update command to change job parameters.

If you update a job's retention period it won't get pruned earlier.


Best regards,

MK







--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/3d8edf38-9ee1-3e7e-ed9c-c059844f533a%40gmail.com.


Re: [bareos-users] How can I keep jobs/volumes beyond retention?

2020-08-01 Thread Spadajspadaj



On 31.07.2020 21:18, Ariel Esteban Salvo wrote:

Hi!

One of our clients was hit by a ransomware attack, Bareos did its job 
and we were able to rebuild most of what was lost.


I'd like to keep the jobs I used to restore for a while longer (just 
in case)

What are my options?

I've seen migration and copy jobs in the docs but I've never used them.
Are there any other options?



You can use the update command to change job parameters.

If you update a job's retention period it won't get pruned earlier.


Best regards,

MK

--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/4369b529-6940-67d4-f179-6f07f1ba7f40%40gmail.com.


Re: [bareos-users] Re: Backup strategy for large dataset

2020-08-06 Thread Spadajspadaj

Two things.

RAID (or any other replication) is not a backup solution!

Archiving is not backup.

On 06.08.2020 19:34, Oleg Volkov wrote:
This system does not look like it suits any usual backup system. Your 
FULL is terrible and restore will be awful.


Use the storage's abilities. Make a DR site and replicate the storage. Take 
daily snapshots.
If you have very dumb storage, you can use ZFS. It has space-efficient 
snapshots and the ability to replicate to another ZFS storage.



--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/40a94414-4990-33c5-ef48-9682656d380b%40gmail.com.


Re: [bareos-users] Backup strategy for large dataset

2020-08-07 Thread Spadajspadaj
As I wrote earlier, this looks more like an archiving plan, not a backup 
one (or a combination of backup and archiving). But more to the point - 
for backups you have to have a verification plan and periodic restore 
tests. For archiving you need a verification plan too (i.e. every half a 
year you read each archive unit and check whether it's readable and its 
checksum is correct; for more critical data you might even want some kind 
of functional tests, like trying to read the data into the appropriate 
software and checking whether the data is still parseable and loadable; 
if any copy fails you'd want to re-create it from the other existing 
copies).


On 07.08.2020 09:49, a.br...@mail.de wrote:
Thanks a lot, Brock, for your comprehensive post and also to the 
others. I haven't fully worked through your example cases yet, but it  
will certainly help me to get my head around it all. Maybe it helps if 
I provide a few more details about how the data/images are organized:


I run a Linux-based virtualization cluster on RAID6 hosts with Windows 
VMs.
The images are organised in Windows folders of 2TB each, like 
"E:\img\img01\" to currently "E:\img\img17\".
Once a folder is full, its contents will never change again. They're 
like archives that will be read from but never written to again.

So I thought I'd proceed like this:
1. Backup "img01" to "img17" to tape, store the tapes offsite.
2. Do this a second time and store the tapes offsite, separate from 
the first generation.

3. Do this a third time to disk, for quick access if needed.
4. Make sure the catalog of at least 1. and 2. is in a very safe place.
5. Develop a daily backup strategy - starting with "img18".

As for (1.) - (3.) I have created separate Full jobs for each 
imgXX folder. (1.) has already completed successfully, (2.) is 
currently in progress.
I thought that once (1.) and (2.) have completed successfully I'm safe 
as far as "img01-17" is concerned and never have to consider these 
folders for backup again. Right, or am I missing something?


What I'd like to discuss here is (5.) - taking a few parameters into 
consideration:
- the daily increment of image data is roughly 50 GB. BTW: the images 
(DICOM, JPEG2000) don't compress at all :).
- for legal reasons we have to store the images on WORM media. So I 
need a daily job that writes to tape.
- the doctors want the best possible protection against fire, supernova, 
Borg attack etc. They want a daily tape change routine with the latest 
WORM tape taken offsite.


For the daily tape change I could buy another LTO drive. I can also 
expand my backup-server to fit above (3.) and the daily increment.


So, here's what I thought I need to develop:
- Backup the daily increment to disk.
- Backup the same daily increment to a WORM tape (in a new 1-slot 
drive) that is part of a "daily change" pool of tapes (MON-SAT or so...)
- Append the same daily increment to another WORM tape in the 8-slot 
loader. Once the tape is full, take it offsite and add a fresh tape to 
the loader.

If that strategy doesn't sound too weird I need to transfer this into a 
working Bareos config.
Sorry if it all sounds confusing but for me it's still really, really 
complex.


Thanks
Andreas

bro...@mlds-networks.com wrote on Wednesday, 5 August 2020 at 
20:21:10 UTC+2:


You will have some complexity with the size of your data and the
size of your loader. Unless your data compresses really well.
Does it have more than one tape drive? Your total loader capacity
is 48 TBytes raw, and you need 2x your full size to do
Consolidations or new Fulls or you have gaps in your protection.

If I’m reading this right you want an off site copy.

If that’s correct I would go about this two different ways,

* Get a much bigger loader with 2 drives
or
* Expand backups server raid6 to have Full + growth*Growth Window
capacity

I would then use migrate+Archive jobs to make my off site and copy
to tape.

In the first case you can avoid the extra migrate, just do an
archive to a pool of tapes you eject.

Workflow case 1 Bigger Tape loader 2 or more tape drives.
* Spool to Disk
* AI-Consolidated Pool Tape
* AI-Incremental Pool Disk
* Offsite Pool Tape

Fulls and Consolidations backups go to AI-Consolidated Tape pool,
Your daily go to disk until they are consolidated into tape.

To create your off sites you can use a copy job of whatever full
you want.

https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html#copy-jobs


Personally, for offsite - to avoid issues with Always Incremental
jobs - I use an Archive job

https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html#virtual-full-jobs


This avoids the offsite tapes being upgraded to the primary copy
when the Consolidate job prunes the older jobs.

To do a VirtualFull archive job like this though you need enough

Re: [bareos-users] vchanger keeps mounting wrong volume

2020-06-30 Thread Spadajspadaj



On 30.06.2020 07:24, Erich Eckner wrote:
Device "vchanger-1-0" (/var/spool/vchanger/vchanger-1/0) is waiting 
for sysop intervention:

    Volume:  vchanger-1_0005_0153
    Pool:    Incremental
    Media type:  Offsite-File
    Device is BLOCKED waiting for mount of volume "vchanger-1_0005_0292",
   Pool:    Incremental
   Media type:  Offsite-File
    Slot 753 is loaded in drive 0.
    Total Bytes Read=213 Blocks Read=1 Bytes/block=213
    Positioned at File=0 Block=0

There is nothing reading from this device, there are only jobs waiting 
for 1_0005_0292 to be mounted. If I tell vchanger (via bconsole) to 
mount any volume, it only mounts 1_0005_0153 again, which is also 
evident in the fs itself:


/var/spool/vchanger/vchanger-1/0 -> /mnt/bareos/mnt6/vchanger-1_0005_0153

If I change that symlink and the info in 
/var/spool/vchanger/vchanger-1/drive_state-0 manually to point to the 
right volume, vchanger reports the right volume, but bareos does not 
notice the change. So here are my questions:


I'd try running the Director and SD with a higher debug level and see 
what's happening.
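
You can raise the debug level on the fly from bconsole, e.g. (the level 
is arbitrary and the storage name is an assumption, not from your config):

*setdebug level=200 dir
*setdebug level=200 storage=vchanger-1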



If not, how can I refresh the info in bconsole without re-mounting 
(this will only mount the wrong volume again)?



update slots doesn't help?

--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/75d801a9-8291-4054-c6c8-990277334936%40gmail.com.


Re: [bareos-users] autodelete pool

2020-06-15 Thread Spadajspadaj

Hi Birgit.

To be honest, I fail to see the point of deleting a pool 
from a script. If you need to delete a pool, you do it once, 
interactively, and everything's good.


Of course you can do a delete pool from bconsole, but if you don't delete 
it from the configuration it'll get recreated as soon as you reload the 
configuration or restart the Director. And the ID of the pool will change 
(even if you're able to delete a pool that has media assigned to it, 
which I'm not sure you can do, those media would not get re-associated to 
a new pool named the same as the old one).


But the bottom line is - why would you even want to do such a thing? 
Especially considering that if you want to be able to batch-delete pools 
you'd have to have a way to batch-create pools as well (otherwise what 
would you be deleting?).


Can you please elaborate a bit more on what you're trying to achieve?

On 14.06.2020 22:35, 'DUCARROZ Birgit' via bareos-users wrote:

Hi all,

How can I autodelete a pool (e.g. in a script)?

echo delete pool=Scratch yes | bconsole

--> does unfortunately not work

I can rm /etc/bareos/bareos-dir.d/pool/Scratch.conf, but doing so will 
keep the pool listed in my storages list (see print-screen).


Thank you for any help.
Kind regards,
Birgit



--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/1e2165db-9584-933c-941f-6f96f3418286%40gmail.com.


Re: [bareos-users] Bareos storage error

2020-06-17 Thread Spadajspadaj



Can you give me a little guide on how to do it? I am a novice at
this; in what format do I have to mount the disk and what privileges
do I have to give it?



I have no clue what your configuration looks like but I suppose
you've already mounted the disk (at least that's what the
screenshot shows - /dev/sdb is mounted on /var/lib/bareos/storage).

So you have to check which user bareos-sd is running as (I
suppose it's the user bareos, but it's always good to check). Just run
"ps u -C bareos-sd" and see the first column of the output.

Then you have to chown /var/lib/bareos/storage. Supposing it's
the user "bareos", you have to do "chown bareos
/var/lib/bareos/storage".

But I strongly advise you to read a bit about Unix permissions.



I did what you told me, the backup is successful but with those messages.



I assume that you had a local installation in which you were writing to 
/var/lib/bareos/storage; it worked for some time (you did at least one 
backup) and now you have mounted another drive onto that directory.

When you mount a drive onto a directory, the drive is seen by the 
operating system as this directory. The previous contents of the 
directory are no longer accessible until you unmount that drive.


So in your case /var/lib/bareos/storage no longer shows a local 
directory (which contained a previously used Bareos media file) but 
points to another filesystem created on /dev/sdb. So as long as you have 
the disk mounted under /var/lib/bareos/storage, you don't have access to 
the /var/lib/bareos/storage/Full-0001 file. You also have no way to - 
for example - delete it in case you want to free some space on your 
filesystem. But the media file is still referenced by the Director's 
database, so you might run into trouble later, for example when trying 
to restore from the job contained in this file.


The question is what you want to achieve. If you just want to have a 
single big Bareos backup drive, I'd suggest you stop all Bareos 
processes, unmount the new disk, move the contents of 
/var/lib/bareos/storage to another directory, mount the new disk and 
then move the files back to /var/lib/bareos/storage (this time they'll 
end up on the new disk).
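
A rough sketch of that shuffle - /dev/sdb is from your screenshot, the 
stash path is a placeholder:

systemctl stop bareos-sd
mkdir -p /var/tmp/bareos-stash
umount /var/lib/bareos/storage
mv /var/lib/bareos/storage/* /var/tmp/bareos-stash/
mount /dev/sdb /var/lib/bareos/storage
mv /var/tmp/bareos-stash/* /var/lib/bareos/storage/
chown -R bareos: /var/lib/bareos/storage
systemctl start bareos-sd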


But if you want to have removable disks which you can swap (for example 
to have an off-line backup stored somewhere else), that's a much more 
tricky solution involving the vchanger script, and I wouldn't advise you 
to try to set it up unless you have a good understanding of Bareos and 
of how your OS works.


--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/153ce183-761f-21c2-c25d-b8fbd6ff54cf%40gmail.com.


Re: [bareos-users] Long Path UNC

2020-06-24 Thread Spadajspadaj
Apart from all the possible other prerequisites, you have to remember 
that the paths have to be accessible from the context of the user under 
which bareos-fd is running. So if you - for example - mount the NAS 
share as a G: drive as your normal user, it won't be visible to the 
bareos-fd process, since that runs in another user session. If you want 
to use UNC paths you might have to do some Windows voodoo with 
credentials.



On 24.06.2020 10:17, Sylvain Donnet wrote:

Hi,

I need to back up network shares on a Windows guest, and the shares are 
coming from a branded NAS. There is no capability to install a Bareos 
client on this NAS.


So, I discovered an old post on this forum saying that UNC in long 
path format (\\?\UNC\server\share) works for Bareos.


My tests didn't work. But I discovered that at least one prerequisite 
is required: a registry key 
(HKLM\SYSTEM\CurrentControlSet\Control\FileSystem, LongPathsEnabled) 
has to be declared.

So, my questions are:
- are there other prerequisites?
- is the Bareos Windows client compatible with such a call? (or only 
above a specific version?)
- how do I have to write the File instruction in the FileSet (native 
\\?\UNC\..., ?\\UNC\\, //?/UNC/, ...)?

Any help will be greatly appreciated... I am going to go completely 
mad over what is, for me, a simple old problem...

Sylvain


--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/9cede40f-1883-1c0b-d3c7-28f928da0ce2%40gmail.com.


Re: [bareos-users] Bareos storage error

2020-06-16 Thread Spadajspadaj
As you can see, the Storage Daemon can't create files on the disk. Since 
the device is formatted with an ext4 filesystem, you need to set 
appropriate ownership and access rights on the storage directory _after 
you've mounted it_.



On 16.06.2020 12:59, lucas wrote:

Hi,

I'm trying to change the storage of my Bareos backups. I have mounted 
a second disk on /var/lib/bareos/storage, but when trying to make a 
backup I get this error; can someone help and explain what happens?


Thanks a lot


--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/12c66c17-4257-7367-5423-2723e8138e98%40gmail.com.


Re: [bareos-users] Bareos storage error

2020-06-16 Thread Spadajspadaj


On 16.06.2020 13:31, lucas wrote:


Can you give me a little guide on how to do it? I am a novice at this; 
in what format do I have to mount the disk and what privileges do I 
have to give it?



I have no clue what your configuration looks like but I suppose you've 
already mounted the disk (at least that's what the screenshot shows - 
/dev/sdb is mounted on /var/lib/bareos/storage).


So you have to check which user bareos-sd is running as (I suppose 
it's the user bareos, but it's always good to check). Just run "ps u -C 
bareos-sd" and see the first column of the output.


Then you have to chown /var/lib/bareos/storage. Supposing it's the 
user "bareos", you have to do "chown bareos /var/lib/bareos/storage".


But I strongly advise you to read a bit about Unix permissions.

--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/d981131c-7340-a1bf-4e8b-4a6434b72ee3%40gmail.com.


Re: [bareos-users] autodelete pool

2020-06-16 Thread Spadajspadaj
There's no point in creating those pools in the first place. Just define 
a single pool in the config files and the daemon will create only this 
defined pool.


If there's no Scratch pool defined, it will not be created.

https://docs.bareos.org/Configuration/Director.html#pool-resource

https://docs.bareos.org/Configuration/Director.html#scratch-pool

Notice the "if it exists" in description of Scratch Pool

Long story short then - don't create pools so you don't have to delete 
them. Just set up the server with only the pool(s) you need defined and 
you should be OK.


BTW, consider using some automation solution (for example, Ansible) for 
installing a server from scratch instead of your own script.



On 16.06.2020 12:00, DUCARROZ Birgit wrote:

Hi Spadajspadaj,

First of all, thank you for your response.

I created a script which completely installs my server. The script is 
meant to ease an eventual next installation on a new server 
(migration) and at the same time it is meant to be my documentation.


The script installs a Bareos server which will serve as an archiving 
server writing to single tapes.


Since I will not need incremental, differential or scratch pools (I 
will only archive data using full backups), I would like to delete 
these pools in this initial installation script.


See also my post and answer from Philipp Storz: 
https://groups.google.com/forum/#!searchin/bareos-users/Birgit$20Ducarroz%7Csort:date/bareos-users/g53BNdTat2s/YhlHeARHAgAJ


It would have been a nice feature to be able to automate the deletion 
of these pools directly at installation time.


So I wonder how I can manage this in my script.

Kind regards,
Birgit

On 15/06/20 21:32, Spadajspadaj wrote:

Hi Birgit.

To be honest, I fail to see the point of deleting a pool from a 
script. If you need to delete a pool, you do it once, interactively, 
and everything's good.


Of course you can do a delete pool from bconsole, but if you don't 
delete it from the configuration it'll get recreated as soon as you 
reload the configuration or restart the Director. And the ID of the 
pool will change (even if you're able to delete a pool that has media 
assigned to it, which I'm not sure you can do, those media would not 
get re-associated to a new pool named the same as the old one).


But the bottom line is - why would you even want to do such a thing? 
Especially considering that if you want to be able to batch-delete 
pools you'd have to have a way to batch-create pools as well 
(otherwise what would you be deleting?).


Can you please elaborate a bit more on what you're trying to achieve?

On 14.06.2020 22:35, 'DUCARROZ Birgit' via bareos-users wrote:

Hi all,

How can I autodelete a pool (e.g. in a script)?

echo delete pool=Scratch yes | bconsole

--> does unfortunately not work

I can rm /etc/bareos/bareos-dir.d/pool/Scratch.conf, but doing so 
will keep the pool listed in my storages list (see print-screen).


Thank you for any help.
Kind regards,
Birgit





--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/f2753ceb-31b1-26a4-86ac-551e1d779fb0%40gmail.com.


Re: [bareos-users] Re: Relabel Tapes

2020-06-09 Thread Spadajspadaj

On 09.06.2020 18:16, Jörg Steffens wrote:

On 09.06.20 at 14:58 wrote 'birgit.ducarroz' via bareos-users:

Did you try
bconsole
* relabel

?


Relabel will only work on empty/purged volumes. By relabeling a tape,
data will be lost.
AFAIK it is only possible to append data to a physical tape, not
to rewrite parts of it.



I can't see the original mail so I'm posting here (I found the original 
post via the Google Groups web interface).

I don't quite grasp your situation.

I assume you had 40 tapes for which you did "label barcodes" and 
everything worked great. Then you moved... Well, what did you move? Did 
you move the tapes to another library, or move the library to another 
Bareos instance? I don't quite follow what happened. Did you attach new 
barcode labels to old tapes? Or did the new library interpret the old 
barcodes differently? I'm a bit lost here :-) What are the labels 
recorded on the tapes themselves, what are the contents of the barcode 
labels, and what is in the media entries in the Director's catalog?


--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/93231f31-18d1-7438-d9f3-4eb708c72fc9%40gmail.com.


Re: [bareos-users] Howto Correctly Configure Full Pool For Catalog Backup For an Archiving System?

2020-06-05 Thread Spadajspadaj
If you don't specify retention periods, they will be set to the default 
values, so that's not a proper solution. I'd rather go and set them to 
some insanely huge value.


But of course it will result in an ever-growing storage demand for the 
catalog database, since no jobs/files/volumes will be getting purged 
from the catalog.
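
For instance, a sketch of such a pool - the name and the exact values 
are placeholders:

Pool {
  Name = WORM-Archive
  Pool Type = Backup
  AutoPrune = yes
  # effectively "never expire"
  Volume Retention = 100 years
  Job Retention = 100 years
  File Retention = 100 years
}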



On 04.06.2020 20:35, 'DUCARROZ Birgit' via bareos-users wrote:

Hi list,

I have some WORM tapes which I will use for eternal archive.
My Catalog will be backuped on an external share.

Actually I'm not sure how to configure my Full pool for the Catalog backup.


If I AutoPrune and also configure Volume/Job/File Retention, my Full 
pool will be overwritten in 180 days (so, e.g., also the catalog info 
for the WORM data). Will I then lose the catalog information needed to 
restore data that has been written to a WORM tape, if this WORM must be 
used in some years?


Pool {
  Name = Full
  ...
  AutoPrune = yes
  Volume Retention = 180 days
  Job  Retention = 179 days
  File Retention = 178 days
}

For archiving, must I set AutoPrune = no and set no 
Volume/Job/File Retention at all?

But in this case the external share will grow endlessly.

How do you configure Pool information for archives?


Thank you for some hints!
Kind regards,
Birgit



--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/49ddc465-ecfd-da36-d06c-f374e6488836%40gmail.com.


Re: [bareos-users] Re: Bareos high availability

2020-06-05 Thread Spadajspadaj
I would be, however, cautious about possible scenarios where a node 
breaks and fails over to the other server - for example - in the middle 
of a backup job. Such scenarios would need some testing so you know what 
to expect and how to handle such a situation.


On 05.06.2020 09:00, Oleg Volkov wrote:

I do not see any problem. It is just a service.
Make Postgres HA, put /etc/bareos and /var/lib/bareos on a shared 
disk, create a VIP and colocate it with the Bareos services.


Never tried this with Bareos, but I have built a lot of clusters - there 
should be no problem.

Just follow any active-standby scenario.

On Tuesday, June 20, 2017 at 8:47:27 AM UTC+3, Chanaka Madushan wrote:

Hi,

I got a requirement for install bareos as a high available cluster
on CentOS with PostgreSQL database. But in this case I have
limited to two servers + a shared storage (may be a SAN or a NAS).
So I have to install all bareos-fd, bareos-sd and bareos-dir on
these two servers as high available services.

I hope to make PostgreSQL high availability with transaction log
shipping.

But I donot have an idea to make bareos services high available.

Is there anyone who has deployed a high available cluster for bareos?



--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/aafbcb37-af38-c378-70d2-298d3d47f9f2%40gmail.com.


Re: [bareos-users] Re: Bareos high availability

2020-06-05 Thread Spadajspadaj
Probably. That's why I'd probably prefer some duplicated, but not 
necessarily HA, setup. I'm always a bit suspicious of HA-clustering 
non-HA-capable solutions.


But YMMV

On 05.06.2020 14:47, Oleg Volkov wrote:

Jobs will obviously be aborted and fail.
Then you have to take care of them manually, as usual for failed jobs.

K.O.


On Friday, June 5, 2020 at 1:32:40 PM UTC+3, Spadajspadaj wrote:

I would be, however, cautious about possible scenarios where a
node breaks and fails over to the other server - for example - in
the middle of a backup job. Such scenarios would need some testing
so you know what to expect and how to handle such a situation.

On 05.06.2020 09:00, Oleg Volkov wrote:

I do not see any problem. It is just a service.
Make Postgres HA, put /etc/bareos and /var/lib/bareos on a
shared disk, create a VIP and colocate it with the Bareos services.

Never tried this with Bareos, but I have built a lot of clusters - there
should be no problem.
Just follow any active-standby scenario.

On Tuesday, June 20, 2017 at 8:47:27 AM UTC+3, Chanaka Madushan
wrote:

Hi,

I got a requirement for install bareos as a high available
cluster on CentOS with PostgreSQL database. But in this case
I have limited to two servers + a shared storage (may be a
SAN or a NAS). So I have to install all bareos-fd, bareos-sd
and bareos-dir on these two servers as high available services.

I hope to make PostgreSQL high availability with transaction
log shipping.

But I donot have an idea to make bareos services high available.

Is there anyone who has deployed a high available cluster for
bareos?



--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/0df69943-bbca-abce-0c9e-c42968492ead%40gmail.com.


Re: [bareos-users] mysqldump 128 GB limit ?

2020-06-11 Thread Spadajspadaj

On 11.06.2020 14:29, Kai Zimmer wrote:

Hi,

in former times I used Bareos with a MySQL database backend. However, 
it became too slow and I switched to a secondary Postgres catalogue. I 
need to keep the MySQL database as a history though.

Now I'm switching from Ubuntu 16.04 (MySQL 5.7) to Ubuntu 20.04 (MySQL 
8.0) and I'm unable to start the mysqld server because of incompatible 
data structures. I tried dumping the database on another Ubuntu 16.04 
machine, but the SQL dump file is only about 128 GB in size, although 
the binary index files are > 200 GB in size.


The size of the database files is not directly related to the dump size. 
Firstly, the database files can contain space from which data has 
already been deleted but which has not been reused yet. Secondly, 
remember that database files contain not only raw data (which is what 
gets dumped into the dump file) but also index structures. The more 
indices you have created in the database, the more extra space is used. 
So I wouldn't be surprised if the dump was indeed performed properly.


If you have doubts, however, I'd advise you to try to restore the 
database onto another server and check whether select count(*) from each 
table gives the same result as on your source server.
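
For example (table names from the standard Bareos catalog schema; run 
the same statements on both servers and compare):

SELECT COUNT(*) FROM Job;
SELECT COUNT(*) FROM File;
SELECT COUNT(*) FROM Media;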


--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/cf6f4e59-617a-6533-0852-414bc8e1ffba%40gmail.com.


[bareos-users] Way to prevent from cumulating jobs

2020-06-09 Thread Spadajspadaj

Hi there.

I'm wondering whether there is a reasonable way to prevent Bareos from 
scheduling jobs from the same client in quick succession.


Here's what I mean. We have a Bareos setup with a single tape drive. The 
jobs are scheduled daily with an Inc/Diff/Full schedule. If we fail to 
change the tape as requested (sometimes there's no one on site for a few 
days), the jobs get queued, and when we finally do put a new tape in, 
all queued jobs get executed. The optimal solution would be to execute 
just one job for each client for the period in which we had no tape 
available (of course with the highest job level in case different levels 
were queued over those days). But I don't see a nice way to do that.


I wouldn't want the jobs to have a short timeout and fail quickly (I 
suppose that would work - eventually a job at a lower level would get 
promoted if there was no appropriate higher-level job available, right?) 
but that would mess up the job status with many failed jobs. But if it's 
the only way...


--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/63fb2302-c4e0-e5fb-a84d-e72d1a16efdb%40gmail.com.


Re: [bareos-users] Way to prevent from cumulating jobs

2020-06-09 Thread Spadajspadaj



On 09.06.2020 11:30, Andrei Brezan wrote:

On 09/06/2020 11:24, Spadajspadaj wrote:

Hi there.

I'm wondering whether there is a reasonable way to prevent Bareos 
from scheduling jobs from the same client in quick succession.


Here's what I mean. We have a Bareos setup with a single tape drive. 
The jobs are scheduled daily with an Inc/Diff/Full schedule. If we 
fail to change the tape as requested (sometimes there's no one on site 
for a few days), the jobs get queued, and when we finally do put a new 
tape in, all queued jobs get executed. The optimal solution would be 
to execute just one job for each client for the period in which we had 
no tape available (of course with the highest job level in case 
different levels were queued over those days). But I don't see a nice 
way to do that.


I wouldn't want the jobs to have a short timeout and fail quickly (I 
suppose that would work - eventually a job at a lower level would get 
promoted if there was no appropriate higher-level job available, 
right?) but that would mess up the job status with many failed jobs. 
But if it's the only way...




You can use the Cancel options from 
https://docs.bareos.org/Configuration/Director.html#job-resource. 
Something like:

  Allow Duplicate Jobs = false
  Cancel Lower Level Duplicates = true
  Cancel Queued Duplicates = true



That's exactly what I needed.

Thank you!

--
You received this message because you are subscribed to the Google Groups 
"bareos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to bareos-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/bareos-users/bb772b07-8b20-75c5-9c68-c28f6a9d13bc%40gmail.com.


Re: [bareos-users] Schedule question

2021-01-07 Thread Spadajspadaj
I believe you'd have to have two different jobs. You'd have to create a 
disk-based storage, first run a backup job there, and then have a 
migrate job to a tape pool.

I'm thinking of similar setup myself since I have sometimes problems 
with getting to the server to change tapes so I would like to have 
disk-based jobs and copy them whenever I get the chance but I have to 
dig a bit more into all those retention periods because there are few 
things still not obvious for me.
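
Roughly like this (a minimal, untested sketch; pool, storage and job
names are made up here, and the exact set of required directives can
differ between versions - see the Migration and Copy chapter of the
docs):

Pool {
  Name = DiskPool
  Pool Type = Backup
  Storage = File
  Next Pool = TapePool        # target pool for the migrated data
}

Job {
  Name = "migrate-disk-to-tape"
  Type = Migrate
  Pool = DiskPool             # source pool; Next Pool picks the target
  Selection Type = PoolTime   # or Job, Volume, ... - whatever fits
  Messages = Standard
}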


On 07/01/2021 11:03, 'Frank Cherry' via bareos-users wrote:


Hi there,
this is my schedule set:

Schedule {
  Name = "CR-WeeklyCycle"
  Run = Level=Full 1st sun at 7:00
  Run = Level=Differential 2nd-5th sun at 7:00
  Run = Level=Incremental mon-sat at 7:00
}

The backup is stored on an LTO tape, changed manually - no autoloader.

Looking at the schedule, a tape from the incremental pool is inserted on
Saturday morning.
When a differential or full backup is then started, it first checks
whether the right tape is inserted.
So I have to make sure by hand that the right tape is available before
the backup starts.


All backup jobs (inc, diff and full) have spooling active.
Is there a way for the backup to start and spool even if the wrong tape
is inserted, and to despool after changing to the right tape, or must I
split each backup into two jobs: copy to HDD and then to tape?


Thanks for any useful hints, Frank




Re: [bareos-users] Schedule question

2021-01-07 Thread Spadajspadaj
Well, not everyone has long enough tapes to always do full backups ;-)
After all, the whole concept of Inc and Diff backups didn't come out of
nowhere.


On 07/01/2021 13:10, 'DUCARROZ Birgit' via bareos-users wrote:

Hi,

Another possibility is not to spool and not to back up incremental or
differential jobs at all.


For restore speed and for your tape health it is better to always do
full backups, especially if you have no autochanger.


I deleted all jobs which don't do a full backup.

I set up the configuration so that once a month, for each job, it asks
me to change the tape. This way I get a full backup of each job twice a
year while handling tapes only once per month. For me this is an easy
way to handle the tapes.


The job tells me to insert cartridge 1, and as soon as it is inserted,
the job continues automatically. One month later it asks me for
cartridge 2, and so on.


Let me know if you are interested in how to configure such a setup.


Regards,
Birgit
On 07/01/21 11:03, 'Frank Cherry' via bareos-users wrote:


Hi there,
this is my schedule set:

Schedule {
   Name = "CR-WeeklyCycle"
   Run = Level=Full 1st sun at 7:00
   Run = Level=Differential 2nd-5th sun at 7:00
   Run = Level=Incremental mon-sat at 7:00
}

The backup is stored on an LTO tape, changed manually - no autoloader.

Looking at the schedule, a tape from the incremental pool is inserted on
Saturday morning.
When a differential or full backup is then started, it first checks
whether the right tape is inserted.
So I have to make sure by hand that the right tape is available before
the backup starts.


All backup jobs (inc, diff and full) have spooling active.
Is there a way for the backup to start and spool even if the wrong tape
is inserted, and to despool after changing to the right tape, or must I
split each backup into two jobs: copy to HDD and then to tape?


Thanks for any useful hints, Frank








Re: [bareos-users] Schedule question

2021-01-07 Thread Spadajspadaj
I'm not sure there's any way other than running the FD in debug mode
(at a sufficiently high debug level). At least I don't know of any.
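
You don't even have to restart the daemon for that; from bconsole you
can raise the debug level at runtime, e.g. (the client name is just an
example):

*setdebug level=200 trace=1 client=myclient-fd

With trace=1 the output goes to a trace file in the daemon's working
directory instead of the console.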


On 07/01/2021 14:40, 'Frank Kirschner | Celebrate Records GmbH' via 
bareos-users wrote:


Is there a way to monitor the FD while it collects the files, to see
how long it takes to handle each directory in a fileset?


On 07.01.2021 at 14:08, Spadajspadaj wrote:

Of course. It's all a matter of personal preference and personal needs.

There is one caveat though about full jobs and backup speed. It's all
OK if you're backing up just files and have no problem reading them
with the filedaemon. If you have some uncommon scenarios (like backing
up shares via CIFS from devices you can't install an FD on, or using
plugins to generate data for the FD - in my case it might be a script
reading a package list, or a database dump), you might face SD
starvation leading to shoeshine. I suppose you could even hit underruns
with big flat directories (but I'm not sure here - maybe the files
would just be read sequentially and not suffer that much from the long
directory listing).


It's always good to do an analysis of what you need and what you have 
:-)


On 07/01/2021 13:30, DUCARROZ Birgit wrote:
Yup, this is a personal decision, that's right, and every sysadmin
should check their own needs and possibilities.


We have 6 TB (15 TB compressed) LTO-7 cartridges and about 20 TB of
data.


I decided to do it this way because I read in the original Bacula book
(by Philippe Storz) how the backup handles tapes, and because even
Philippe Storz himself advised me that this is a good way to back up to
single tapes.


See the following thread:

https://groups.google.com/g/bareos-users/c/g53BNdTat2s

Regards,
Birgit

On 07/01/21 13:12, Spadajspadaj wrote:
Well, not everyone has long enough tapes to always do full backups
;-) After all, the whole concept of Inc and Diff backups didn't come
out of nowhere.


On 07/01/2021 13:10, 'DUCARROZ Birgit' via bareos-users wrote:

Hi,

Another possibility is not to spool and not to back up incremental
or differential jobs at all.


For restore speed and for your tape health it is better to
always do full backups, especially if you have no autochanger.


I deleted all jobs which don't do a full backup.

I set up the configuration so that once a month, for each job, it
asks me to change the tape. This way I get a full backup of each job
twice a year while handling tapes only once per month. For me this
is an easy way to handle the tapes.


The job tells me to insert cartridge 1, and as soon as it is inserted,
the job continues automatically. One month later it asks me for
cartridge 2, and so on.


Let me know if you are interested in how to configure such a setup.


Regards,
Birgit
On 07/01/21 11:03, 'Frank Cherry' via bareos-users wrote:


Hi there,
this is my schedule set:

Schedule {
   Name = "CR-WeeklyCycle"
   Run = Level=Full 1st sun at 7:00
   Run = Level=Differential 2nd-5th sun at 7:00
   Run = Level=Incremental mon-sat at 7:00
}

The backup is stored on an LTO tape, changed manually - no
autoloader.


Looking at the schedule, a tape from the incremental pool is inserted
on Saturday morning.
When a differential or full backup is then started, it first checks
whether the right tape is inserted.
So I have to make sure by hand that the right tape is available before
the backup starts.


All backup jobs (inc, diff and full) have spooling active.
Is there a way for the backup to start and spool even if the wrong
tape is inserted, and to despool after changing to the right tape, or
must I split each backup into two jobs: copy to HDD and then to tape?


Thanks for any useful hints, Frank












Re: [bareos-users] Schedule question

2021-01-07 Thread Spadajspadaj

Of course. It's all a matter of personal preference and personal needs.

There is one caveat though about full jobs and backup speed. It's all
OK if you're backing up just files and have no problem reading them
with the filedaemon. If you have some uncommon scenarios (like backing
up shares via CIFS from devices you can't install an FD on, or using
plugins to generate data for the FD - in my case it might be a script
reading a package list, or a database dump), you might face SD
starvation leading to shoeshine. I suppose you could even hit underruns
with big flat directories (but I'm not sure here - maybe the files
would just be read sequentially and not suffer that much from the long
directory listing).


It's always good to do an analysis of what you need and what you have :-)

On 07/01/2021 13:30, DUCARROZ Birgit wrote:
Yup, this is a personal decision, that's right, and every sysadmin
should check their own needs and possibilities.


We have 6 TB (15 TB compressed) LTO-7 cartridges and about 20 TB of data.

I decided to do it this way because I read in the original Bacula book
(by Philippe Storz) how the backup handles tapes, and because even
Philippe Storz himself advised me that this is a good way to back up to
single tapes.


See the following thread:

https://groups.google.com/g/bareos-users/c/g53BNdTat2s

Regards,
Birgit

On 07/01/21 13:12, Spadajspadaj wrote:
Well, not everyone has long enough tapes to always do full backups
;-) After all, the whole concept of Inc and Diff backups didn't come
out of nowhere.


On 07/01/2021 13:10, 'DUCARROZ Birgit' via bareos-users wrote:

Hi,

Another possibility is not to spool and not to back up incremental
or differential jobs at all.


For restore speed and for your tape health it is better to always
do full backups, especially if you have no autochanger.


I deleted all jobs which don't do a full backup.

I set up the configuration so that once a month, for each job, it
asks me to change the tape. This way I get a full backup of each job
twice a year while handling tapes only once per month. For me this
is an easy way to handle the tapes.


The job tells me to insert cartridge 1, and as soon as it is inserted,
the job continues automatically. One month later it asks me for
cartridge 2, and so on.


Let me know if you are interested in how to configure such a setup.


Regards,
Birgit
On 07/01/21 11:03, 'Frank Cherry' via bareos-users wrote:


Hi there,
this is my schedule set:

Schedule {
   Name = "CR-WeeklyCycle"
   Run = Level=Full 1st sun at 7:00
   Run = Level=Differential 2nd-5th sun at 7:00
   Run = Level=Incremental mon-sat at 7:00
}

The backup is stored on an LTO tape, changed manually - no autoloader.

Looking at the schedule, a tape from the incremental pool is inserted
on Saturday morning.
When a differential or full backup is then started, it first checks
whether the right tape is inserted.
So I have to make sure by hand that the right tape is available before
the backup starts.


All backup jobs (inc, diff and full) have spooling active.
Is there a way for the backup to start and spool even if the wrong
tape is inserted, and to despool after changing to the right tape, or
must I split each backup into two jobs: copy to HDD and then to tape?


Thanks for any useful hints, Frank










Re: [bareos-users] Re: Backup encryption help

2020-12-21 Thread Spadajspadaj
bareos-fd.conf is the configuration file for the bareos-filedaemon. The
Bareos filedaemon is the program running on the client which you are
backing up.


As per the documentation (which you already found), all data is
encrypted on the client prior to being sent to the server (to the
Storage Daemon, to be precise).


But please, read the documentation again (and again if need be) so you
understand how it works, and so you don't accidentally lose your keys
(and hence any possibility of decrypting the backed-up data!).
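
To give you an idea, the PKI directives go into the FileDaemon resource
in bareos-fd.conf on the client, along these lines (the names and paths
here are just examples):

FileDaemon {
  Name = client1-fd
  # ... your existing settings ...
  PKI Signatures = Yes
  PKI Encryption = Yes
  PKI Keypair = "/etc/bareos/client1-fd.pem"   # this client's key + cert
  PKI Master Key = "/etc/bareos/master.cert"   # public master key only
}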



Best regards,

MK

On 21/12/2020 14:14, Gonçalo Sousa wrote:

Can someone help me, please?

On Monday, December 7, 2020 at 4:04:51 PM UTC Gonçalo Sousa wrote:


I am trying to implement data encryption on bareOS following this
documentation:
https://docs.bareos.org/TasksAndConcepts/DataEncryption.html


I have already created/generated the .cert, .pem and .key files on
the BareOS server.

My question is: where do I configure them? The example only
mentions bareos-fd.conf.
Is this file located in /etc/bareos/bareos-dir.d/client/ ?

All the keys, pem and cert files must be located on the BareOS
server, right?
Is all the configuration made only on the BareOS server?





Re: [bareos-users] Backup encryption help

2020-12-21 Thread Spadajspadaj

There are many different situations and various needs.

Especially if you have a need for off-site backups, and even more so if
you're processing any kind of sensitive data, you have to encrypt (and
might be obliged by law to do so; enter GDPR or HIPAA, for example).


Encrypting storage units (tapes/LVM volumes and so on) is a bit
different and addresses different needs than client-side encryption.


As you pointed out, bareos-fd encryption lets you encrypt all data and
makes the backup possible without the backing-up party being able to
access the raw data (there's always the issue of metadata, which is not
encrypted, but that's a different subject).


There is also another angle to this - with media encryption you have
just that: media encryption. Anyone compromising the cryptographic
material used to encrypt said media gains access to all the data
contained on it. With client-side encryption it's possible (and
advised) to encrypt each client with its own key, so that each client
can be managed independently of the others.


To sum it up - there are different needs, so there are different
solutions :-)


I'm only wondering (I admit, probably because I didn't read the docs
enough times ;->) whether the connection is still encrypted if we use
client-side encryption on bareos-fd. That would make the data in
transit double-encrypted, which is a bit pointless.


Best regards,

MK

On 21/12/2020 17:33, Brock Palen wrote:

Personally I would not use data encryption at the client if not
required. Use the newer versions of Bareos, where PSK (pre-shared keys)
derived from the password set up an encrypted tunnel over which the
data rides. Thus it lands on your SD unencrypted, but the data is
encrypted over the wire.

If you need to encrypt the data at rest, use LVM or FUSE encryption for
disk volumes, and LTO encryption for tape. This will encrypt the data
at rest but avoid managing keys for clients. It also makes restores not
dependent on those SSL certs; keys are needed only for the disk volume
and tape, which is all managed on the server and can be easily
replicated by the admin team. (I keep all my tape secrets in a
1Password encrypted note and a GPG-encrypted file; they are only needed
if I lose my catalog dump/backup, which is treated differently than my
client backups.)


The only reason I see today to use File Daemon encryption as documented
on that page is if you need to promise the client that you cannot
access their data. That is _only_ true if only the client has the
private key, AND, to echo what MK said, there is a huge risk that the
client will lose that key and not have it recoverable when you need to
do a restore.


If you rely on encryption using PSK, which should be automatic in any
recent Bareos version, it's much less error-prone.
E.g. look for: Connected Client: mlds at mlds:9102, encryption:
PSK-AES256-CBC-SHA

in your job logs. I do all this without managing certificates on the FD.


Brock Palen
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting




On Dec 21, 2020, at 8:21 AM, Spadajspadaj  wrote:

bareos-fd.conf is the configuration file for the bareos-filedaemon. The
Bareos filedaemon is the program running on the client which you are backing
up.

As per the documentation (which you already found), all data is encrypted on
the client prior to being sent to the server (to the Storage Daemon, to be
precise).

But please, read the documentation again (and again if need be) so you
understand how it works, and so you don't accidentally lose your keys (and
hence any possibility of decrypting the backed-up data!).



Best regards,

MK

On 21/12/2020 14:14, Gonçalo Sousa wrote:

Can someone help me, please?

On Monday, December 7, 2020 at 4:04:51 PM UTC Gonçalo Sousa wrote:

I am trying to implement data encryption on bareOS following this 
documentation: https://docs.bareos.org/TasksAndConcepts/DataEncryption.html

I have already created/generated the .cert, .pem and .key files on the BareOS 
server.

My question is: where do I configure them? The example only mentions
bareos-fd.conf.
Is this file located in /etc/bareos/bareos-dir.d/client/ ?

All the keys, pem and cert files must be located on the BareOS server, right?
Is all the configuration made only on the BareOS server?


Re: [bareos-users] space management on tape

2020-12-26 Thread Spadajspadaj
It's almost obvious if you look at the possible medium states, but to
give you a verbose answer: the media can be read from any point but can
only be appended to at the end.


So if any job is pruned/purged/deleted, it's just "forgotten" by the
database but is still present on the media where it originally was.


Oversimplifying a bit, a medium's life cycle is:

Purged -> Recycled -> Append -> Used/Full -> Purged again.

So, as you can see, there is no (de)fragmentation. A volume gets
appended to, then it gets recycled. Simple as that.


With disk-based storage it gets a bit more complicated with dynamically
created volumes, single-job volumes and auto-truncate on purge.


On 26/12/2020 17:18, 'Frank Cherry' via bareos-users wrote:


Hi there,
a question about how Bareos manages space on tape.
Hypothetical:

On an LTO tape, 3 jobs are stored in this order:
1: 3 TB
2: 2 TB
3: 1 TB

Job 1 is deleted.
Now a new job is queued; the spooling file has a size of 2 TB.

Will the SD now despool it
a) at position 4 of the tape (append) [this is what I think]
or
b) replacing position 1 because there is available space?

So, thinking about fragmentation would be one part of a backup
strategy when working with tapes.


Thanks and all the best, Frank






Re: [bareos-users] Execute script in FileSet

2020-12-25 Thread Spadajspadaj

First of all, you didn't read the docs carefully.

If you say 'File = "|command"', said command will be run by the
director, on the director machine and - what's important if it's the
same machine - in the context of the bareos-dir user.


So if you want to run the command on the client, you have to give it
not as "|command", but as "\\|command".


In addition, you might (depending on your installation) encounter
problems with the PATH variable or with proper escaping, so I'd suggest
wrapping your command in a simple script, putting it in /usr/local/sbin
on the client machine (let's say myscript.sh) and putting 'File =
"\\|/usr/local/sbin/myscript.sh"' into the FileSet resource, for
example like this:
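
(A rough sketch based on your find command; adjust the path and pattern
to your needs.)

On the client, /usr/local/sbin/myscript.sh:

#!/bin/sh
# print the list of files to back up, one per line
find /mnt/glusterfs/pve_dump/dump/ -type f -mtime -1 -name "*.zst"

And in the FileSet:

FileSet {
  Name = "Storage1PVE"
  Include {
    Options {
      Signature = MD5
    }
    # the escaped "\\|" makes the FD run the command on the client;
    # a bare "|" would run it on the director
    File = "\\|/usr/local/sbin/myscript.sh"
  }
}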



On 25/12/2020 12:11, 'Frank Cherry' via bareos-users wrote:

Hello there,
my goal is to back up only the files created today.

This is my FileSet:

FileSet {
  Name = "Storage1PVE"
  Description = "PVE backups from GlusterFS"
  Include {
    Options {
  Signature = MD5 # calculate md5 checksum per file
  Compression = LZ4
  noatime = yes
    }
    File = "|sh -c 'find /mnt/glusterfs/pve_dump/dump/ -type f -mtime 
-1 -name \"*.zst\"'"

  }
}

When I execute it on the client, it displays the right files.
But when I start a job in Bareos, I get:

bareos-dir JobId 13: Fatal error: Error running program: sh -c 'find 
/mnt/glusterfs/pve_dump/dump/ -type f -mtime -1 -name "*.zst"'. 
ERR=Child exited with code 1


I followed the instructions at:
https://docs.bareos.org/Configuration/Director.html#fileset-include



What am I doing wrong?




[bareos-users] Estimating usage for S3 storage plugin

2021-01-17 Thread Spadajspadaj

Hi.

I wanted to give the S3 storage plugin a try. For now just to see how
it works, but maybe to use it in production one day. But I have no idea
at all how to estimate S3 usage and thus the associated costs. I admit
I am no S3 expert at the moment, so it would be an opportunity to learn
about S3 at the same time. Where can I read a bit more about the S3
storage backend (apart from the manual, where as far as I can see I
only find how to configure the SD for S3)? I don't want to ask too many
newbie questions ;-) Especially about using different S3 tiers for
storage (it would make way more sense to use Glacier or even Glacier
Deep Archive for long-term storage rather than the Frequent Access
tier; at least price-wise).


I can of course set up an account and perform some small-scale tests
within the free tier, but I'd like to know what I would be doing ;-)


Best regards

MK



Re: [bareos-users] Estimating usage for S3 storage plugin

2021-01-25 Thread Spadajspadaj



On 18/01/2021 13:04, Spadajspadaj wrote:

On 18/01/2021 11:28, Brock Palen wrote:
Disclaimer: I have not used S3 with Bareos, but I have done many cloud
calculations.


A few things to think about when using cloud.
Are you running your SD in the cloud?
Are your backup clients in the cloud?
If not, what's your bandwidth? It will impact your backup and restore
times significantly if you have modest WAN capacity for local
clients/servers.


No, no. I was thinking about keeping an extra copy "off-site". I'm 
mostly cloud-free at the moment and I do not wish to change it 
significantly. I was thinking whether S3 could be an option for 
extending my home backup setup.


Of course I understand the impact of bandwidth on the backup/restore 
times. :-)



OK. I recalculated it using the AWS calculator and it turns out that
even with the Glacier tier I can live with the costs of storing the
data (some $16 per month for 4 TB), but in case of a disaster I'd have
to pay something like $180 for transfer. That's definitely not worth
it. I'd rather buy another disk and keep it in rotation.


It turns out it's not very useful for me after all.



Re: [bareos-users] Estimating usage for S3 storage plugin

2021-01-18 Thread Spadajspadaj

On 18/01/2021 11:28, Brock Palen wrote:
Disclaimer: I have not used S3 with Bareos, but I have done many cloud
calculations.


A few things to think about when using cloud.
Are you running your SD in the cloud?
Are your backup clients in the cloud?
If not, what's your bandwidth? It will impact your backup and restore
times significantly if you have modest WAN capacity for local
clients/servers.


No, no. I was thinking about keeping an extra copy "off-site". I'm 
mostly cloud-free at the moment and I do not wish to change it 
significantly. I was thinking whether S3 could be an option for 
extending my home backup setup.


Of course I understand the impact of bandwidth on the backup/restore 
times. :-)



As for S3 pricing, read this carefully:

https://aws.amazon.com/s3/pricing/

You have three components to pricing with S3, and I expect only two
move the needle on cost:


Data stored
Bandwidth and retrieval
Operations

Operations are so cheap, and guessing at how Bareos uses virtual tape
volumes, it's probably not a big issue. Someone who has used it can
speak to that, though.


That's a good observation. Thanks!

Data stored is a straight $/GB/month. So you need to estimate your
total data stored for all your fulls and incrementals. You're right
that these costs decline when you look at Glacier, but there is a
trade-off: the cheaper to store, the more expensive to access.


Retrieval fees come in two forms. The first is bandwidth, which for
most people is $0.09/GB (unless your clients and servers are in the
same AWS region); for my cloud activities this is 50% of my monthly
bill. It's the thing that messes up most cloud budget calculators.
That said, if your server is on-prem you will likely never pay this if
you don't use Always Incremental or do any restores. So if you're OK
paying for restores, maybe it's fine.


The cold tiers like Glacier charge to access data. Again, maybe fine if
you almost never read it. Glacier runs $10/TB or more for retrieval vs
nothing for regular S3; with bandwidth you're at ~$100/TB. That's a
reason to be careful with Deep Archive - their SLA is many hours to get
data. I don't think Deep Archive is a backup replacement but a
compliance archive replacement.



Well, that's what I'm counting on - it's better to have a backup copy
and not need to use it than not to have it ;-)


What I was also interested in was how to approach the long SLA in terms
of Bareos SD operation. Would I have to first request access to the
Glacier data independently of the SD, and only run a restore job after
receiving confirmation of data availability? Or would I just run a
restore job from a storage using a cold-tiered bucket, and the job
would simply wait for data availability (similar to mounting a tape)?


Also be aware that Glacier and Deep Archive have minimum retention
times of 90 and 180 days, so you will always pay for that at a minimum.
OK if you're keeping fulls for a long time. Look at the auto-tiering
options to manage aging volumes.


Yes, I noticed that




So YMMV. If you are 100% in the cloud, or you don't use Always
Incremental, or have small data volumes, or just want a DR copy, it
works great.


Personally I run my servers in AWS and my full Bareos setup on-prem
with a $400 tape library from eBay. This gives me diversity; most of
the data in the cloud is small (websites, email, text) while on-prem
it's video, photos and road warriors using Always Incremental.



So it all comes down to "try the free tier and see for yourself" :-)
I'll have to do it anyway when I get some spare time, just to see how
it works and to get some understanding of achievable throughputs,
needed space and so on.



Thanks for the valuable insight!



Re: [bareos-users] backup from gluster fs

2021-05-21 Thread Spadajspadaj

https://docs.bareos.org/Configuration/Director.html#fileset-options-resource

Mtime Only
   Type: yes|no

   If enabled, tells the Client that the selection of files during
   Incremental and Differential backups should be based only on the
   st_mtime value in the stat() packet. The default is *no*, which
   means that the selection of files to be backed up will be based on
   both the st_mtime and the st_ctime values. In general, it is not
   recommended to use this option.
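
So in your case, something along these lines might work (the fileset
name is made up; the path is taken from your example):

FileSet {
  Name = "glusterfs-piler"
  Include {
    Options {
      Signature = MD5
      Mtime Only = yes   # ignore the ctime churn from glusterfs metadata updates
    }
    File = /mnt/glusterfs/piler/store
  }
}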

On 21.05.2021 08:56, 'Frank Cherry' via bareos-users wrote:


Hi there,
what is the criterion for the Bareos FD to include a file in a
differential or incremental backup against the previous full backup?


Example:

[op1@storage1 ~]# stat 
/mnt/glusterfs/piler/store/00/588/00/45/40005885d683233125cc00df2ff10045.m
  File: 
„/mnt/glusterfs/piler/store/00/588/00/45/40005885d683233125cc00df2ff10045.m“

  Size: 3912    Blocks: 8  IO Block: 131072 regular file
Device: 17h/23d Inode: 10930643060122436912  Links: 1
Access: (0777/-rwxrwxrwx)  Uid: (  500/backup_master)   Gid: ( 
500/backup_master)

*Access: 2021-05-21 08:31:13.545365435 +0200*
*Modify: 2017-01-23 11:10:01.744076268 +0100*
*Change: 2021-05-21 08:31:13.568365427 +0200*

Is it Modify or Change? I think it is Change, because glusterfs
permanently updates the metadata of the files, so it's impossible to
back up ONLY modified files.


Is there a strategy for backing up GlusterFS volumes (FUSE mount), or
can I change the criterion the FD uses so it backs up only modified
files?


Thanks, Frank





Re: [bareos-users] Filesystem snapshot support

2021-07-06 Thread Spadajspadaj

For the filesystems I can think of as interesting for myself:

1) The Windows FD has VSS support.

2) In case of ZFS/LVM you can run pre/post scripts that create a
snapshot and mount it for reading, then unmount and remove the snapshot
after the backup - see the sketch below.
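
A rough, untested sketch for LVM (the volume group, LV and mount point
names are made up; the FileSet then points at /mnt/snap):

Job {
  # ... usual job settings ...
  Client Run Before Job = "/usr/local/sbin/snap-create.sh"
  Client Run After Job = "/usr/local/sbin/snap-remove.sh"
}

/usr/local/sbin/snap-create.sh:

#!/bin/sh
set -e
# create a 5G copy-on-write snapshot of vg0/data and mount it read-only
lvcreate --snapshot --size 5G --name data-snap /dev/vg0/data
mkdir -p /mnt/snap
mount -o ro /dev/vg0/data-snap /mnt/snap

/usr/local/sbin/snap-remove.sh:

#!/bin/sh
umount /mnt/snap
lvremove -f /dev/vg0/data-snap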


I suppose you could pack it into a python plugin, but at first glance
that seems like a bit of overkill.


On 06.07.2021 14:27, 'Christian Svensson' via bareos-users wrote:

Hello,

I am curious about the state of snapshot management in Bareos (and Bacula).
It seems that in the past Bacula at least had ZFS/LVM/BTRFS snapshot
support[1] but that seems to have been removed at some point.

To me, the ability to take a backup of a complex filesystem using e.g.
BTRFS incremental snapshots[2] seems like a very nice feature to have
in Bareos.
It seems it would be quite trivial to add support for implementing
Full, Incremental, and even Differential backups using these
snapshots.

Why would I need this? Consistency. I would like to snapshot the
filesystem at a point in time, not rely on the file daemon being able
to work fast enough for the backup not to diverge too much between
when it started and when it finished.

I can see two operation modes:

1) File-level:
Ideally the file daemon would sense that "hey, this is a btrfs volume,
I will take a snapshot to read all the files in the fileset from" and
it would be transparent to the sysadmin setting things up.
When the backup is complete, the snapshot is removed.

Changes are detected using the normal scanning of mtime etc.

2) Filesystem-level:
This is more involved but handles complete restores of the full FS
using "btrfs send" and "btrfs receive" but changes are instead handled
by btrfs, ensuring that all changed data is backed up regardless of
timestamps.

Thoughts?

If I wanted to implement (2), what would be a good way to do that - as
a python plugin?





Re: [bareos-users] Filesystem snapshot support

2021-07-06 Thread Spadajspadaj



On 06.07.2021 14:39, Christian Svensson wrote:

Hello,

On Tue, Jul 6, 2021 at 2:33 PM Spadajspadaj  wrote:

1) Windows FD has VSS support

That's interesting. It would be cool to have feature parity I suppose.


True, but there are many different filesystems on unices...


2) In case of ZFS/LVM you can run a pre/post scripts creating a snapshot
and mounting it for reading then unmounting and removing snapshot after
backup.

I suppose you can pack it into a python plugin but on first glance it
seems as a bit of an overkill.

I am trying to limit the amount of power the director has over my FDs
by allowing only backups and restores.
The "runscript" is a bit scary, I think, so I would prefer to avoid it
by e.g. having a plugin.
But that's just my own threat model; it is OK if it is not shared by
other folks :-).



Perfectly understandable, but you still have to give the FD quite a lot
of privileges on the box for the backup job anyway.


And - more importantly - even as a plugin, you'd have to give the FD
enough power to create the snapshot. So you might as well simply
prepare the script yourself.



I guess I could always have a /.snapshot/ directory in every supported
filesystem with a cron that refreshes it, and configure my FileSets to
only copy those files.
That would be an easy way to do it.


I was about to suggest a similar thing :-)

Of course, you have to keep in mind all the caveats that arise from the
asynchronicity between the backup jobs and the snapshot preparation.




Re: [bareos-users] Re: Job creating multiples new volumes

2021-07-12 Thread Spadajspadaj

I'd start by checking whether the media files are really created on disk.

Then I'd run the daemon at a higher debug level and capture the output.
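
For example, from bconsole (the storage name here is taken from your
config; trace=1 writes the output to a trace file in the SD's working
directory):

*setdebug level=200 trace=1 storage=backup-disco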

On 12.07.2021 16:24, Rodrigo Jorge wrote:

Hello Folks,

Is this a BUG or a CONFIG error?

Regards,

Rodrigo L L Jorge

On Sat, 10 Jul 2021 at 11:54, Rodrigo Jorge wrote:


Hello guys!

I have a problem: when I execute parallel jobs, many new volumes
are created in the catalog.

For example, I ran jobid 112109 and this job created 19 new
volumes.

06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6043" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6044" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6045" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6046" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6047" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6048" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6049" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6050" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6051" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6052" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6053" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6054" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6055" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6056" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6057" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6058" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6059" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6060" in catalog.
06-Jul 20:00 backup01-dir JobId 112109: Created new Volume
"diario-DISCO-6061" in catalog.

But the job used only one volume.

*list jobid=112109

+---------+-------------+----------------+---------------------+------+-------+----------+-------------+-----------+
| JobId   | Name        | Client         | StartTime           | Type | Level | JobFiles | JobBytes    | JobStatus |
+---------+-------------+----------------+---------------------+------+-------+----------+-------------+-----------+
| 112,109 | pmbp-sql-01 | pmbp-sql-01-fd | 2021-07-06 20:00:43 | B    | I     |        6 | 199,093,012 | T         |
+---------+-------------+----------------+---------------------+------+-------+----------+-------------+-----------+

*list jobmedia jobid=112109

+---------+-------------------+------------+-----------+
| JobId   | VolumeName        | FirstIndex | LastIndex |
+---------+-------------------+------------+-----------+
| 112,109 | diario-DISCO-6043 |          1 |         6 |
+---------+-------------------+------------+-----------+

If I execute one job at a time I don't have this problem.
I attach the full job execution log.

My Configs:

Client {
  Name = pmbp-sql-01-fd
  Address = 172.16.123.13
  Password = "PWD"
  @/etc/bareos/bareos-client-common.conf
}

Job {
  Name = "pmbp-sql-01"
  JobDefs = "DefJobsDISCO"
  Schedule = "DISCO_20h_TANDBERG"
  Client = pmbp-sql-01-fd
  Storage = backup-disco
  FileSet = file_pmbp-sql-01
  Client Run Before Job = "E:\bkp_full_pmbp-sql-01.bat"
}

FileSet {
  Name = "file_pmbp-sql-01"
  Enable VSS = yes
  Include {
  Options {
  Signature = MD5
  Drive Type = fixed
  IgnoreCase = yes
  WildFile = "[A-Z]:/pagefile.sys"
  WildDir = "[A-Z]:/RECYCLER"
  WildDir = "[A-Z]:/$RECYCLE.BIN"
  WildDir = "[A-Z]:/System Volume Information"
  Exclude = yes
    }
     File = E:/SQLBackup
  }
}

JobDefs {
 Name = "DefJobsDISCO"
 Type = Backup
 Level = Incremental
 Storage = backup-disco
 Schedule = "DISCO_19h_TANDBERG"
 Messages = Standard
 Accurate = no
 Pool = diario-DISCO
 Priority = 10
 Write Bootstrap = "/var/lib/bareos/%c.bsr"
 Full Backup Pool = semanal-DISCO
 Incremental Backup Pool = diario-DISCO
 Maximum Concurrent Jobs = 10
}

Schedule {
 Name = "DISCO_20h_TANDBERG"
 Run = Incremental 1st mon-fri at 20:00
 Run = Incremental 2nd-5th mon-sat at 20:00
 Run = Full sun at 20:00
 Run = Full Pool=Mensal Storage=TANDBERG 1st sat at 

Re: [bareos-users] Android backup

2021-07-06 Thread Spadajspadaj
In general, Android backup is - to put it delicately - a completely 
screwed up thing.


Especially if you don't have your phone rooted.


On 05.07.2021 19:35, Erich Eckner wrote:

Hi,

I was wondering if it is possible to back up an Android phone with
bareos. I searched the Play Store but couldn't find anything - but
maybe there's some "unofficial" app/solution?

regards,
Erich





Re: [bareos-users] Android backup

2021-07-10 Thread Spadajspadaj



On 10.07.2021 18:51, Erich Eckner wrote:


>> > In general, Android backup is - to put it delicately - a 
completely screwed up thing.

>>
>> > Especially if you don't have your phone rooted.
>>
>> Yes, I imagined it would be difficult/impossible to back up a
complete phone when it's not rooted. However, I could imagine backing
up "user data" (images, address book, emails, etc.) with some
pipe-like endpoint.

>>
> Not necessarily. Without root, any access to other apps' data is
severely limited (if not completely blocked). There is more access via
USB with debugging enabled and the adb tool, but even then the app must
permit backing up its data this way (there is a setting in the app's
manifest saying whether adb backup may access the app's data or not).
And thirdly - even with root access there is a huge PITA with migrating
system apps' data (phonebook, text messages and so on) between
different phones. Been there, done that, ended up manually decrypting
and unpacking the app data and converting contacts to CSV or LDIF.


> Mobile phones - their backups and/or migrations have always been a
completely horrible experience for me.


ok, I see. Thank you for your input!


To give you an idea of how I back up my Android device:

1) For text messages (and call logs) - an app called SMS Backup and
Restore - it creates a dump of text messages using the Android API and
stores it in an XML file, if I remember correctly. You can restore the
backup with the same app on another phone - tried it, does work.


2) For contacts - there is nothing reasonable really, except the
built-in synchronization with an external account. As I don't want to
push my contacts to Gmail or other such services, I have my own CardDAV
server running Radicale.


3) For app data - I have a script that does a batch dump of app data
for every installed app (as I wrote before, it doesn't work for some
apps which don't allow it - mostly games or banking apps) using adb
from the Android tools (it does a separate dump of each app and thus
needs to simulate entering the password and "clicking" on screen with
adb, so you don't have to enter the password separately for each app).
I simply attach my phone to my laptop with a USB cable and run this
script from time to time. A simplified sketch of the core idea follows
below.
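
(Without the screen-tap automation, the core loop looks roughly like
this; untested as shown, and the backups/ directory is just an example.)

#!/bin/sh
# dump app data for every third-party package via adb backup;
# apps with allowBackup=false will simply produce (near-)empty dumps
mkdir -p backups
adb shell pm list packages -3 | tr -d '\r' | sed 's/^package://' |
while read -r pkg; do
    adb backup -f "backups/$pkg.ab" "$pkg"
done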


Unfortunately, I don't see any easier reliable way. Of course you may
try to synchronize your data with a Gmail account, but to be honest I'm
not sure how well it works (especially with those apps which are marked
as not backuppable), and it does raise some privacy concerns for me.


HTH

PS: Since we're drifting from the main topic of this list, PM me if you
want more details about the script.






Re: [bareos-users] Android backup

2021-07-10 Thread Spadajspadaj



On 10.07.2021 09:10, Erich Eckner wrote:

On Tue, 6 Jul 2021, Spadajspadaj wrote:

> In general, Android backup is - to put it delicately - a completely 
screwed up thing.


> Especially if you don't have your phone rooted.

Yes, I imagined it would be difficult/impossible to back up a
complete phone when it's not rooted. However, I could imagine backing
up "user data" (images, address book, emails, etc.) with some
pipe-like endpoint.


Not necessarily. Without root, any access to other apps' data is
severely limited (if not completely blocked). There is more access via
USB with debugging enabled and the adb tool, but even then the app must
permit backing up its data this way (there is a setting in the app's
manifest saying whether adb backup may access the app's data or not).
And thirdly - even with root access there is a huge PITA with migrating
system apps' data (phonebook, text messages and so on) between
different phones. Been there, done that, ended up manually decrypting
and unpacking the app data and converting contacts to CSV or LDIF.


Mobile phones - their backups and/or migrations have always been a
completely horrible experience for me.





Re: [bareos-users] Restore backup without director

2021-03-12 Thread Spadajspadaj



On 12.03.2021 19:22, Sergey Zaguba wrote:

for instance
1 host - Bareos director
2 host Bareos-st

 For example, host number one burned out -

 let's consider two situations

1) we have a backup of the directory and the director's bareos database
2) we do not have a backup of the directory and director's base

 Host number two works without problems - how can I restore the 
necessary files from it?


There are several tools included with Bareos that work directly with
the media:
https://docs.bareos.org/Appendix/BareosPrograms.html#volume-utility-commands


But.

They do have their own limitations (for example, you can't restore
encrypted files with bextract), and their usage is tricky.
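
For simple cases it goes roughly like this (the volume and device names
here are illustrative; the device name must match the SD
configuration):

bls -V Full-0001 FileStorage                      # list files on the volume
bextract -V Full-0001 FileStorage /tmp/restore    # extract them to /tmp/restore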


So it's best to have your director backed up in some additional way that 
permits quick disaster recovery.




Re: [bareos-users] Backup data to AWS S3 bucket using the BareOS utility

2021-03-17 Thread Spadajspadaj

Yes, there is a plugin for the storage daemon to store data in S3. See the docs.

On 16.03.2021 17:02, Kaushal Shriyan wrote:

Hi,

Is there a way to push backup data to an AWS S3 bucket using the
BareOS utility? For example, if I back up both the configuration and
the data directory of the GitLab SCM service using BareOS, can it be
pushed to an AWS S3 bucket instead of being stored locally on the
BareOS server?


Please advise. Thanks in advance.

Best Regards,

Kaushal




Re: [bareos-users] Compatibility question

2021-03-17 Thread Spadajspadaj
Of course it's always best to have the whole environment on a coherent,
current version, but it's usually quite OK to have the director
slightly "ahead" of the clients. It's the other way around that easily
causes problems: if you have FDs newer than the director, you might run
into trouble.


On 17.03.2021 08:06, 'Frank Cherry' via bareos-users wrote:


Hi there,
On my backup system I run Bareos v20 (Director and Storage Daemon), but
I have a CentOS 6 system that is running Bareos v19 (File Daemon). I
have tested it successfully.

Are there any pages which show the version compatibility?
As I read between the lines in
https://docs.bareos.org/Appendix/BackwardCompatibility.html

backward compatibility with a lower FD version should always be
possible - right?


Thanks and all the best,
Frank




Re: [bareos-users] How to use more than one file location in one pool?

2021-03-02 Thread Spadajspadaj
I'm not sure what you're trying to achieve, but if you want a single
pool spanning several directories, you might look into vchanger and
simulate a changer by "switching" directories.


On 02.03.2021 16:04, lst_ho...@kwsoft.de wrote:

Hello,

we are trying to use several file locations for file-based backups in
one pool. As we cannot use the same "Media Type" for different file
paths, we could use multiple "Storage" entries in the pool. But
according to the documentation this does not work:


"Be aware that you theoretically can give a list of storages here but
only the first item from the list is actually used for backup and
restore jobs."


So is there any other way to use multiple file paths for one pool?

Thanks

Andreas






Re: [bareos-users] Autchanger Script fails

2021-03-07 Thread Spadajspadaj

+ offline=0
+ offline_sleep=0
+ load_sleep=0

Have you tried playing with those values?

It seems like a good place to start.

By default they're defined in /etc/bareos/mtx-changer.conf.
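
For a drive that needs an explicit offline command and some settling
time, something like this (the values are only a starting point to
experiment with):

# /etc/bareos/mtx-changer.conf
offline=1         # run 'mt offline' before unloading the drive
offline_sleep=5   # seconds to wait after the offline command
load_sleep=10     # seconds to wait after loading a tape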

On 07.03.2021 22:16, 'tilmang...@googlemail.com' via bareos-users wrote:
1) I gave it a try. Using the mtx-changer script directly (without
using bconsole after booting up) seems to work. I mounted a tape via
the script and then unmounted it:

./mtx-changer  /dev/changer1 unload  8 /dev/nst0 0
+ test ! -r /etc/bareos//mtx-changer.conf
+ . /etc/bareos//mtx-changer.conf
+ offline=0
+ offline_sleep=0
+ load_sleep=0
+ inventory=0
+ vxa_packetloader=0
+ debug_log=0
+ uname
+ OS=Linux
+ ready=ONLINE
+ test -f /etc/debian_version
+ grep mt-st
+ mt --version
+ test 0 -eq 1
+ MTX=/usr/sbin/mtx
+ test ! -x /usr/sbin/mtx
+ MT=/bin/mt
+ test ! -x /bin/mt
+ dbgfile=/var/bareos/logs/mtx.log
+ test 0 -ne 0
+ check_parm_count 5 5
+ pCount=5
+ pCountNeed=5
+ test 5 -lt 5
+ ctl=/dev/changer1
+ cmd=unload
+ slot=8
+ device=/dev/nst0
+ drive=0
+ debug Parms: /dev/changer1 unload 8 /dev/nst0 0
+ test -f /var/bareos/logs/mtx.log
+ date +%Y%m%d-%H:%M:%S
+ echo 20210307-18:32:02 Parms: /dev/changer1 unload 8 /dev/nst0 0
+ debug Doing mtx -f /dev/changer1 unload 8 0
+ test -f /var/bareos/logs/mtx.log
+ date +%Y%m%d-%H:%M:%S
+ echo 20210307-18:32:02 Doing mtx -f /dev/changer1 unload 8 0
+ test 0 -eq 1
+ test 0 -ne 0
+ make_err_file
+ mktemp /var/bareos/working/mtx.err.XX
+ ERRFILE=/var/bareos/working/mtx.err.pJijcI8Tu2
+ test x/var/bareos/working/mtx.err.pJijcI8Tu2 = x
+ /bin/mt -f /dev/nst0 eject
+ sleep 10
+ /usr/sbin/mtx -f /dev/changer1 unload 8 0
Unloading drive 0 into Storage Element 8...done
+ rtn=0
+ cat /var/bareos/working/mtx.err.pJijcI8Tu2
+ rm -f /var/bareos/working/mtx.err.pJijcI8Tu2
+ exit 0

2) Mounting and unmounting a tape, and thereby loading and unloading it
on a freshly booted machine via bconsole, works.


3) Mounting a tape, running a backup job and then trying to
umount/unload the tape does not work and leads to the error message
"ERR=Child died from signal 15: ". Unloading the tape with mtx-changer
works, however. The debug mode of the mtx-changer script shows an error
message "/dev/nst0: No medium found", which is consistent: the
previously failing umount command issued an eject command. The tape is
ejected from the drive but not unloaded into the magazine. When
subsequently running the mtx-changer command, the medium is indeed no
longer in the drive, but it is also not yet back in its slot.


*status storage=TapeStorage1
Connecting to Storage daemon TapeStorage

Version: 19.2.6 (11 February 2020) Linux-4.15.0-112-generic ubuntu 
Ubuntu 18.04.4 LTS
Daemon started 07-Mär-2021 18:55. Jobs: run=2, running=0, self-compiled 
binary

 Sizes: boffset_t=8 size_t=4 int32_t=4 int64_t=8 bwlimit=0kB/s

Running Jobs:
No Jobs running.


Jobs waiting to reserve a drive:


Terminated Jobs:
 JobId  Level    Files  Bytes   Status   Finished Name
===
 ..
  2730  Full 42,072    16.36 G  OK   07-Mär-2021 21:25 
BackupTgvs2ToTape



Device status:
Autochanger "AutoChanger1" with devices:
   "TapeDrive1" (/dev/nst0)

Device "TapeDrive1" (/dev/nst0) is mounted with:
    Volume:  Tgvs2Tape-14
    Pool:    Tgvs2-Tape
    Media type:  DDS-4
    Slot 9 is loaded in drive 0.
    Total Bytes=16,380,499,968 Blocks=253,913 Bytes/block=64,512
    Positioned at File=17 Block=0
==


Used Volume status:
Tgvs2Tape-14 on device "TapeDrive1" (/dev/nst0)
    Reader=0 writers=0 reserves=0 volinuse=0




*umount storage=TapeStorage1

Connecting to Storage daemon TapeStorage1 ...
3307 Issuing autochanger "unload slot 9, drive 0" command.
3995 Bad autochanger "unload slot 9, drive 0": ERR=Child died from 
signal 15: Termination

Results=Program killed by BAREOS (timeout)

3002 Device ""TapeDrive1" (/dev/nst0)" unmounted.

#> mtx -f /dev/changer1 status
  Storage Changer /dev/changer1:1 Drives, 12 Slots ( 0 Import/Export )
Data Transfer Element 0:Full (Storage Element 9 Loaded)
  Storage Element 1:Full
  Storage Element 2:Full
  Storage Element 3:Full
  Storage Element 4:Full
  Storage Element 5:Full
  Storage Element 6:Full
  Storage Element 7:Full
  Storage Element 8:Full
  Storage Element 9:Empty
  Storage Element 10:Full
  Storage Element 11:Full
  Storage Element 12:Full

#>  ./mtx-changer  /dev/changer1 unload  9 /dev/nst0 0
+ test ! -r /etc/bareos//mtx-changer.conf
+ . /etc/bareos//mtx-changer.conf
+ offline=0
+ offline_sleep=0
+ load_sleep=0
+ inventory=0
+ vxa_packetloader=0
+ debug_log=0
+ uname
+ OS=Linux
+ ready=ONLINE
+ test -f /etc/debian_version
+ grep mt-st
+ mt --version
+ test 0 -eq 1
+ MTX=/usr/sbin/mtx
+ test ! -x /usr/sbin/mtx
+ MT=/bin/mt
+ test ! -x /bin/mt
+ dbgfile=/var/bareos/logs/mtx.log
+ test 0 -ne 0
+ check_parm_count 5 5
+ pCount=5
+ pCountNeed=5
+ test 5 -lt 5
+ ctl=/dev/changer1
+ 

Re: [bareos-users] Telegram + Bareos 18.2.5

2021-03-08 Thread Spadajspadaj
Haven't used Telegram in my life, but it seems possible (and should be 
quite easy) to use the Linux Telegram CLI client:


https://github.com/vysheng/tg
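
Alternatively - just a sketch, I haven't run this with Bareos myself - 
the Telegram Bot HTTP API can be called from a plain shell script (token 
and chat id below are placeholders), which could then be hooked into a 
run-after-job script:

#!/bin/sh
# send a Bareos job status line to a Telegram chat via the Bot API
TOKEN="123456:ABC-placeholder"
CHAT_ID="987654321"
TEXT="Bareos: job $1 finished with status $2"
curl -s "https://api.telegram.org/bot${TOKEN}/sendMessage" \
    -d chat_id="${CHAT_ID}" -d text="${TEXT}" >/dev/null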

On 08.03.2021 20:38, Matheus Inacio wrote:


Hello!!

Has anyone integrated bareos with the telegram, to receive job status ??


thanks


Re: [bareos-users] Autchanger Script fails

2021-02-27 Thread Spadajspadaj
If mtx runs fine, I'd try to run the mtx-changer script in bash "debug 
mode" (bash -x mtx-changer ...) and see what it is that mtx hangs and 
times out on.


On 27.02.2021 12:21, 'tilmang...@googlemail.com' via bareos-users wrote:

Dear spadaj

I forgot to mention that mtx runs OK. It lives in /usr/sbin/mtx, and 
the log files are in /var/bareos/working/. The logfiles are however empty.

mtx-changer script lives in /etc/bareos

Dear Andreas

I am using bareos 19.2.6 (self compiled).  I do not think that the 
drive has a data spooling option as it is a relatively old HP C5683A


Thanks
Tilman




On Monday, February 22, 2021 at 12:06:59 PM UTC+1 Andreas Rogge wrote:

Hi Tilman,

are you using Bareos 20 and have data-spooling enabled on the tape
drive?
You may have hit a bug that will be fixed in the upcoming 20.0.1,
then.

Best Regards,
Andreas

-- 
Andreas Rogge andrea...@bareos.com

Bareos GmbH & Co. KG Phone: +49 221-630693-86

http://www.bareos.com 

Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646
Komplementär: Bareos Verwaltungs-GmbH
Geschäftsführer: S. Dühr, M. Außendorf, J. Steffens, Philipp Storz



Re: [bareos-users] Is Bareos suitable for this scenario?

2021-04-16 Thread Spadajspadaj

I somehow missed the original email.

I suspect that with many, many small files you're mostly limited by the 
performance of the source filesystem (and the whole system) rather than 
by the backup itself. Regardless of the method used to decide whether a 
file needs backing up, its metadata still has to be read from the 
filesystem. Tuning the source system (giving more memory to the metadata 
cache) could probably help a little.
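
For example (a sketch; the value is just something to experiment with, 
not a recommendation), on Linux you can make the kernel prefer keeping 
dentry/inode caches:

# default is 100; lower values favour keeping filesystem metadata cached
sysctl -w vm.vfs_cache_pressure=50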


But I wouldn't expect big differences just by switching from rsync to 
bareos.


Just my three cents.

On 16.04.2021 15:21, Brock Palen wrote:

I have not seen any replies to your question. I can't speak to that volume of 
data, though I see no reason why it cannot work. Here are my thoughts below on 
how I would approach it, along with some of your other questions.

* The number of files will impact things more than total data size.  It will 
increase database size, scan time etc.
* I have easily seen Bareos saturate well above 100Mbit networking.  Though 
100Mbit is very slow for the initial full backup of 200T - you are looking at 
a minimum of 6 months, assuming the data does not compress.  For the initial 
backup you might want to do sneaker net with a raspberry pi and a drobo.  This 
is what I do: the full backup is done on site @ gig speeds, then I carry the 
entire setup to the other site and do a volume migration to the real server.
https://fasterdata.es.net/home/requirements-and-expectations/

* Look at the Bareos client-side compression options; on bandwidth-constrained 
hosts (this includes cloud, because of cost) I use gzip turned all the way up.  
This will peg one CPU core, but for text data it reduces the volume of data 
over the wire drastically.  Something like lz4 has a much lower CPU impact but 
still gets about 70% of the compression of gzip.  If you have the CPU core to 
burn and in a test it still saturates your 100Mbit, maybe use it to get that 
backup time down.  If this is all video or already-compressed images it likely 
just burns CPU for no benefit.  Bareos gives you a report at the end of a job 
of how well it compressed.  (A config sketch follows after this list.)

* How Bareos checks for files: using the accurate setting (recommended), the 
server will upload a list of files it knows about to the client, which 
compares them.  This process is very fast; by default Bareos won't use 
checksums to compare, but only checks 1. does the file exist, 2. is the 
filesystem metadata newer than in the database/catalog (the file has changed).  
Incrementals with Bareos are much faster than rsync.  (I have moved PB of 
data with rsync.)
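
To make the compression and accurate knobs above concrete, here is a 
minimal sketch (resource names and paths are invented, adapt to your 
config):

FileSet {
  Name = "big-tree"
  Include {
    Options {
      Signature = MD5
      Compression = LZ4    # or GZIP9 for maximum ratio at higher CPU cost
    }
    File = /srv/data
  }
}

Job {
  Name = "backup-big-tree"
  JobDefs = "DefaultJob"
  FileSet = "big-tree"
  Accurate = yes    # director sends the known-file list to the client
}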


With 200TB of data you will want a lot of tape, otherwise you're looking at 
400TB+ of disk.  If you're new to backups: you have to build a new "full" 
every so often.  Given your network is 100Mbit I would look at the Always 
Incremental features of Bareos.  This will let you avoid the 180 days of a new 
full backup.  You still have to write 200TB every so often, but it can all be 
done Bareos server side.  I recommend tape just for cost, as you need 66 LTO7 
tapes or 33 LTO8 tapes (for the 400TB).  LTO7 is still the best value, but 
LTO8 has come down in cost a lot and LTO9 is scheduled for GA this year.  You 
will also want a few tape drives and a fast spool pool of disks to do this 
right.  This 2x minimum size is one downside backup systems have compared to 
rsync.

An all-disk solution will be faster for the VirtualFull, because a big raidz2 
will have greater bandwidth, but it will be expensive.  You could look at 
something like 45 Drives to turn into your SD.  I do a mix (again at a 
fraction of the size you are, with Bareos).

I would personally split this into several jobs using wildcards in filesets, 
and not have one 200TB job but several multi-TByte jobs.  This will also let 
you run jobs in parallel, recover better from a full backup failure, not have 
to copy 200TB when you do a full, etc.  (A sketch follows below.)
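
A sketch of the split (paths and names are invented; you could also 
carve up one tree with wilddir options instead):

FileSet {
  Name = "projects-a"
  Include {
    Options { Signature = MD5 }
    File = /data/projects-a
  }
}

Job {
  Name = "backup-projects-a"
  JobDefs = "DefaultJob"
  FileSet = "projects-a"
}

# ...repeat the pair for /data/projects-b, /data/projects-c, and so on.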



Brock Palen
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting




On Apr 15, 2021, at 10:49 AM, Steve Eppert  wrote:

Hi.
I need to backup around 200 TB of data (with many small files) with around 1 TB 
per week new/changed data. Currently I simply rsync the data to an offsite 
location using a 100 MBit/s connection.

While searching for solutions for making the rsync faster (because of the many 
small files an rsync almost never uses the full 100 MBit/s) I stumbled across 
Bareos.

A question I could not find an answer to in the docs is: how does the 
bareos-filedaemon check for changed data when doing an incremental backup? Does 
the daemon hold some kind of database or does it check each file against the 
Bareos server? I'm wondering if a Bareos incremental backup job might be faster 
than the rsync.

Also after looking at the docs I'm considering purchasing a tape loader to 
backup a specific subset of more valuable data to tape.
Is it possible to have incremental backups to disk and do a regular full backup 
of only a subset of this data to tape?

Is it possible to get filesystem access to the 

Re: [bareos-users] Re: longtime archive

2021-02-15 Thread Spadajspadaj



On 15.02.2021 08:04, 'Frank Kirschner | Celebrate Records GmbH' via 
bareos-users wrote:

Finally, can I discuss the following example:

I have to archive audio, video and print files from 3 departments as 
"cold data",

stored on tape:

First, I will copy all audio files to a local hard disk on the same host 
where the tape drive is connected directly, because copying files over 
the network from a different host is slower than writing to tape.

Second, creating a job, using "Enabled = no" for starting it manually 
via GUI.

Setting the client to the local fd,
Setting a fileset which points to the local directory where the data 
from step #1 are stored
Setting also the storage to the tape
Define a pool where I have predefined some empty tapes

Now run the job and archive the audio files.
When it finishes successfully, would it be a good idea to delete the 
collected audio files from the hard disk, go ahead with copying the 
video files, and start the job again? Or would it be better to create 
for each type of data (audio, video, print) its own job with its own 
pool and tapes named like:
audio1, audio2 / video1, video2 ...?

Sorry for asking, but you guys have more experience with such a tape 
scenario than a greenhorn like me :-)

My first thought is that archiving is much more than just using a backup 
solution to copy files. Archiving is a whole process which should be 
designed with proper data security in mind (i.e. appropriate copy 
redundancy and data verifiability).


Secondly, "First, I will will do copy all audio files to a local hard 
disk on the same host, where the tape is connected directly, because 
copying files of the network from different host a slower than writing 
to tape". Not necessarily. That's what you use spooling for.


Thirdly - I used to do a "copy and delete" scenario a few years ago, but 
I had a slightly different setup, so my solution is not directly 
copy-paste applicable to you. I'd suggest you look into:


1) Dynamically create a list of files to back up (might involve checking 
client files for ctime or querying the bareos database to verify if the 
file has already been backed up)


2) Create a post-job script which removes files that have already been 
backed up in a proper way (i.e. included in a given number of backup 
jobs if you want to have several copies) - this definitely involves 
querying the director's database.
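
A rough sketch of such a post-job hook (the script path is hypothetical; 
the actual cleanup logic lives in that script):

Job {
  Name = "archive-audio"
  JobDefs = "DefaultJob"
  RunScript {
    RunsWhen = After
    RunsOnClient = no     # run on the director, where the catalog lives
    RunsOnFailure = no
    Command = "/usr/local/bin/purge-archived-files.sh %i"
  }
}

(%i expands to the job id, which the script can use when querying the 
catalog.)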


Best regards,

MK




[bareos-users] Problems with remote installation of 19.2 FD on Windows 10

2021-02-15 Thread Spadajspadaj

Hi.

I was a bit surprised to find out that I couldn't reliably install the 
19.2 fd on a current (all updates applied) Win10 Pro machine. I've 
installed it on many servers and never had any problems, but when I 
tried to install it on my wife's laptop, it said (at the end of the 
installation) that "The service name is invalid" and wouldn't install 
the service. Also, if I tried to run bareos-fd.exe by hand it would 
complain about some entry points (bah, didn't take screenshots; my bad).


When I went downstairs to my wife's laptop and ran the installer while 
physically logged in on the laptop, the installation went smoothly, the 
service was installed OK and ran properly, and now I'm backing up the 
machine with no further problems.


Anyone had similar experience?

(I tried both 19.2.4-rc1 and 19.2.7 and the result was the same).


Best regards,

MK



Re: [bareos-users] Re: longtime archive

2021-02-15 Thread Spadajspadaj



On 15.02.2021 09:27, 'Frank Kirschner | Celebrate Records GmbH' via 
bareos-users wrote:


Secondly, "First, I will will do copy all audio files to a local hard 
disk on the same host, where the tape is connected directly, because 
copying files of the network from different host a slower than 
writing to tape". Not necessarily. That's what you use spooling for.
Spooling is not working for this scenario, because I have to back up 
multiple clients; the manual says: "Each Job will reference only a 
single client."
So I use a "run before" script which collects the data from the 3 
clients. On each client the files are placed in an "archiving" folder 
manually by the operator.


Sure. If this is the case, it sounds reasonable :-)

You might also just have three separate clients from which you back up 
with spooling, but it's of course up to you. I don't know your setup 
well enough to suggest one solution or another.




Thirdly - I used to do a "copy and delete" scenario a few years ago, 
but I had a slightly different setup, so my solution is not directly 
copy-paste applicable to you. I'd suggest you look into:


1) Dynamically create a list of files to back up (might involve 
checking client files for ctime or querying the bareos database to 
verify if the file has already been backed up)


2) Create a post-job script which removes files that have already 
been backed up in a proper way (i.e. included in a given number of 
backup jobs if you want to have several copies) - this definitely 
involves querying the director's database.

That's a good idea for my scenario. Thanks for this good hint,



For example, my fileset included something like this:

FileSet {
    Name = "Local-archives"
    Include {
        File = "\\| find /srv/archives -type f -not -path '*backup*' 
-ctime +60"

    }
}

This copied to tape only the files located under /srv/archives - with no 
"backup" in the file name (or in a directory in the path) - whose ctime 
was more than two months old.


Then I would run a script (in my case it was run asynchronously by cron, 
not from a post-job trigger, but a post-job script is just as good here) 
involving a query like:


select
    concat(path.path, file.name) as filepath,
    count(distinct job.jobid) as jobcount
from
    (path
     join file on file.pathid = path.pathid)
     join job on file.jobid = job.jobid
where job.jobstatus = 't' and job.name like '%my_srv%'
group by filepath
having jobcount >= 3;


This finds files that have already been backed up 3 times by different 
jobs, so I can remove them from disk. Of course you might want to extend 
the query to include - for example - the media table, to make sure that 
the files have been copied to separate tapes.
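
A rough sketch of such an extension (counting distinct tapes instead of 
jobs; note that a single job spanning two volumes would also bump the 
count, so treat this as a starting point only):

select
    concat(path.path, file.name) as filepath,
    count(distinct media.mediaid) as tapecount
from
    path
    join file on file.pathid = path.pathid
    join job on file.jobid = job.jobid
    join jobmedia on jobmedia.jobid = job.jobid
    join media on media.mediaid = jobmedia.mediaid
where job.jobstatus = 't' and job.name like '%my_srv%'
group by filepath
having tapecount >= 3;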






Re: [bareos-users] Bareos Director Hosted on Internet

2021-09-24 Thread Spadajspadaj
In the case of a multi-location setup you need to think about ways of 
limiting access and connection direction.


I have a "reverse" setup - I needed passive clients so I can initiate 
connections from director/sd _to_ fd. You might need the opposite, as I 
see, so it's pretty standard.


There is _always_ a risk when you're putting something open to the 
internet, so if you want to limit your exposure, think about filtering 
the traffic on the network/OS level (limiting access to the bareos ports 
only to specific addresses) and of course you can always think about 
setting up a VPN between your locations.
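
For example (the source network is a placeholder; 9101-9103 are the 
standard dir/fd/sd ports):

# allow the Bareos ports only from the remote site, drop everything else
iptables -A INPUT -p tcp -s 198.51.100.0/24 --dport 9101:9103 -j ACCEPT
iptables -A INPUT -p tcp --dport 9101:9103 -j DROP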


On 24.09.2021 09:25, Florian Panzer - PLUSTECH GmbH wrote:


We're running this setup (public director + client-initiated fd 
connections) with overall success.

No problems so far - apart from the usual* ;)

I'm sure nobody will guarantee that there are no security flaws - 
there most likely are.



*) bareos-dir crashing on typo in config followed by reload
*) bareos-dir crashing because it's tuesday

Florian Panzer

---
PLUSTECH GmbH
Jäckstraße 35
96052 Bamberg
Telefon: +49 951 299 09 716
https://plustech.de/
Geschäftsführung: Florian Panzer
Amtsgericht Bamberg - HRB 9680
---
On 24.09.21 at 02:51, Alexandre Denault wrote:

Hi,

I'm working on a somewhat complicated Bareos setup, and it would be 
much simpler/easier to host the Bareos Director over the Internet. 
Combined with Active Storage and File clients, it would simplify my 
multisite setup greatly.


That said, is the Bareos Director robust enough to be hosted over the 
Internet? Is it secure? I would ensure that any client without a 
private key recognized by the Director would not be able to interact 
with it.


Thanks,

Alex




Re: [bareos-users] Bareos Director Hosted on Internet

2021-09-25 Thread Spadajspadaj
Well... yes, if you use TLS Verify Peer, then the TLS library is your 
first line of defence, because you shouldn't be able to connect with a 
peer without a valid certificate. I don't see any mention of CRLs in the 
TLS configuration directives, though, so you might want to think about 
how you would address a possible issue with a compromised client (you 
can explicitly allow specified CNs with the TLS Allowed CN option as a 
workaround).
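
Something along these lines in the director's client resource (all names 
are placeholders, a sketch only):

Client {
  Name = laptop-fd
  Address = laptop.example.com
  Password = "secret"
  TLS Enable = yes
  TLS Verify Peer = yes
  TLS Allowed CN = "laptop.example.com"
}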


In general, a compromised client - unless it abuses some error in the 
director software - shouldn't be able to exploit the director. As you 
can see in the picture at 
https://docs.bareos.org/IntroductionAndTutorial/WhatIsBareos.html#interactions-between-the-bareos-services 
even though it might be the FD that connects to the DIR (if you don't 
use passive clients), it's the Director that issues commands to the FD.


Of course a rogue fd could try to generate an endless stream of data, 
but you can mitigate that to some extent by - for example - limiting the 
job run time or fiddling with Maximum Volume Jobs and Maximum Volume 
Bytes in the case of file-backed storage.
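
For file-backed storage that could look roughly like this (values are 
arbitrary examples, not recommendations):

Pool {
  Name = FilePool
  Pool Type = Backup
  Maximum Volume Jobs = 1       # one job per volume file
  Maximum Volume Bytes = 50G    # cap the size of each volume file
}

plus something like Max Run Time = 6 hours on the job to kill runaway 
jobs.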



On 24.09.2021 21:57, Alexandre Denault wrote:

Hi,

I understand that there is a risk for any application on the Internet. 
Heck, even Nginx and Apache have a certain risk.


I'm trying to gauge the amount of risk based on the security of the 
director. My understanding is that I would need to expose a TLS socket 
which no one can interact with without an acceptable key. That said, I 
understand that if one of my clients is compromised, the attacker would 
have a foothold on the director.


Should this be a concern? Can a "rogue" file client really do any 
damage other than to its own backups? I guess it could try filling the 
storage pool. Or am I being paranoid?


Cheers,

On Fri, Sep 24, 2021 at 2:36 PM Spadajspadaj <spadajspa...@gmail.com> wrote:


In the case of a multi-location setup you need to think about ways of
limiting access and connection direction.

I have a "reverse" setup - I needed passive clients so I can
initiate connections from director/sd _to_ fd. You might need the
opposite, as I see, so it's pretty standard.

There is _always_ a risk when you're putting something open to the
internet, so if you want to limit your exposure, think about
filtering the traffic on the network/OS level (limiting access to
the bareos ports only to specific addresses) and of course you can
always think about setting up a VPN between your locations.

On 24.09.2021 09:25, Florian Panzer - PLUSTECH GmbH wrote:


We're running this setup (public director + client-initiated fd
connections) with overall success.
No problems so far - apart from the usual* ;)

I'm sure nobody will guarantee that there are no security flaws
- there most likely are.


*) bareos-dir crashing on typo in config followed by reload
*) bareos-dir crashing because it's tuesday

Florian Panzer

---
PLUSTECH GmbH
Jäckstraße 35
96052 Bamberg
Telefon: +49 951 299 09 716
https://plustech.de/
Geschäftsführung: Florian Panzer
Amtsgericht Bamberg - HRB 9680
---
On 24.09.21 at 02:51, Alexandre Denault wrote:

Hi,

I'm working on a somewhat complicated Bareos setup, and it would
be much simpler/easier to host the Bareos Director over the
Internet. Combined with Active Storage and File clients, it
would simplify my multisite setup greatly.

That said, is the Bareos Director robust enough to be hosted
over the Internet? Is it secure? I would ensure that any client
without a private key recognized by the Director would not be
able to interact with it.

Thanks,

Alex

Re: [bareos-users] List space used/free on each tape currently in autochanger

2021-10-20 Thread Spadajspadaj
You can query the database for media details. You can get both the used 
capacity and the overall capacity, in bytes and blocks, but be aware 
that you also need to take into account the status of the volume - you 
can only write more jobs to a tape if it's in "Append" status.

Example query (using my pool name; choose any columns you want and run 
it via sqlquery in bconsole):


select
  media.volumename,
  media.mediaid,
  media.volcapacitybytes,
  media.volbytes,
  media.maxvolbytes,
  media.volblocks,
  media.endblock,
  media.volstatus
from
 media join pool
   on media.poolid = pool.poolid
where
  pool.name like 'Offsite%' and media.inchanger=1;

On 20.10.2021 11:13, JHI Star wrote:


Is this possible ?

Many thanks


Re: [bareos-users] chown/chgrp question

2021-11-05 Thread Spadajspadaj

https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_Level

"The File daemon (Client) decides which files to backup for an 
Incremental backup by comparing start time of the prior Job (Full, 
Differential, or Incremental) against the time each file was last 
“modified” (st_mtime) and the time its attributes were last 
“changed”(st_ctime). If the file was modified or its attributes changed 
on or after this start time, it will then be backed up."


A change of owner/permissions is a change of file attributes. So, 
according to that logic, the file should be backed up again.
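
You can see the effect with plain stat (a sketch using GNU stat on 
Linux; the file name is made up):

$ stat -c 'mtime=%y ctime=%z' big.iso
$ chown alice big.iso                   # contents untouched
$ stat -c 'mtime=%y ctime=%z' big.iso   # ctime moved forward, so the
                                        # next incremental picks it up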


Read the whole Job Level section to better understand the 
Incremental/Differential level and the logic behind them.


On 05.11.2021 11:53, wfo...@gmail.com wrote:

Hi,
When we change the owner/group permissions of files/directories, it 
will back up these files/directories again even though their contents 
did not change.

Is this a normal behavior?
Sometimes we need to change the ownership of big files/directories, but 
every time they will be backed up again.

thanks in advance
regards


[bareos-users] Bareos server on Raspberry Pi?

2022-01-14 Thread Spadajspadaj

Hello there.

I've been using Bareos for backups for quite some time at home. At the 
moment I have a Gigabyte Brix box with 4G RAM and external USB-connected 
swappable disks set up with vchanger so I have off-line backups. 
Everything runs great.


But.

The Brix started failing lately and I'm looking for a replacement. To be 
absolutely honest - the price is the key factor here. I thought about a 
Raspberry Pi as the backup server even those few years ago when I set up 
the original environment, but at that point it was not enough for me 
performance-wise. If I remember correctly, the RasPi didn't have USB3 
back then.


Anyway.

Nowadays the RasPi 4B seems to have GigEthernet, 5G WiFi and USB3, so in 
theory it should work sufficiently quickly. But that's all "on paper".


The question is whether it's able to reach proper performance in reality 
(also, the RasPi has only 2G of RAM... would that suffice for bareos and 
a db server?). After all it's a different architecture and it's not 
meant for such a workload. Also - should I worry about SD card wear 
while running the database?


And the hardest question - is there a decently recent version of bareos 
available for Raspberry Pi at all? I see 16.2 in raspbian archives - 
that's quite old.


Still, if I found a proper version - could I simply migrate the server 
(director and storage) settings (I suppose so) and the database dump (I 
think dump & restore would be needed; I wouldn't count on moving binary 
postgres database files)?
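
I imagine the dump & restore itself would be something like the sketch 
below (database name "bareos" assumed; the empty catalog on the new box 
would be created by the usual Bareos database scripts first):

# on the old server
sudo -u postgres pg_dump bareos > bareos-catalog.sql
# on the RasPi, after installing bareos and creating an empty catalog
sudo -u postgres psql bareos < bareos-catalog.sql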


Of course the easiest solution for me would be to buy a new Brix-like 
PC, but let's be honest, they cost at least twice as much as a RasPi, 
and the power footprint of the RasPi is very tempting compared to those 
netboxes or whatever you call them.


Uff, that's pretty much all :-)

Thanks for staying with me up to this point

Best Regards,

MK




Re: [bareos-users] Client-operated backup

2022-02-15 Thread Spadajspadaj

Hi Erich.

Well, that is some idea. As is probably rather obvious, the solution has 
to be simple, so I can tell my wife "click here" and that's it. Maybe I 
should make a script that would turn the service on/off.
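
Something like this trivial sketch, I guess (assuming the fd runs as a 
service named bareos-fd; on a Windows laptop it would be net stop/start 
instead):

#!/bin/sh
# toggle the file daemon so backups can only run when wanted
case "$1" in
  on)  systemctl start bareos-fd ;;
  off) systemctl stop bareos-fd ;;
  *)   echo "usage: $0 on|off" >&2; exit 1 ;;
esac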


Thanks for the hint.


On 15.02.2022 19:41, Erich Eckner wrote:

Hi Spadajspadaj,

maybe a hacky solution, but would work: only start the fd on the 
laptop, when you want to run a backup? (Or auto-start it, when the 
laptop is up for >2h and no video conference software was started, etc.?)


regards,
Erich

On Mon, 14 Feb 2022, Spadajspadaj wrote:

> Hello there.

> I'm wondering if there is any less-intrusive option for backing up 
"interminnently connected" hosts (like my wife's laptop :-)).


> At first I had a static schedule which would backup the client every 
day at 9am or so.


> That of course would make backups skipped if the laptop was not on 
at that precise time so I had to monitor that externally and sometimes 
spawn backup jobs manually. That was annoying.


> Then I found the "Run On Incoming Connect Interval" option which 
made my life much easier. Now if the "normal" backup job is skipped, 
the backup gets spawned as soon as my wife boots up the laptop. That 
makes it relatively well covered in terms of backup.


> But it also makes the laptop unusable (especially in terms of 
videoconference solutions or other low-latency uses) for the first 
half hour or so after each bootup.


> So the question is - is there any reasonable way to make the backups 
more client-controlled? So that the backup job is not spawned 
immediately after the connection from fd to dir, but can be delayed by 
the user or spawned directly by the user when desired?


> I probably could set up the bareos webui, but could I limit my wife's 
permissions to just that single client? It would still not be a very 
elegant solution (the necessity to log in to an "external" service is 
not very user-friendly) but it would be something.


> Any other paths to explore?

> I'm at 20.0.1 at the moment.



[bareos-users] Client-operated backup

2022-02-14 Thread Spadajspadaj

Hello there.

I'm wondering if there is any less-intrusive option for backing up 
"intermittently connected" hosts (like my wife's laptop :-)).


At first I had a static schedule which would backup the client every day 
at 9am or so.


That of course would make backups skipped if the laptop was not on at 
that precise time so I had to monitor that externally and sometimes 
spawn backup jobs manually. That was annoying.


Then I found the "Run On Incoming Connect Interval" option which made my 
life much easier. Now if the "normal" backup job is skipped, the backup 
gets spawned as soon as my wife boots up the laptop. That makes it 
relatively well covered in terms of backup.


But it also makes the laptop unusable (especially in terms of 
videoconference solutions or other low-latency uses) for the first half 
hour or so after each bootup.


So the question is - is there any reasonable way to make the backups 
more client-controlled? So that the backup job is not spawned 
immediately after the connection from fd to dir, but can be delayed by 
the user or spawned directly by the user when desired?


I probably could set up the bareos webui, but could I limit my wife's 
permissions to just that single client? It would still not be a very 
elegant solution (the necessity to log in to an "external" service is 
not very user-friendly) but it would be something.
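
My rough understanding is that it would take a dedicated Console with a 
restrictive Profile, something like the sketch below (names are invented 
and I haven't tested this):

Profile {
  Name = "laptop-only"
  Client ACL = laptop-fd
  Job ACL = backup-laptop
  Command ACL = run, status, wait, .jobs, .clients
}

Console {
  Name = "wife"
  Password = "secret"
  Profile = "laptop-only"
}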


Any other paths to explore?

I'm at 20.0.1 at the moment.



Re: [bareos-users] Config hint for backup / bareos rookie

2022-01-20 Thread Spadajspadaj
The point of spooling is to maintain a constant data rate for the 
storage device (even if the data arrives in bursts). It works best for 
jobs that have a low effective transfer speed - for example, incremental 
jobs over a huge filesystem where only a very small subset of files 
changes: the job checks every file and backs up only the changed ones, 
so the job itself takes quite a long time but the effective data rate is 
very low. If you have a source which can easily fill your storage 
device's bandwidth, you can skip spooling, since it would effectively - 
as you observed - increase the backup time: the backup data would first 
be transferred into your intermediate spool directory and only 
afterwards get copied to tape - that's one data transfer too many.
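
The switch itself is just one directive on the job (and, as mentioned 
below, it can be overridden per-run in a Schedule) - a sketch with an 
invented job name:

Job {
  Name = "fast-source-to-tape"
  JobDefs = "DefaultJob"
  Spool Data = no    # the source saturates the drive, write straight to tape
}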


On 20.01.2022 11:45, Mi Zi wrote:

hi again,

and now the result of the performance test with spooling:
it is SLOWER than without in my situation (LTO-8 drive)

so i can recommend that you check both settings (with and without the 
spool parameter)

with spooling parameter:
in my case i cancelled the job after 11h (2TB stored)
without spooling parameter:
roughly 9h for 3.7TB

greetings michael




bro...@mlds-networks.com wrote on Wednesday, 19 January 2022 at 
23:35:35 UTC+1:


Great, glad it worked.

Yeah, I forgot about that. I add that to my template and then
disable spooling in specific jobs, e.g. archive jobs where I know the
system is fast enough to saturate the tape drive.

Welcome to Bareos!


Brock Palen
bro...@mlds-networks.com
www.mlds-networks.com 
Websites, Linux, Hosting, Joomla, Consulting



> On Jan 19, 2022, at 5:26 PM, Mi Zi  wrote:
>
> hi Palen
> sorry for the delay ... but better late than never :)
> here is some comment regarding the spooling feature:
> this doesn't work for me until i set the following parameter on
Job level: see here Spool Data (Dir->Job)
> Spool Data = yes
> this setting can be overwritten in a Schedule: see here Run
(Dir->Schedule)
> the rest works like a charm at present time.
>
> greetings michael
>
> Mi Zi wrote on Saturday, 4 December 2021 at 19:40:49 UTC+1:
> hello Palen ... big thx for taking the time to share this information
... i will check this and comment on my experiences
>
> bro...@mlds-networks.com wrote on Saturday, 4 December 2021 at
16:34:19 UTC+1:
> I only use clients on Windows so have never run the director or
storage on Windows.
>
> By tape drive working I'm assuming you mean you used btape and
ran the tests included with it? If so you should have the
appropriate 'storage' config and the storage daemon working. If
not, here is my config for my stand-alone tape drive. I highly
recommend using spool if the spool location on your server is fast
enough to avoid drive start/stops; for LTO8 that probably means
something that can sustain 200MB/s+, preferably closer to 500MB/s,
but all LTO8 drives have speed matching. Spooling will avoid a lot
of start/stops.
>
> # goes in bareos-sd.conf
> Device {
> Name = T-LTO4
> Autochanger = no
> Drive Index = 0
> Media Type = LTO4
> Archive Device = /dev/nst0
> Device Type = Tape
> Maximum File Size = 200 # 20 GB
> Spool Directory = /mnt/spool/Q-LTO4
> Maximum Job Spool Size = 800
> Maximum Spool Size = 1600
> Drive Crypto Enabled = Yes
> Query Crypto Status = yes
> Maximum Concurrent Jobs = 1
> }
>
>
> # storage config goes in the director config and says how to
talk to the devices on the storage deamon
> Storage {
> Name = T-LTO4
> Address = myth
> Password = ""
> Device = T-LTO4
> Media Type = LTO4
> Maximum Concurrent Jobs = 1
> Auto Changer = no
> }
>
> # pool config goes in the director config and says how to treat
volumes. You don't need "next pool"; that's used for migration
/copy jobs. For your use case you probably want to adjust Volume
Use Duration so the next week the system asks for a new volume
(tape), so you're not stacking jobs across multiple weeks on one
tape. This is all your volume config also. Notice how it attaches
the volumes in this "pool" to the storage definition above, which
tells how to connect to the storage server etc.
> Pool {
> Name = LTO4
> Pool Type = Backup
> Recycle = no # Bareos can automatically recycle Volumes
> AutoPrune = yes # Prune expired volumes
> Job Retention = 12 months
> Volume Retention = 12 months
> Volume Use Duration = 3 weeks
> Next Pool = Offsite # consolidated jobs go to this pool
> Storage = T-LTO4
> }
>
> Now, because you only want a full once a week and nothing more
involved, you would control that with a Schedule. In your case it
would look like

  1   2   >