On 06.04.2019 09:25, muravey.novosibi...@gmail.com wrote:
On Thursday, November 30, 2017 at 11:27:43 UTC+7, oldt...@gmail.com
wrote:
On Friday, November 17, 2017 at 6:18:40 AM UTC-5, Nikitin Artem wrote:
Hello.
I’m executing a sequence of commands in the Fileset resource (File = “ls
Thanks for the config but if I'm not mistaken it won't let me do what I
mainly wanted to achieve with vchanger - it won't let me plug in and out
external drives. That's the whole point of using vchanger for me so I
can take one external disk and move it somewhere offline or even
off-site, Your
Are you sure the OVA file is a sparse one? AFAIR, with thin provisioning
the file size should correspond to the already-provisioned chunks of
data.
In other words, if inside the virtual machine you use 4G, you'd have a
4G file even though the maximum disk size is 30G. But
it
ed/VM1.ova
166G  ../restored/VM1.ova  <--- reverted to a fully allocated file.
On Friday, July 12, 2019 at 10:40:31 PM UTC+8, Spadajspadaj wrote:
Are you sure the OVA file is a sparse one? AFAIR, thin provisioning
means that the file size should sum up to already provisioned chunks of
da
Firstly, let me say that - from the security point of view - it's usually
the best idea to let the connection come from the director to the clients
(you usually connect from the safer zone to the less safe one).
Secondly -
On 04.08.2019 10:18, Roman Starun wrote:
But as soon as i change SDport to 8103, SD does not start.
Bareos ver 18.2.5, Centos 7.
Since you're using CentOS, there is a big chance that you have SELinux
enabled, and SELinux is preventing a bind to a non-labeled port. You have
to label port 8103
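If that's the case, labeling the port could be sketched like this (a sketch; the `bacula_port_t` type name is an assumption based on the policy shipped with CentOS - verify with the first command):

```shell
# See which ports the backup policy currently covers
semanage port -l | grep -i bacula
# Label TCP port 8103 so bareos-sd is allowed to bind to it
semanage port -a -t bacula_port_t -p tcp 8103
```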
On 16.07.2019 08:30, Andreas Rogge wrote:
"By turning on the *sparse* option, Bareos will specifically look for
empty space in the file, and any empty space will not be written to the
Volume, nor will it be restored."
(https://docs.bareos.org/Configuration/Director.html#fileset-resource)
Sorry.
On 15.07.2019 09:05, Andreas Rogge wrote:
[root@server export-domain]# ls -l ../restored/VM1.ova
-rw---. 1 root root 177794008064 Jul 12 15:07 ../restored/VM1.ova
[root@mgnt21 export-domain]# du ../restored/VM1.ova
166G  ../restored/VM1.ova  <--- reverted to a fully allocated file.
That's
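As an illustration of the apparent-size vs. allocated-size distinction (and one possible fix for a restore that came back fully allocated): GNU cp can re-punch the holes. The file names below are just examples:

```shell
# A 100 MiB sparse file: apparent size vs. blocks actually allocated
truncate -s 100M demo.img
ls -l demo.img   # apparent size: 104857600 bytes
du -k demo.img   # allocated size: (close to) 0
# Re-sparsify a fully allocated file: cp scans for zero runs and punches holes
cp --sparse=always demo.img demo-sparse.img
du -k demo-sparse.img
```

On the backup side, `Sparse = yes` in the FileSet options makes Bareos handle such files efficiently in the first place.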
On 12.11.2019 09:11, Jörg Steffens wrote:
On 12.11.19 at 07:59, Spadajspadaj wrote:
Hi there.
I added a laptop client to my bareos setup and everything runs mostly
fine except for the fact that if the job is scheduled for - let's say -
9pm, it tries to run, tries to connect to fd and since
This is not a good approach.
Firstly, if you mount a remote directory as a network drive (a letter
z:) for a particular user, the Bareos client runs in a different user's
context, so it doesn't see that mounted path.
Secondly even if you managed to force bareos to see the network
directory (by
On 03.12.2019 11:45, Adam Podstawka wrote:
Hi,
I have a little problem: we built a new backup system and wanted to use
our old tapes in it. But the tapes are all labeled already through the
old system.
A "label barcodes" doesn't add them to the pool, as they are already
labeled.
i can't get
Hi.
I have a setup where I back up a few Linux machines and one Windows
workstation.
All linux clients work fine, the Windows machine sometimes does work OK
but sometimes the jobs fail. Typical failed job run:
15-Oct 21:00 backup1-dir JobId 1589: Start Backup JobId 1589,
Firstly, sorry for replying personally to you, not to the list, before.
I hit "Reply" instead of "Reply to the list".
On 16.10.2019 09:11, Spadajspadaj wrote:
On 16.10.2019 09:01, Andreas Rogge wrote:
On 16.10.19 at 08:17, Spadajspadaj wrote:
15-Oct 23:52 backup1-dir Job
On that note - is there any "blessed" way to migrate an existing
installation from MySQL to Postgres? I can easily google some
not-very-official recipes for Bacula, but is there any advice for Bareos?
(and any more reasonable way to migrate than "export everything to csv
and pull that csv into
I'd add a thing or two to Jörg's answer.
Firstly, if you don't trust the backup provider, the whole backup setup
is highly questionable. Remember that even though you can encrypt the
file contents, you keep the filenames in clear text in the database, so
there is at least a vector of
Hi Sven.
I'd go for joining info from File and Path tables in bareos database
selecting by File.JobId. For size you'd need to decode LStat field of
File Table (I'm pretty sure I'd seen some decoders somewhere on the
Internet).
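For illustration, here is a minimal decoder sketch in Python. It assumes the LStat field is a space-separated list of stat values in Bacula-style base-64 (standard alphabet, no padding, most significant digit first) and that the 8th field is st_size - please verify the field order against your Bareos version before relying on it:

```python
BASE64_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def from_base64(token: str) -> int:
    """Decode one Bareos base-64 encoded integer (no padding, MSB first)."""
    value = 0
    for ch in token:
        value = value * 64 + BASE64_CHARS.index(ch)
    return value

def lstat_size(lstat: str) -> int:
    """Return st_size from an LStat string; field 8 (index 7) is assumed to be st_size."""
    fields = lstat.split()
    return from_base64(fields[7])

# "BAA" encodes 1*64*64 + 0*64 + 0 = 4096
print(from_base64("BAA"))  # 4096
```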
Best regards
MK
On 02.11.2019 08:47, Sven Gehr wrote:
hello
Hi there.
I added a laptop client to my bareos setup and everything runs mostly
fine except for the fact that if the job is scheduled for - let's say -
9pm, it tries to run, tries to connect to fd and since the laptop is
down, the job fails.
I was wondering how I can avoid those failed jobs
Firstly, it's perfectly normal that the hardware compression rate drops
when dealing with encrypted data. The compression ratio depends heavily
on the entropy of the input data, and good encryption ensures a uniform
distribution of the encrypted data, so there's no point in compressing
data _after_ encryption.
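The effect is easy to demonstrate with Python's zlib: highly redundant input shrinks dramatically, while random bytes (which is what good ciphertext looks like statistically) don't compress at all:

```python
import os
import zlib

redundant = b"A" * 100_000          # low entropy: compresses extremely well
random_like = os.urandom(100_000)   # high entropy: a stand-in for ciphertext

print(len(zlib.compress(redundant)))    # a few hundred bytes at most
print(len(zlib.compress(random_like)))  # slightly larger than the input
```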
Apache log shows HTTP transactions, not the bareos logs. So the "client"
in this context is the computer with your web browser. Hence the IP has
nothing to do with the computer you want to restore.
Regardless of the underlying cause which I don't know, the code you
provided shows that PHP
With Bareos it's usually not a question whether it is possible but how
to do it ;-)
But seriously - since WebDAV is not a storage as such, just a method of
access, you have two options. Either you have access to the server from
which the DAV share is served and you back it up locally. But I
Bareos is very flexible in terms of preparing a job. You can run a
"pre-job" script. It can be run either on the server's side or the
client's side. I suppose you'd prefer the client's side in this case.
https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_RunScript
And you can fail the
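A client-side pre-job script could be sketched like this (the script path is made up; `Fail Job On Error` makes a non-zero exit status fail the whole job):

```
Job {
  Name = "backup-client1"          # example job
  ...
  RunScript {
    RunsWhen = Before
    RunsOnClient = yes
    FailJobOnError = yes
    Command = "/usr/local/bin/pre-backup-check.sh"
  }
}
```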
Remember that in default installation the bareos director reaches to the
fd to initiate the backup job but then the fd connects to sd to send the
backup data. If you don't allow for incoming connections (which is
understandable in case of i.e. DMZ-located clients), you need to use
passive
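A passive client could be sketched like this on the director side (client name and address are examples); with `Passive = yes` the Storage Daemon opens the connection to the FD instead of the FD dialing out:

```
Client {
  Name = "dmz-client-fd"                # example
  Address = dmz-client.example.com
  Password = "secret"
  Passive = yes   # SD connects to FD; FD opens no outgoing data connection
}
```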
If you change settings in the config file they will be applied to new
volumes only, as you already noticed. If old volumes are purged, only
their status is changed; they are not deleted and created anew. So you
have to manually update the volumes using the bconsole "update" command.
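In bconsole that could look like the following sketch (volume name and values are examples; `update volume` without arguments walks you through an interactive menu instead):

```
*update volume=Full-0001 volretention=30days
*update volume=Full-0001 maxvolbytes=42949672960
```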
On 20.01.2020 13:23,
The question is whether the job output shows Compression. A volume is a
storage unit. It may be a fixed file, it may be a tape. We don't know
your configuration here. I, for example, have fixed-size 40G file-based
volumes, so the volume size doesn't change but the jobs can be bigger or
smaller
Signature = SHA1
Compression = GZIP6
}
File = "/home/bee"
File = "/media/windows/Users/bee"
}
}
to
FileSet {
Name = "Full_Laptop_Set"
...
bee
On 1/1/20 11:35 AM, Spadajspadaj wrote:
On 01.01.2020 17:24, aeronex...@gmail.com wrote:
I have upda
On 01.01.2020 17:24, aeronex...@gmail.com wrote:
I have updated my fileset to include a new exclude statement. I have
restarted Bareos (including rebooting the server). Unfortunately, Bareos
continues to use the old version of the fileset definition for the
backup. I do not find an update
at the time and not shown in help; I found
it in sources) it tries to copy the rows.
We'll see how it goes.
On 11.03.2020 20:26, Spadajspadaj wrote:
Hello.
I've been trying to migrate my setup from MySQL to Postgres using the
bareos-dbcopy utility. It is almost working. Almost, because it copies
It seems that with -l 1 (I have only some 5 million entries in the File
table) the migration completed and now my instance is running OK on
postgres.
On 12.03.2020 08:25, Spadajspadaj wrote:
BTW, If I'm seeing correctly, the dbcopy tool is inserting entries
with INSERT INTO even though
Every way is safe as long as you prepare for it :-)
But seriously, you have two main options
1) Do a database dump and restore to a bigger server. (the "logical
migration")
2) Stop the postgresql service, make a new filesystem on a bigger
device, move the database files there and mount the
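Option 1, the logical migration, might be sketched as below (database and service names are examples; stop the director first so nothing writes to the catalog during the dump):

```shell
systemctl stop bareos-dir
# On the old server: dump the catalog in custom format
pg_dump -U postgres -Fc bareos > bareos.dump
# On the new, bigger server: restore into a freshly created "bareos" database
pg_restore -U postgres -d bareos bareos.dump
systemctl start bareos-dir
```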
Hello.
I've been trying to migrate my setup from MySQL to Postgres using the
bareos-dbcopy utility. It is almost working. Almost, because it copies
only one record from each table.
I ran it with strace and it seems that it's not me, it's him ;-)
Strace excerpt from the File table
Unless you really need the automatic failover to the next storage, you
can also set up a vchanger. That way you can also control a fixed number
of fixed-size volumes. I prefer this approach to dynamically created
media files, but YMMV.
On 21.04.2020 16:34, Brock Palen wrote:
It is possible to
Hi.
Just to make sure that I understand my config correctly, because the
manual is a bit unclear on this.
What I wanted was to make a schedule that runs every other week on a
given day of the week (i.e. every second Saturday, or every third Thursday).
The examples of the modulo scheduler use
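If I read the docs right, an "every second Saturday" schedule with the modulo syntax would be sketched like this (untested; `w01/w02` reading as "starting at week 1, every 2 weeks"):

```
Schedule {
  Name = "EveryOtherSaturday"
  Run = Full w01/w02 sat at 21:00
}
```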
https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_Enabled
On 26.03.2020 10:07, Martin Krämer wrote:
Hi All,
via bareos-webui -> "Jobs "-> "Actions" I can disable individual jobs.
As said in the action button comment "Disabling is a temporary
operation until the director
Firstly, you cannot have Bareos delete files without dirty tricks. It
can truncate volumes on purge, as someone already pointed out.
If I were you and wanted to have a fixed number of backups regardless of
any other parameters, I'd go for Maximum Volume Jobs = 1 and appropriate
retention and
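Such a pool could be sketched as follows (all numbers are examples): one job per volume, a fixed cap on volumes, and recycling of the oldest volume once the cap is reached:

```
Pool {
  Name = "FixedCount"
  Pool Type = Backup
  Maximum Volume Jobs = 1        # exactly one backup job per volume
  Maximum Volumes = 14           # never more than 14 volumes = 14 backups
  Volume Retention = 1 day       # short, so the oldest volume can be pruned...
  Auto Prune = yes
  Recycle = yes
  Recycle Oldest Volume = yes    # ...and reused when a new volume is needed
}
```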
On 21.04.2020 17:44, Erich Eckner wrote:
On Tue, 21 Apr 2020, Spadajspadaj wrote:
Unless you really need the automatic failover to the next storage,
you can also set up a vchanger. That way you can also control a fixed
number of fixed-size volumes. I prefer this approach to dynamically
In short, yum history rollback is not a good way to do anything other
than very small package changes, i.e. downgrading from version 1.4.27 to
1.4.26 of some package, or completely removing a package you just
installed for testing along with all its dependencies.
Longer explanation:
Yum history
On 29.04.2020 14:09, Andreas Rogge wrote:
On 29.04.20 at 13:22, Valentin Dzhorov wrote:
Can anyone let me know what am I doing wrong here? Thank you all in advance!
That really depends on where you see the "Encryption: None" message.
In Bareos' context encryption can mean three different
On 12.05.2020 11:34, 'DUCARROZ Birgit' via bareos-users wrote:
2) I thought that incremental level is writing into
incremental-Volume, full level is writing into full-Volume etc. But it
is not. Why?
Depends how your job is configured.
You might just use Pool directive to specify a pool
Erich
On Sun, 26 Apr 2020, spadaj wrote:
No problem, mate.
Hope this is of some help. If you have any questions, don't hesitate
to ask. I can't guarantee I'll be able to give reasonable advice but
I'll try :-)
Cheers.
On 25.04.2020 at 21:53, Erich Eckner wrote:
Hi spadajspadaj,
I never c
On 19.05.2020 10:57, Miguel da Silva wrote:
Hello,
I have a massive Bareos Setup and one of my clients (Let's name him
"client1") has had backup errors or slow backups.
So i investigated and found the Problem on client1.
Now i want to remove all old traces of data of client1 from the
On 19.05.2020 13:08, Miguel da Silva wrote:
There are two approaches:
1) The bareos way - purge the jobs associated with the given client and
let bareos do its job. And that's the approach I'd recommend. Purge the
jobs from the given client, make sure you have "purge volume
The bareos-fd runs as the root user by default so it should have access,
but there can be many different issues with the script itself.
One thing is SELinux - is it on? It might mess things up.
Second one is PATH variable. It might not be what you think when you're
executing the script so some of
Deleted, as such - no. You can use ActionOnPurge = Truncate to make
bareos shrink the media files to 0 bytes on purge, but the file will
still be there.
You'd have to use some external script to delete media and delete
related files from disk.
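For reference, the truncate part could be sketched like this (pool and storage names are examples). Setting the directive only marks the volumes; the shrinking itself happens when the purge action is run:

```
# In the pool resource:
Pool {
  Name = "File"                  # example
  Action On Purge = Truncate
}
# Then, from bconsole, actually truncate the purged volumes:
#   *purge volume action=truncate allpools storage=File
```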
On 03.09.2020 20:17, stefan.harb...@gmail.com
You have the messages configuration but do you have it attached to the
appropriate job resource?
On 23.09.2020 12:19, 'birgit.ducarroz' via bareos-users wrote:
Hi, no one can help me?
On Monday, September 14, 2020 at 11:32:42 UTC+2, birgit.ducarroz wrote:
Hi,
I would like to
ueued Duplicates = yes
RunAfterJob = "/bin/mt -f /dev/tape/by-id/scsi-350050763121460f3-nst
eject"
}
On 23/09/20 19:54, Spadajspadaj wrote:
You have the messages configuration but do you have it attached to
the appropriate job resource?
On 23.09.2020 12:19, 'birgit.ducarroz' via ba
Damn, my bad. I looked hastily into update and was pretty sure it worked
for jobs the same way it does for volumes.
Apparently it does not. So you'd have to set retention period on whole
volumes (here I'm pretty sure you can do that; I did it myself ;->).
Sorry for the confusion.
On
On 31.07.2020 21:18, Ariel Esteban Salvo wrote:
Hi!
One of our clients was hit by a ransomware attack, Bareos did its job
and we were able to rebuild most of what was lost.
I'd like to keep the jobs I used to restore for a while longer (just
in case)
What are my options?
I've seen
Two things.
RAID (or any other replication) is not a backup solution!
Archiving is not backup.
On 06.08.2020 19:34, Oleg Volkov wrote:
This system does not look like it suits any usual backup system. Your
FULL is terrible and a restore will be awful.
Use the storage ability. Make DR site and
As I wrote earlier, this looks more like archiving plan, not a backup
one (or a combination of backup and archiving). But more to the point -
in case of backups you have to have a verification plan and periodical
restore tests. In case of archiving you need to have a verification plan
(i.e.
On 30.06.2020 07:24, Erich Eckner wrote:
Device "vchanger-1-0" (/var/spool/vchanger/vchanger-1/0) is waiting
for sysop intervention:
Volume: vchanger-1_0005_0153
Pool: Incremental
Media type: Offsite-File
Device is BLOCKED waiting for mount of volume
Hi Birgit.
To be honest, I fail to see what would be the point of deleting a pool
from a script. If you need to delete a pool, you do it once,
interactively and everything's good.
Of course you can do a delete pool from bconsole but if you don't delete
it from the configuration it'll get
Can you tell me how to do a little guide? I am a novice in this,
in what format do I have to mount the disk and what privileges do
I have to give it?
I have no clue what your configuration looks like but I suppose
you've already mounted the disk (at least that's what the
Apart from all the possible other prerequisites, you have to remember
that the paths have to be accessible from the context of the user from
which the bareos-fd is running. So if you - for example - mount the NAS
share as a g: drive from your normal user, it won't be visible to the
bareos-fd
As you can see, the Storage Daemon can't create files on the disk. Since
the device is formatted with ext4 filesystem you need to set appropriate
ownership and access rights to the storage directory _after you've
mounted it_.
On 16.06.2020 12:59, lucas wrote:
Hi,
I'm trying to change the
On 16.06.2020 13:31, lucas wrote:
Can you tell me how to do a little guide? I am a novice in this, in
what format do I have to mount the disk and what privileges do I have
to give it?
I have no clue what your configuration looks like but I suppose you've
already mounted the disk (at
automation solution (for example - ansible) for
installing a server from scratch instead of own script.
On 16.06.2020 12:00, DUCARROZ Birgit wrote:
Hi Spadajspadaj,
First of all, thank you for your response.
I created a script which completely installs my server. The script is
meant to ease a
On 09.06.2020 18:16, Jörg Steffens wrote:
On 09.06.20 at 14:58, 'birgit.ducarroz' via bareos-users wrote:
Did you try
bconsole
* relabel
?
Relabel will only work on empty/purged volumes. By relabeling a tape,
data will be lost.
AFAIK it is only possible to append data to a physical tape, not
If you don't specify retention periods, they will get set to default
values, so it's not a proper solution. I'd rather go and set them to
some insanely huge value.
But of course it will result in ever-growing storage demand for the
catalog database since no jobs/files/volumes will be getting
I would be, however, cautious about possible scenarios where a node
breaks and fails over to the other server - for example - in the middle
of a backup job. Such scenarios would need some testing so you know what
to expect and how to handle such a situation.
On 05.06.2020 09:00, Oleg Volkov
manually as usual for failed job.
K.O.
On Friday, June 5, 2020 at 1:32:40 PM UTC+3, Spadajspadaj wrote:
I would be, however, cautious about possible scenarios where a
node breaks and fails over to the other server - for example - in
the middle of a backup job. Such scenarios would need
On 11.06.2020 14:29, Kai Zimmer wrote:
Hi,
in former times i used bareos with a mysql database backend. However
it became too slow and i switched to a secondary postgres catalogue. I
need to keep the mysql database as a history though.
Now i'm switching from Ubuntu 16.04 (mysql 5.7) to
Hi there.
I'm wondering whether there is a reasonable way to prevent bareos from
scheduling jobs from the same client in quick succession.
Here's what I mean. We have a bareos setup with a single tape drive. The
jobs are scheduled daily with Inc/Diff/Full schedule. If we fail to
change tape as
On 09.06.2020 11:30, Andrei Brezan wrote:
On 09/06/2020 11:24, Spadajspadaj wrote:
Hi there.
I'm wondering whether there is a reasonable way to prevent bareos
from scheduling jobs from same client in quick succession.
Here's what I mean. We have a bareos setup with a single tape drive
I believe you'd have to have two different jobs. You'd have to create a
disk-based storage, first do a backup job there, then have a migration
job to a tape pool.
I'm thinking of similar setup myself since I have sometimes problems
with getting to the server to change tapes so I would
Well, not everyone has long enough tapes to always do full backups ;-)
After all, the whole concept of Inc and Diff backups didn't come from
nowhere.
On 07/01/2021 13:10, 'DUCARROZ Birgit' via bareos-users wrote:
Hi,
Another possibility is not to spool and not to backup incremental nor
to see,
how long it takes to handle each directory in a file set?
On 07.01.2021 at 14:08, Spadajspadaj wrote:
Of course. It's all a matter of personal preference and personal needs.
There is one caveat though about full jobs and backup speed. It's all
ok if you're backing up just files and have
that this is a good way
to backup on single tapes.
See the following thread:
https://groups.google.com/g/bareos-users/c/g53BNdTat2s
Regards,
Birgit
On 07/01/21 13:12, Spadajspadaj wrote:
Well, not everyone has long enough tapes to always do full backups
;-) After all the whole concept of Inc
bareos-fd.conf is a configuration file for bareos-filedaemon. Bareos
filedaemon is the program running on the client which you are backing up.
As per the documentation (which you already found), all data is
encrypted on client prior to being sent to server (or to Storage Daemon,
to be
lds-networks.com
Websites, Linux, Hosting, Joomla, Consulting
On Dec 21, 2020, at 8:21 AM, Spadajspadaj wrote:
bareos-fd.conf is a configuration file for bareos-filedaemon. Bareos filedaemon
is the program running on the client which you are backing up.
As per the documentation (which you already fo
It's almost obvious if you look at possible medium states but to give
you a verbose answer - the media can be read from any point but can only
be appended at the end.
So if any job is being pruned/purged/deleted, it's just being
"forgotten" by the database but is still present on the media
First of all, you didn't read the docs carefully.
If you say 'File = "|command"', said command will be run by the
director, on the director machine and - what's important if it's the
same machine - in the context of the bareos-dir user.
So if you want to run the command on the client, you have to give
Hi.
I wanted to give S3 storage plugin a try. For now just to see how it
works, but maybe to use it in production one day. But I have completely
no idea how to estimate S3 usage and thus associated costs. I admit I am
no S3 expert at the moment so it would be an opportunity to learn about
S3
On 18/01/2021 13:04, Spadajspadaj wrote:
On 18/01/2021 11:28, Brock Palen wrote:
Disclaimer I have not used s3 with bareos but done many cloud
calculations.
Few things to think about using cloud.
Are you running your SD in the cloud?
Are your backup clients in the cloud?
If not what’s your
On 18/01/2021 11:28, Brock Palen wrote:
Disclaimer I have not used s3 with bareos but done many cloud
calculations.
Few things to think about using cloud.
Are you running your SD in the cloud?
Are your backup clients in the cloud?
If not what’s your bandwidth? It will impact your backup and
https://docs.bareos.org/Configuration/Director.html#fileset-options-resource
|Mtime Only|
Type:yes|no
If enabled, tells the Client that the selection of files during
Incremental and Differential backups should be based only on the
st_mtime value in the stat() packet. The
In case of the filesystems I can think of as interesting for myself:
1) The Windows FD has VSS support
2) In case of ZFS/LVM you can run pre/post scripts creating a snapshot
and mounting it for reading, then unmounting and removing the snapshot
after the backup.
I suppose you can pack it into a python
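An LVM flavour of option 2 might be sketched like this (volume group, snapshot size and mount points are all made up):

```
Job {
  ...
  RunScript {
    RunsWhen = Before
    RunsOnClient = yes
    FailJobOnError = yes
    Command = "sh -c 'lvcreate -s -L 5G -n home_snap /dev/vg0/home && mount -o ro /dev/vg0/home_snap /mnt/snap'"
  }
  RunScript {
    RunsWhen = After
    RunsOnClient = yes
    Command = "sh -c 'umount /mnt/snap && lvremove -f /dev/vg0/home_snap'"
  }
  # ...and back up /mnt/snap in the FileSet instead of the live /home
}
```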
On 06.07.2021 14:39, Christian Svensson wrote:
Hello,
On Tue, Jul 6, 2021 at 2:33 PM Spadajspadaj wrote:
1) Windows FD has VSS support
That's interesting. It would be cool to have feature parity I suppose.
True, but there are many different filesystems on unices...
2) In case of ZFS
I'd start by checking whether the media files are really created on disk.
Then I'd run the daemon with higher debug level and capture the output.
On 12.07.2021 16:24, Rodrigo Jorge wrote:
Hello Folks,
This is a BUG or a CONFIG error ?
Regards,
Rodrigo L L Jorge
On Sat., Jul 10, 2021
In general, Android backup is - to put it delicately - a completely
screwed up thing.
Especially if you don't have your phone rooted.
On 05.07.2021 19:35, Erich Eckner wrote:
Hi,
I was wondering, if it was possible to back up an Android phone with
bareos. I searched the playstore, but
On 10.07.2021 18:51, Erich Eckner wrote:
> > In general, Android backup is - to put it delicately - a completely
screwed up thing.
> > Especially if you don't have your phone rooted.
> Yes, I imagined it would be difficult/impossible to back up a
complete phone, when it's not
On 10.07.2021 09:10, Erich Eckner wrote:
On Tue, 6 Jul 2021, Spadajspadaj wrote:
> In general, Android backup is - to put it delicately - a completely
screwed up thing.
> Especially if you don't have your phone rooted.
Yes, I imagined, it would be difficult/impossible to b
On 12.03.2021 19:22, Sergey Zaguba wrote:
for instance
1 host - Bareos director
2 host Bareos-st
For example, host number one burned out -
let's consider two situations
1) we have a backup of the directory and the director's bareos database
2) we do not have a backup of the directory and
Yes, there is a plugin for the storage daemon to store data in S3. See the docs.
On 16.03.2021 17:02, Kaushal Shriyan wrote:
Hi,
Is there a way to push the backup data to the AWS S3 bucket using the
BareOS utility? For example, if I backup both configurations and data
directory of GitLab SCM
Of course it's always best to have the whole environment at a coherent,
current version, but it's usually quite OK to have the director slightly
"ahead" of the clients. It's the other way around that easily causes
problems - if you have FDs newer than the director, you might run into
problems.
On
I'm not sure what you're trying to achieve, but if you want to have a
single pool spanning over several directories, you might try to look
into vchanger and simulate a changer by "switching" directories.
On 02.03.2021 16:04, lst_ho...@kwsoft.de wrote:
Hello,
we try to use some file locations
+ offline=0
+ offline_sleep=0
+ load_sleep=0
Have you tried playing with those values?
It seems a good place to start.
By default they're defined in /etc/bareos/mtx-changer.conf
On 07.03.2021 22:16, 'tilmang...@googlemail.com' via bareos-users wrote:
1) I gave it a try. Using the mtx-changer
I haven't used Telegram in my life, but it seems possible (and should be
quite easy) to use a Linux Telegram CLI client
https://github.com/vysheng/tg
On 08.03.2021 20:38, Matheus Inacio wrote:
Hello!!
Has anyone integrated bareos with the telegram, to receive job status ??
thanks
If mtx runs fine, I'd try to run the mtx-changer script in bash "debug
mode" (bash -x mtx-changer...) and see what it is that mtx hangs and
times out on.
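For the record, mtx-changer takes changer-device, command, slot, archive-device and drive-index in that order, so a traced run might look like this (device paths and slot number are examples):

```shell
bash -x /usr/lib/bareos/scripts/mtx-changer /dev/sg4 load 3 /dev/nst0 0
bash -x /usr/lib/bareos/scripts/mtx-changer /dev/sg4 loaded 3 /dev/nst0 0
```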
On 27.02.2021 12:21, 'tilmang...@googlemail.com' via bareos-users wrote:
Dear spadaj
I forgot to mention that mtx runs OK. It lives
I somehow missed the original email.
I suspect that with many, many small files you're mostly limited by the
source filesystem (and whole system) performance more than by the backup
itself. Regardless of the method of deciding whether a file needs
backing up, its metadata still has to be read
On 15.02.2021 08:04, 'Frank Kirschner | Celebrate Records GmbH' via
bareos-users wrote:
Finally, can I discuss the following example:
I have to archive from 3 departments audio, video and print files as
"cold data",
stored on tape:
First, I will copy all audio files to a local
Hi.
I was a bit surprised to find out that I couldn't reliably install 19.2
fd on current (all updates applied) Win10 Pro machine. I used to install
it on many servers and never had any problems but when I tried to
install it on my wife's laptop, it said (at the end of the installation)
that
On 15.02.2021 09:27, 'Frank Kirschner | Celebrate Records GmbH' via
bareos-users wrote:
Secondly, "First, I will copy all audio files to a local hard
disk on the same host, where the tape is connected directly, because
copying files over the network from a different host is slower than
In case of multi-location setup you need to think about ways of limiting
access and connection direction.
I have a "reverse" setup - I needed passive clients so I can initiate
connections from director/sd _to_ fd. You might need the opposite, as I
see, so it's pretty standard.
There is
p 24, 2021 at 2:36 PM Spadajspadaj wrote:
In case of multi-location setup you need to think about ways of
limiting access and connection direction.
I have a "reverse" setup - I needed passive clients so I can
initiate connections
You can query the database for media details. You can get both the used
capacity and the overall capacity in bytes and blocks, but be aware that
you also need to take into account the status of the volume. You can
only write more jobs to a tape if it's in "Append" status.
example query (using my
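One possible shape of such a query (column names from the stock Media table; adjust to your schema version):

```
SELECT VolumeName, VolStatus,
       VolBytes,       -- bytes already written to the volume
       MaxVolBytes     -- configured limit (0 = unlimited)
  FROM Media
 WHERE VolStatus = 'Append'
 ORDER BY VolumeName;
```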
https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_Level
"The File daemon (Client) decides which files to backup for an
Incremental backup by comparing start time of the prior Job (Full,
Differential, or Incremental) against the time each file was last
“modified” (st_mtime)
Hello there.
I've been using Bareos for backups for quite some time at home. At the
moment I have a Gigabyte Brix box with 4G RAM and external USB-connected
swappable disks set up with vchanger so I have off-line backups.
Everything runs great.
But.
The Brix started failing lately and I'm
wrote:
Hi Spadajspadaj,
maybe a hacky solution, but would work: only start the fd on the
laptop, when you want to run a backup? (Or auto-start it, when the
laptop is up for >2h and no video conference software was started, etc.?)
regards,
Erich
On Mon, 14 Feb 2022, Spadajspadaj wrote:
> Hel
Hello there.
I'm wondering if there is any less-intrusive option for backing up
"intermittently connected" hosts (like my wife's laptop :-)).
At first I had a static schedule which would backup the client every day
at 9am or so.
That of course would make backups skipped if the laptop was
The point of spooling is to maintain a constant rate of data for the
storage device (even if it's done in bursts). It works best for jobs
that have a low effective transfer speed (for example - for incremental
jobs over a huge filesystem where only a very small subset of files
changes - the job