Hi,
I can't reproduce the behaviour you are describing. Even when a manually
created snapshot
existed at backup time, the virtual disks are created correctly with the
given new vmname.
The version on https://download.bareos.org/current/ is now 23.0.3~pre104, could
you please update
and
Hi,
thanks for reporting this and your proposed PR #1796.
There already was a change to address this, see
https://github.com/bareos/bareos/blob/5037b212391da1c4af45907f11a05d07865cd08f/core/src/plugins/filed/python/vmware/bareos-fd-vmware.py#L4071-L4092
but it does not seem to work when using
Hi,
there are examples in the documentation at
https://docs.bareos.org/TasksAndConcepts/Plugins.html#vmware-plugin
to restore to a different VM name by passing plugin options like the following
on restore:
python:restore_datastore=datastore2:vmname=testvm1restored:restore_powerstate=off
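Just to illustrate the format (this is a hedged sketch, not the plugin's actual
option parser, which may treat quoting and escaping differently), such a
colon-separated options string decomposes like this:

```python
# Hedged sketch: decompose the colon-separated plugin options string from
# the docs. Not the plugin's real parser; quoting/escaping may differ.
opts_str = ("python:restore_datastore=datastore2"
            ":vmname=testvm1restored:restore_powerstate=off")

parts = opts_str.split(":")
plugin_prefix = parts[0]                           # "python"
options = dict(p.split("=", 1) for p in parts[1:]) # key=value pairs

print(plugin_prefix, options["vmname"], options["restore_powerstate"])
```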
May it
Hello Ramil,
there should be an error message logged after the "Get CBT failed:" which is
what the plugin got back from the failed API call
when it requested the CBT data.
One cause for that issue can be that the VM was restored, which means that the
disk was completely overwritten. In that
Hello,
looking at the plugin code
https://github.com/bareos/bareos/blob/6c6568cd8b022c48429f8465b6d82a0bc5df50c0/core/src/plugins/filed/python/vmware/BareosFdPluginVMware.py#L3202-L3206
the cause is that vim.vm.device.VirtualSerialPort is currently not covered by
the plugin.
So this does not
Hi,
yes, you are right. When using a public cloud storage service, Always
Incremental is probably not suitable.
However, when running object storage on premises, for example a ceph cluster
providing S3 compatible storage,
it still may be suitable.
Regards,
Stephan
On 4/1/23 16:53, Tang
Hello,
there's not much documentation about this indeed, at
https://docs.bareos.org/IntroductionAndTutorial/BareosWebui.html it says:
Plugin options
Provide a plugin options string here if required. The field is only shown
if a fileset using a plugin is detected.
The "Merge all client
Thanks for reporting this. It will be fixed with
https://github.com/bareos/bareos/pull/1352
Regards,
Stephan
On 1/9/23 23:54, TDL wrote:
Hi
Since I upgraded to Bareos 22 on Ubuntu 22.04.1, I was unable to backup VMs.
The backup failed with the following error:
Fatal error: bareosfd:
Hi,
note that the error message says "change UUID"; this has nothing to do with the
UUID of the VM.
The change UUID refers to changed block tracking. This error will occur
when the VM was
completely recreated, e.g. redeployed from a template or otherwise. In that
case an incremental
backup will
On 4/5/22 19:08, Jeffery Banning wrote:
I have some VMs that have multiple drives defined. One of the drives is 2TB
and used for temporary data when executing long running simulations. I would
like to exclude that drive from the backup process. Can I tell the Bareos
plugin to only
Hi Bob,
the certificate is ok, it's not expired:
$ curl -v https://download.bareos.org/
* Trying 185.170.114.121:443...
* Connected to download.bareos.org (185.170.114.121) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile:
Hello,
you could try to add an index for PathId, JobId on the PathVisibility table
like this:
su - postgres
echo "CREATE INDEX pathvisibility_pjid ON PathVisibility (PathId, JobId); ANALYZE PathVisibility;" | psql bareos
I think that could accelerate this query, especially if you not only
, see
https://docs.bareos.org/Appendix/Howtos.html#section-migrationmysqltopostgresql
Regards,
Stephan
On 5/11/20 8:31 PM, Stephan Duehr wrote:
> Hi,
>
> indeed bscan does not recover RestoreObject data correctly. I've also
> verified that this is not an oVirt plugin specific problem,
Hi,
indeed bscan does not recover RestoreObject data correctly. I've also verified
that this is not an oVirt plugin specific problem; any
plugin that uses restore objects would be affected.
Regards,
Stephan
On 5/2/20 12:23 PM, levindecaro wrote:
> I managed to know the root cause of this
Hi,
what kind of oVirt storage is used? Block storage (like iSCSI or FC), or
file storage (NFS etc.)?
I've observed this on block storage; space does not get released there after
deleting the snapshot.
Regards
On 4/29/20 3:52 PM, Team_Orbit wrote:
> Hello everyone, we noticed that the
Hello,
you probably found a bug in the plugin; technically it's not necessary to
connect to vCenter for
restoring to a local VMDK file.
If you don't mind, please create a bug report on https://bugs.bareos.org/
Regards,
Stephan
On 02/27/2018 12:51 PM, Stefan wrote:
> My vCenter server is a Virtual
Hi,
that looks like the query being run for accurate backup.
I wonder why it's slower after the upgrade, as the join involves one table fewer.
This kind of problem with accurate has also been seen with the Bareos 16.2 DB
schema
in MySQL.
Please check if the indexes on the File table match the
Hello,
does this VM have multiple disks?
Are you on vSphere 6.5?
However, at least against vSphere 5.5 I can't reproduce this error, the plugin
works with VMs with multiple disks, and also when spanning multiple storage
volumes.
Regards,
Stephan
On 11/20/2017 09:52 AM, OJK wrote:
> Hello,
>
would be faster. Maybe that changed with 6.5. It was on VMFS6 also, which may
matter.
Maybe you want to share your results.
Regards,
Stephan
On 11/16/2017 05:36 PM, Stephan Duehr wrote:
> Hello,
>
> sorry, my fault. There were some commits missing on
> https://github.com/bareos/bareos
Hello,
sorry, my fault. There were some commits missing on
https://github.com/bareos/bareos-vmware/tree/bareos-17.2
We are currently building new packages and will publish them as soon as
possible.
Regards,
Stephan
On 11/16/2017 03:47 PM, OJK wrote:
> Greetings,
>
> I've just tried the
> to work with ESXI 6.5?.
>
>
> Best regards,
>
>
> On 27/09/17 22:15, Stephan Duehr wrote:
>> Hi,
>>
>> I don't know if VDDK 5.5.4 is supposed to work with vSphere 6.5; maybe
>> bareos_vadp_dumper must
>> be adapted and built against VDDK
Hi,
I don't know if VDDK 5.5.4 is supposed to work with vSphere 6.5; maybe
bareos_vadp_dumper must
be adapted and built against VDDK 6.x to get it working with vSphere 6.5.
There are some more errors, like
Log: Cnx_Connect: Error message:
Warning: [NFC ERROR] NfcNewAuthdConnectionEx: Failed
Hi Douglas,
I would expect one DB connection per running job as normal.
How do you have Maximum Concurrent Jobs configured?
I'm currently not sure if or why the DB connection is already opened at
scheduled time when
Maximum Concurrent Jobs is less than the number of scheduled jobs.
Regards,
Hi,
there are no packages for Debian 9 on download.bareos.org.
This looks like you are trying to install the Debian 8 packages from
download.bareos.org on Debian 9,
that's not going to work.
Is your system a new installation of Debian 9 or did you upgrade a Debian 8
system?
Regards,
Stephan
Hi,
if you can reproduce a problem with glusterfind by running it manually, I'd
recommend reporting it
upstream: go to https://www.gluster.org/community/ and click "Report a problem
in Gluster".
I was able to reproduce the error of the Bareos gfapi-fd plugin with
glusterfind when the
Hi,
1) Maybe some permission is missing somewhere in the path. Try like
this to check whether the user bareos can really access the directory:
su -s /bin/bash - bareos
touch /run/media/patricio/500Gb/BareosBackup/testfile.txt
2) I think you need
Archive Device = /home/patricio/respaldos
Mount Command = mount
Hello,
you could define the Script in each Job, using the "Run Before Job" and
"Run After Job" shortcuts to save some lines if you want, for example
Job {
Name = 00_TestBackupJob_vm_testvm
JobDefs = "00_TestBackupJobDefs"
Run Before Job = "/root/bin/testjob before testvm \"%j\""
Run
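Completed as a hedged sketch (the script path and job names are the placeholders
from above; the second shortcut line is illustrative, not taken from a real
setup), the pattern would look like:

```
Job {
  Name = 00_TestBackupJob_vm_testvm
  JobDefs = "00_TestBackupJobDefs"
  Run Before Job = "/root/bin/testjob before testvm \"%j\""
  Run After Job = "/root/bin/testjob after testvm \"%j\""
}
```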
Hi,
are you using ReFS on the Windows system that is being backed up?
According to https://en.wikipedia.org/wiki/Comparison_of_file_systems
most filesystems have a limit of 255 bytes per file/directory name,
but ReFS allows 255 UTF-16 characters.
On Linux that would expand to more than 255 bytes; looks like
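As a hedged illustration of that expansion (the euro sign is just an arbitrary
character that needs 3 bytes in UTF-8; the exact characters in the real
filenames will differ):

```python
# A name of 255 UTF-16 characters is valid on ReFS, but once encoded as
# UTF-8 on Linux it can exceed the common 255-byte name limit.
name = "\u20ac" * 255                 # 255 characters (euro sign)
utf8_len = len(name.encode("utf-8"))  # each euro sign is 3 bytes in UTF-8

print(len(name), utf8_len)            # 255 characters -> 765 bytes
```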
Hi,
looking at the code:
https://github.com/bareos/bareos/blob/master/src/plugins/filed/gfapi-fd.c#L1535
https://github.com/bareos/bareos/blob/master/src/filed/fd_plugins.c#L2585
it calls accurate_mark_all_files_as_seen(jcr), even before reading anything
from the file list generated by
Hello,
the Ubuntu 14.04, 16.04, Debian 7/8 repos on
http://download.bareos.org/bareos/release/16.2/
are now signed with digest algorithm SHA256.
The apt warning message "... uses weak digest algorithm (SHA1)" should no
longer appear.
On 04/02/2017 04:50 AM, Thomas Schweikle wrote:
> The
Hi,
using where= like what you are trying looks correct.
Did you make sure your job definition starts with
Job {
Name = "restore-mysql"
...
}
Did you issue the reload command in bconsole after adding
your job definition? Did it throw any error?
Which version of Bareos are you using?
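For comparison, a minimal restore job resource with a Where default might look
like this (a hedged sketch; the client, fileset, storage, and pool names are
placeholders, not taken from your configuration):

```
Job {
  Name = "restore-mysql"
  Type = Restore
  Client = client1-fd
  FileSet = "Full Set"
  Storage = File
  Pool = Default
  Messages = Standard
  Where = /tmp/bareos-restores
}
```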
Hello,
I've run some tests. The combination of sparse=yes and autoxflate is generally
broken
even with a small non-sparse testfile; I've added a comment at
http://bugs.bareos.org/view.php?id=694
So logically, it's also implicitly broken when adding Python Plugins.
Without sparse=yes, the
Hi,
that looks like a problem in job statistics collection.
If you have
Collect Job Statistics = yes
in your bareos-sd configuration, please set it to
Collect Job Statistics = no
Then restart bareos-sd and check if it still crashes.
Regards,
Stephan
On 01/09/2017 03:14 PM, Oliver
Hi,
theoretically that should be possible, as far as I know, VDDK negotiates the
transport
mode. However, I don't know if that works out of the box; we have never tested
this,
so I'd say it's not supported.
However, you could try to modify the wrapper script
On Wed, 26 Aug 2015 17:19:15 +0200
lst_ho...@kwsoft.de wrote:
Quoting David Pearce david_pea...@wycliffe.org:
We use data spooling and this can increase the total time, but it
definitely keeps the tape drive streaming (going at full speed)
reducing wear-and-tear.
However, I
Hi,
I've tried to reproduce this, but couldn't. It works for me with
packages from
http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/CentOS/epel-7/x86_64/
which is 3.7.3 at the moment.
I've also run tests with Red Hat Gluster Storage 3.1 which is also based on
gluster 3.7.x
Only
Hi,
did you check if you have set
option rpc-auth-allow-insecure on
in /etc/glusterfs/glusterd.vol on all your gluster nodes?
Or could you solve your problem otherwise meanwhile?
Regards
Stephan
On Wed, 29 Jul 2015 20:59:53 +
Michael Mol mike...@gmail.com wrote:
Help! I've run out of
On Tue, 4 Aug 2015 07:16:19 -0700 (PDT)
Gerhard Sulzberger g...@treetec.at wrote:
Hi are there known dependency issues in the bareos-devel package with CentOS7?
Or is it just an issue on my system?
You are right, the dependencies of the bareos-devel package for CentOS7/RHEL7
are not correct,
On Sat, 13 Jun 2015 14:43:57 +0200
Bruno Friedmann friedmann.br...@gmail.com wrote:
On Friday 12 June 2015 03.13:34 cocolocko wrote:
hmm, CentOS 6 has sysV init. (CentOS 7 has SystemD)
and no, /etc/init.d/bareos-dir does not have a dependency, or what did you
mean exactly? What personal