Hi,
Can you please provide the versions of vdsm, qemu, and libvirt?
On Sun, Jul 30, 2017 at 1:01 PM, Johan Bernhardsson wrote:
> Hello,
>
> We get this error message while moving or copying some of the disks on
> our main cluster running 4.1.2 on centos7
>
> This is shown in the
Hi,
Can you please attach full engine and vdsm logs?
On Thu, Jul 13, 2017 at 1:07 AM, Devin Acosta wrote:
> We are running a fresh install of oVirt 4.1.3, using iSCSI; the VM in
> question has multiple disks (4 to be exact). It snapshotted OK while on
> iSCSI however
[Adding ovirt-users]
On Sun, Jul 16, 2017 at 12:58 PM, Benny Zlotnik <bzlot...@redhat.com> wrote:
> We can see a lot of related errors in the engine log but we are unable
> to correlate to the vdsm log. Do you have more hosts? If yes, please
> attach their logs as well.
> And j
> GlusterFS Version:
> glusterfs-3.8.11-1.el7
> CEPH Version:
> librbd1-0.94.5-1.el7
>
> qemu-img version 2.6.0 (qemu-kvm-ev-2.6.0-28.el7_3.9.1), Copyright (c)
> 2004-2008 Fabrice Bellard
>
> This is what i have on the hosts.
>
> /Johan
>
> On Sun, 2017-07-
Hi Terry,
The disk in the snapshot appears to be in an illegal state. How long has it
been like this? Do you have logs from when it happened?
On Tue, Sep 5, 2017 at 8:52 PM, Terry hey wrote:
> Dear all,
> Thank you for your time to read this post first.
> In the same
Hi,
Look at [1]; however, there are caveats, so be sure to pay close attention to
the warning section.
[1] - https://github.com/oVirt/vdsm/blob/master/vdsm_hooks/localdisk/README
On Tue, Sep 5, 2017 at 4:52 PM, Benny Zlotnik <bzlot...@redhat.com> wrote:
> Hi,
>
> Look
Accidentally replied without cc-ing the list
On Sun, Sep 3, 2017 at 12:21 PM, Benny Zlotnik <bzlot...@redhat.com> wrote:
> Hi,
>
> Could you provide full engine and vdsm logs?
>
> On Sat, Sep 2, 2017 at 4:23 PM, wai chun hung <waichunte...@gmail.com>
> wrote:
>
nd attach my logs?
>
> 20.11.2017, 13:08, "Benny Zlotnik" <bzlot...@redhat.com>:
>
> Yes, you can remove it
>
> On Mon, Nov 20, 2017 at 8:10 AM, Алексей Максимов <
> aleksey.i.maksi...@yandex.ru> wrote:
>
> I found an empty directory in the Export domain
Hi Tibor,
Can you please explain this part: "After this I just wondered, I will make
a new VM with same disk and I will copy the images (really just rename)
from original to recreated."
What were the exact steps you took?
Thanks
On Thu, Nov 16, 2017 at 4:19 PM, Demeter Tibor
Can you please provide full vdsm logs (only the engine log is attached) and
the versions of the engine, vdsm, gluster?
On Tue, Nov 14, 2017 at 6:16 PM, Bryan Sockel wrote:
> Having an issue moving a hard disk from one VM data store to a newly
> created gluster data
c90-e574-4282-b1ee-779602e35f24/
> master/vms/f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6
>
> total 16
> drwxr-xr-x. 2 vdsm kvm 4096 Nov 9 02:32 .
> drwxr-xr-x. 106 vdsm kvm 12288 Nov 9 02:32 ..
>
> I can just remove this directory?
>
> 19.11.2017, 18:51, "Benny Zlotnik
+ ovirt-users
On Sun, Nov 19, 2017 at 5:40 PM, Benny Zlotnik <bzlot...@redhat.com> wrote:
> Hi,
>
> There are a couple of issues here; can you please open a bug so we can
> track this properly? https://bugzilla.redhat.com/
> and attach all relevant logs
>
> I went
Hi,
Please attach full engine and vdsm logs
On Sun, Nov 19, 2017 at 12:26 PM, Алексей Максимов <
aleksey.i.maksi...@yandex.ru> wrote:
>
> Hello, oVirt gurus!
>
> oVirt Engine Version: 4.1.6.2-1.el7.centos
>
> Some time ago the problems started with the oVirt administrative web
> console.
>
378408a9d] Failed
> invoking callback end method 'onFailed' for command
> 'a84519fe-6b23-4084-84a2-b7964cbcde26' with exception 'null', the
> callback is marked for end method retries
> --
>
> I don't have vdsm log. (I don't know why).
>
> Atenciosamente,
> Arthur Melo
> Linux
Please attach engine and vdsm logs
On Tue, Nov 21, 2017 at 2:11 PM, Arthur Melo wrote:
> Can someone help me with this error?
>
>
> Failed to delete snapshot '' for VM 'proxy03'.
>
>
>
> Atenciosamente,
> Arthur Melo
> Linux User #302250
>
>
>
Hi,
This looks like a bug. Can you please file a report with the steps and full
logs on https://bugzilla.redhat.com?
From looking at the logs, it looks like it's related to the user field being
empty
On Wed, Nov 15, 2017 at 1:40 PM, wrote:
> Hi,
>
> I'm trying to connect a
> attaching the logs in gdrive
>
> thanks in advance
>
> 2018-05-11 12:50 GMT-03:00 Benny Zlotnik <bzlot...@redhat.com>:
>
>> I see here a failed attempt:
>> 2018-05-09 16:00:20,129-03 ERROR [org.ovirt.engine.core.dal.dbb
>> roker.auditloghandling.AuditLogDirector]
-387929c7fb4d
>> Alignment:
>> Unknown
>> Disk Profile:
>> Wipe After Delete:
>> No
>>
>> that one
>>
>> 2018-05-11 11:12 GMT-03:00 Benny Zlotnik <bzlot...@redhat.com>:
>>
>>> I looked at the logs and I see some disks have mov
Can you provide the logs (engine and vdsm)?
Did you perform a live migration (the VM is running) or a cold one?
On Fri, May 11, 2018 at 2:49 PM, Juan Pablo
wrote:
> Hi! I'm struggling with an ongoing problem:
> after migrating a VM's disk from an iSCSI domain to an NFS and
I believe you've hit this bug: https://bugzilla.redhat.c
om/show_bug.cgi?id=1565040
You can try to release the lease manually using the sanlock client command
(there's an example in the comments on the bug);
once the lease is free the job will fail and the disk can be unlocked
On Thu, May 17, 2018
4-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease:0
for 2,9,5049
Then you can use this command to unlock; the pid in this case is 5049:
sanlock client release -r RESOURCE -p pid
On Thu, May 17, 2018 at 11:52 AM, Benny Zlotnik <bzlot...@redhat.c
By the way, please verify it's the same issue; you should see "the volume
lease is not FREE - the job is running" in the engine log
On Thu, May 17, 2018 at 1:21 PM, Benny Zlotnik <bzlot...@redhat.com> wrote:
> I see because I am on debug level, you need to enable it in ord
pool': '', 'capacity':
> '21474836480', 'uuid': u'c2cfbb02-9981-4fb7-baea-7257a824145c',
> 'truesize': '1073741824', 'type': 'SPARSE', 'lease': {'owners': [8],
> 'version': 1L}} (__init__:582)
>
> As you can see, there's no path field there.
>
> How should I proceed?
>
This is an iSCSI based storage FWIW (both source and
> destination of the movement).
>
> Thanks.
>
> El 2018-05-17 10:01, Benny Zlotnik escribió:
> > In the vdsm log you will find the volumeInfo log which looks like
> > this:
> >
> > 2018-05-17 11:55:03,25
Could be this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1555116
Adding Ala
On Thu, May 17, 2018 at 5:00 PM, Marcelo Leandro
wrote:
> Error in engine.log.
>
>
> 2018-05-17 10:58:56,766-03 INFO
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
>
tunately no entry there...
>
>
> El 2018-05-17 13:11, Benny Zlotnik escribió:
>
>> Which vdsm version are you using?
>>
>> You can try looking for the image uuid in /var/log/sanlock.log
>>
>> On Thu, May 17, 2018 at 2:40 PM, <nico...@devels.es> wro
Do you see this disk on the engine side? It should be aware of this disk since
> it created
> the disk during live storage migration.
>
> Also, we should not have leftover volumes after failed operations. Please
> file a bug
> for this and attach both engine.log and vdsm.log on the host doing the
>
18101223/583d3d
>
>
> El 2018-06-18 09:28, Benny Zlotnik escribió:
>
>> Can you provide full engine and vdsm logs?
>>
>> On Mon, Jun 18, 2018 at 11:20 AM, wrote:
>>
>> Hi,
>>>
>>> We're running oVirt 4.1.9 (we cannot upgrade at this time) an
t once shut down they
> start showing the problem below...
>
> Thank you
>
>
> El 2018-06-18 15:20, Benny Zlotnik escribió:
>
>> I'm having trouble following the errors, I think the SPM changed or
>> the vdsm log from the right host might be missing.
>>
25/5550ee
>
> El 2018-06-18 13:19, Benny Zlotnik escribió:
>
>> Can you send the SPM logs as well?
>>
>> On Mon, Jun 18, 2018 at 1:13 PM, wrote:
>>
>> Hi Benny,
>>>
>>> Please find the logs at [1].
>>>
>>> Thank you.
>>
can provide VPN access to our
> infrastructure so you can access and see whatever you need (all hosts, DB,
> etc...).
>
> Right now the machines that keep running work, but once shut down they
> start showing the problem below...
>
> Thank you
>
>
> El 2018-06-18 15:20, Be
Hi,
What do you mean by converting the LUN from thin to preallocated?
oVirt creates LVs on top of the LUNs you provide
On Wed, Jun 13, 2018 at 2:05 PM, Albl, Oliver
wrote:
> Hi all,
>
>
>
> I have to move some FC storage domains from thin to preallocated. I
> would set the storage domain to
is transparent to the oVirt host).
>
>
>
> Besides removing “discard after delete” from the storage domain flags, is
> there anything else I need to take care of on the oVirt side?
>
>
>
> All the best,
>
> Oliver
>
>
>
> *Von:* Benny Zlotnik
> *G
Can you provide full engine and vdsm logs?
On Mon, Jun 18, 2018 at 11:20 AM, wrote:
> Hi,
>
> We're running oVirt 4.1.9 (we cannot upgrade at this time) and we're
> having a major problem in our infrastructure. On Friday, snapshots were
> automatically created on more than 200 VMs and as this
Are you able to move the disk?
Can you open a bug?
On Sun, Jun 3, 2018 at 1:35 PM, Arsène Gschwind
wrote:
> I'm using version : 4.2.3.8-1.el7 the latest version.
>
>
> On Sun, 2018-06-03 at 12:59 +0300, Benny Zlotnik wrote:
>
> Which version are you using?
>
> On Sun,
Which version are you using?
On Sun, 3 Jun 2018, 12:57 Arsène Gschwind,
wrote:
> Hi,
>
> in the UI error log ui.log i do get a lot of those errors:
>
> 2018-06-03 10:57:17,486+02 ERROR
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-52) [] Permutation name:
Can you please provide the log with the error?
On Sat, Jan 6, 2018 at 5:09 PM, carl langlois
wrote:
> Hi again,
>
> I managed to go a little bit further... I was not able to set one host to
> maintenance because it had running VMs, so I forced it to mark it as
> reboot
Hi,
Can you please attach engine and vdsm logs?
On Tue, Jan 23, 2018 at 1:55 PM, Chris Boot wrote:
> Hi all,
>
> I'm running oVirt 4.2.0 and have been using oVirtBackup with it. So far
> it has been working fine, until this morning. One of my VMs seems to
> have had a
It was replaced by vdsm-client[1]
[1] - https://www.ovirt.org/develop/developer-guide/vdsm/vdsm-client/
On Tue, Feb 6, 2018 at 10:17 AM, Alex K wrote:
> Hi all,
>
> I have a stuck snapshot removal from a VM which is blocking the VM from
> starting.
> In ovirt 4.1 I was able
Under the 3 dots as can be seen in the attached screenshot
On Thu, Feb 15, 2018 at 7:07 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:
>
>
> > On 15 Feb 2018, at 14:17, Andrei V wrote:
> >
> > Hi !
> >
> >
> > I can’t locate “Sparsify” disk image command
Regarding the first question: there is a bug open for this issue [1]
[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1513987
On Fri, Dec 22, 2017 at 1:42 PM, Nathanaël Blanchet
wrote:
> Hi all,
>
> On 4.2, it seems that it is not possible anymore to move a disk to an
>
You could do something like this (IIUC):
dead_snap1_params = types.Snapshot(
description=SNAPSHOT_DESC_1,
persist_memorystate=False,
disk_attachments=[
types.DiskAttachment(
disk=types.Disk(
id=disk.id
)
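For reference, here is a fuller sketch of the same call (not from the original
message). It assumes the ovirt-engine-sdk4 Python bindings; the engine URL,
credentials, VM name and snapshot description below are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]   # placeholder VM name
vm_service = vms_service.vm_service(vm.id)

# Pick the disks to include in the snapshot (here: all attached disks)
attachments = vm_service.disk_attachments_service().list()

vm_service.snapshots_service().add(
    types.Snapshot(
        description='dead_snap1',              # placeholder description
        persist_memorystate=False,
        disk_attachments=[
            types.DiskAttachment(disk=types.Disk(id=att.disk.id))
            for att in attachments
        ],
    ),
)

connection.close()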
, 2018 at 4:06 PM Gianluca Cecchi
wrote:
> On Thu, Jun 21, 2018 at 2:00 PM, Benny Zlotnik
> wrote:
>
>> You could something like this (IIUC):
>> dead_snap1_params = types.Snapshot(
>> description=SNAPSHOT_DESC_1,
>> persist_memorystate=F
Can you attach the vdsm log?
On Wed, Aug 15, 2018 at 5:16 PM Inquirer Guy wrote:
> Adding to the below issue, my NODE01 can see the NFS share i created from
> the ENGINE01 which I don't know how it got through because when I add a
> storage domain from the ovirt engine I still get the error
>
>
Can you attach the logs from the original failure that caused the active
snapshot to disappear?
And also add your INSERT command
On Fri, Jul 20, 2018 at 12:08 AM wrote:
> Benny,
>
> Thanks for the response!
>
> I don't think I found the right snapshot ID in the logs, but I was able to
> track
Can you locate commands with id: 8639a3dc-0064-44b8-84b7-5f733c3fd9b3,
94607c69-77ce-4005-8ed9-a8b7bd40c496 in the command_entities table?
On Mon, Jul 23, 2018 at 4:37 PM Marcelo Leandro
wrote:
> Good morning,
>
> can anyone help me ?
>
> Marcelo Leandro
>
> 2018-06-27 10:53 GMT-03:00 Marcelo
I can't write an elaborate response since I am away from my laptop, but a
workaround would be to simply insert the snapshot back into the snapshots
table
You need to locate the snapshot's id in the logs where the failure occurred
and use the VM's id
insert into snapshots values ('', '', 'ACTIVE',
'OK',
if it's reliable
[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1460701
[2] -
https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-master/test_utils/__init__.py#L249
On Wed, Jul 18, 2018 at 1:39 PM wrote:
> Hi Benny,
>
> El 2018-07-12 08:50, Benny Zlotnik escribió:
> &
Perhaps you can query the status of the job using the correlation id (taking
the examples from ovirt-system-tests):
dead_snap1_params = types.Snapshot(
description=SNAPSHOT_DESC_1,
persist_memorystate=False,
disk_attachments=[
types.DiskAttachment(
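As a rough illustration (not from the original message), and assuming the
ovirt-engine-sdk4 Python bindings, the snapshot request can be sent with a
correlation id and the matching engine jobs polled until they finish; the
connection details and VM id below are placeholders:

import time
import uuid

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

correlation_id = uuid.uuid4().hex

vm_service = connection.system_service().vms_service().vm_service('VM-ID')  # placeholder id
vm_service.snapshots_service().add(
    types.Snapshot(description='dead_snap1', persist_memorystate=False),
    query={'correlation_id': correlation_id},
)

# Poll the engine jobs tagged with the correlation id until none is still running
jobs_service = connection.system_service().jobs_service()
while True:
    jobs = jobs_service.list(search='correlation_id=%s' % correlation_id)
    if jobs and all(job.status != types.JobStatus.STARTED for job in jobs):
        break
    time.sleep(5)

connection.close()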
Hi,
By default there are two OVF_STORE disks per domain. It can be changed with the
StorageDomainOvfStoreCount config value.
On Wed, Jan 24, 2018 at 1:58 PM, Stefano Danzi wrote:
> Hello,
>
> I'm checking Storage -> Disks in my oVirt test site. I can find:
>
> - 4 disks for my 4
Is the storage domain marked as backup?
If it is, you cannot use its disks in an active VM. You can remove the flag
and try again
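As an illustrative sketch (not from the original message), assuming the
ovirt-engine-sdk4 Python bindings and that the StorageDomain type exposes the
backup flag, the flag could be checked and cleared like this; the connection
details and storage domain id are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL, credentials and storage domain id
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

sds_service = connection.system_service().storage_domains_service()
sd_service = sds_service.storage_domain_service('SD-ID')

print(sd_service.get().backup)                        # True if the domain is flagged as backup
sd_service.update(types.StorageDomain(backup=False))  # clear the backup flag

connection.close()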
On Mon, Apr 9, 2018 at 10:52 PM, Scott Walker <crim...@unspeakable.org>
wrote:
> All relevant log files.
>
> On 9 April 2018 at 15:21, Benny
Can you provide the full engine and vdsm logs?
On Mon, 9 Apr 2018, 22:08 Scott Walker, wrote:
> Log file error is:
>
> 2018-04-09 15:05:09,576-04 WARN [org.ovirt.engine.core.bll.RunVmCommand]
> (default task-28) [5f605594-423e-43f6-9e42-e47453518701] Validation of
>
You can do that using something like:
snapshot_service = snapshots_service.snapshot_service(snapshot.id)
snapshot = snapshot_service.get()
if snapshot.snapshot_status == types.SnapshotStatus.OK:
...
But counting on the snapshot status is race prone, so in 4.2 a
Can you attach engine and vdsm logs?
Also, which version are you using?
On Wed, 18 Apr 2018, 19:23 , wrote:
> Hello All,
>
> after an update and a reboot, 3 vm's are indicated as diskless.
> When I try to add disks I indeed see 3 available disks, but I also see
Looks like you hit this: https://bugzilla.redhat.com/show_bug.cgi?id=1569420
On Thu, Apr 19, 2018 at 3:25 PM, Roger Meier
wrote:
> Hi all,
>
> I wanted to add a new host to our current oVirt 4.2.2 setup and the
> install of the host fails with the following error
It is in the disk_image_dynamic table
On Thu, Apr 19, 2018 at 3:36 PM, Hari Prasanth Loganathan <
hariprasant...@msystechnologies.com> wrote:
> Hi Team,
>
> I am trying to get the disk level statistics using oVirt with the
> following API,
>
> /ovirt-engine/api/disks/{unique_disk_id}/statistics/
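As an illustration (not from the original thread), the statistics exposed by
that REST path can also be read through the ovirt-engine-sdk4 Python bindings;
the connection details and disk id below are placeholders:

import ovirtsdk4 as sdk

# Placeholder engine URL, credentials and disk id
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

disk_service = connection.system_service().disks_service().disk_service('DISK-ID')
for stat in disk_service.statistics_service().list():
    values = [value.datum for value in stat.values]
    print(stat.name, values)

connection.close()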
Looks like a bug. Can you please file a report:
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
On Mon, Apr 23, 2018 at 9:38 PM, ~Stack~ wrote:
> Greetings,
>
> After my rebuild, I have imported my VM's. Everything went smooth and
> all of them came back,
Hi Bryan,
You can go into the template -> storage tab -> select the disk and remove
it there
On Fri, Mar 30, 2018 at 4:50 PM, Bryan Sockel
wrote:
> Hi,
>
>
> We are in the process of re-doing one of our storage domains. As part of
> the process I needed to relocate
Can you attach engine and vdsm logs?
On Sun, Oct 28, 2018 at 4:29 PM wrote:
> Hi Oliver,
>
> I did the "Confirmed host has been updated" step..
>
> I retried to ACTIVATE my Master Storage domain this morning as you
> suggested, but it didn't change anything. Here is what I get in the storage
>
You can use /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh to unlock
it first, before removing
On Thu, Oct 25, 2018 at 1:40 PM Hari Prasanth Loganathan <
hariprasant...@msystechnologies.com> wrote:
> Hi Team,
>
> Any update on this ?
>
> On Tue, Oct 23, 2018 at 9:22 PM Hari Prasanth
You can run `virsh -r dumpxml ` on the relevant host
On Thu, Dec 20, 2018, 16:17 Jacob Green wrote:
> How does one get an XML dump of a VM from ovirt? I have seen ovirt
> do it in the engine.log, but not sure how to force it to generate one
> when I need it.
>
>
> Thank you.
>
> --
> Jacob Green
>
It is not part of the first alpha, still in the works[1]
[1] - https://gerrit.ovirt.org/#/q/topic:cinderlib-integration
On Thu, Nov 29, 2018 at 1:14 PM Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:
> What is the status of cinderlib integration in oVirt 4.3?
>
> thank you
>
Can you please attach full engine and vdsm logs?
On Sun, Jan 27, 2019 at 2:32 PM wrote:
> Hi Guys,
>
> I am trying to install a new cluster... I currently have a 9 node and two
> 6 node oVirt Clusters... (these were installed on 4.1 and upgraded to 4.2)
>
> So i want to build a new cluster,
and dealing with the rbd feature issues I could
> proudly start my first VM with a cinderlib provisioned disk :-)
>
> Thanks for help!
> I'll keep posting my experiences concerning cinderlib to this list.
>
> Matthias
>
> Am 01.04.19 um 16:24 schrieb Benny Zlotnik:
> > D
it should be something like this:
$ cat update.json
{
    "job_id": "",
    "vol_info": {
        "sd_id": "",
        "img_id": "",
        "vol_id": "",
        "generation": ""
    },
    "legality": "LEGAL"
}
$ vdsm-client SDM update_volume -f update.json
On
any UUID for the job?
>
> Thanks.
>
> El 2019-04-03 09:52, Benny Zlotnik escribió:
> > it should be something like this:
> > $ cat update.json
> > {
> > "job_id":"",
> > "vol_info": {
> >
> >>
> >> Thanks for the help.
> >>
> >> Could you please tell me what job_uuid and vol_gen should be replaced
> >> by? Should I just put any UUID for the job?
> >>
> >> Thanks.
> >>
> >> El 2019-04-03 09:52
Looks like it was fixed[1]
[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1690268
On Thu, Apr 4, 2019 at 1:47 PM Callum Smith wrote:
>
> 2019-04-04 10:43:35,383Z ERROR
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default
> task-15) [] Permutation name:
Hi,
Thanks for trying this out!
We added a separate log file for cinderlib in 4.3.2; it should be available
under /var/log/ovirt-engine/cinderlib/cinderlib.log.
The logs are not perfect yet, and more improvements are coming, but they might
provide some insight about the issue
>Although I don't think
> OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:
>
> 2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error
> connecting to ceph cluster.
> Traceback (most recent call last):
>File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
> line 337, in _do_conn
>
I added an example for ceph[1]
[1] -
https://github.com/oVirt/ovirt-site/blob/468c79a05358e20289e7403d9dd24732ab453a13/source/develop/release-management/features/storage/cinderlib-integration.html.md#create-storage-domain
On Mon, Apr 1, 2019 at 5:24 PM Benny Zlotnik wrote:
>
> Did yo
Did you pass the rbd_user when creating the storage domain?
On Mon, Apr 1, 2019 at 5:08 PM Matthias Leopold
wrote:
>
>
> Am 01.04.19 um 13:17 schrieb Benny Zlotnik:
> >> OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:
> >>
> >> 2019-04-01 11:14:54,925
Please open a bug for this, with vdsm and supervdsm logs
On Mon, Apr 8, 2019 at 2:13 PM Matthias Leopold
wrote:
>
> Hi,
>
> after I successfully started my first VM with a cinderlib attached disk
> in oVirt 4.3.2 I now want to test basic operations. I immediately
> learned that migrating this VM
> Segments 9
> Allocation inherit
> Read ahead sectors auto
>
>
> On Tue, 26 Feb 2019 15:57:47 + *Benny Zlotnik
> >* wrote
>
> I haven't found anything other than the leaks issue; you can try to run
> $ qemu-img check -r leaks
Can you provide full vdsm & engine logs?
On Tue, Feb 26, 2019 at 5:10 PM Alan G wrote:
> Hi,
>
>
> I performed the following: -
>
> 1. Shutdown VM.
> 2. Take a snapshot
> 3. Create a clone from snapshot.
> 4. Start the clone. Clone starts fine.
> 5. Attempt to delete snapshot from original VM,
ne log.
>
>
> On Tue, 26 Feb 2019 15:11:39 + *Benny Zlotnik
> >* wrote
>
> Can you provide full vdsm & engine logs?
>
> On Tue, Feb 26, 2019 at 5:10 PM Alan G wrote:
>
> ___
> Users mailing list -- users@ovirt.
I don't think it is documented. IMHO contributing to the admin guide is
best.
Documenting what was tried and works well would be a good start; this can be
improved over time if needed
On Wed, Feb 20, 2019 at 2:09 PM Greg Sheremeta wrote:
>
> On Tue, Feb 19, 2019 at 6:03 PM Matt Simonsen wrote:
re(rw=rw, justme=True)
> File "/usr/share/vdsm/storage/volume.py", line 562, in prepare
> raise se.prepareIllegalVolumeError(self.volUUID)
> prepareIllegalVolumeError: Cannot prepare illegal volume:
> ('5f5b436d-6c48-4b9f-a68c-f67d666741ab',)
>
>
&
Can you please attach the full engine and vdsm logs?
On Thu, Feb 28, 2019, 23:33 wrote:
> Unable to delete VM snapshots. Error messages and log info below.
>
> Running oVirt VM 4.2.8.2-1.el7,
>
> I haven't been able to find good documentation on how to fix. Can
> someone point me in the right
Also, which driver are you planning on trying?
And there are some known issues we fixed in the upcoming 4.3.2,
like setting correct permissions on /usr/share/ovirt-engine/cinderlib;
it should be owned by the ovirt user.
We'll be happy to receive bug reports
On Wed, Mar 6, 2019, 13:44 Benny
Unfortunately we don't have proper packaging for cinderlib at the moment;
it needs to be installed via pip:
pip install cinderlib
Also, you need to enable the config value ManagedBlockDomainSupported
On Wed, Mar 6, 2019, 13:24 Gianluca Cecchi
wrote:
> Hello,
> I have updated an environment from
ds to be cleaned/reset?
>
>
> On Tue, 26 Feb 2019 16:25:22 + *Benny Zlotnik
> >* wrote
>
> it's because the VM is down, you can manually activate using
> $ lvchange -a y vgname/lvname
>
> remember to deactivate after
>
>
> On Tue, Feb 26, 2019 at 6:15 PM
Sorry, I meant which storage backend are you planning to test :)
On Thu, Mar 7, 2019 at 9:48 AM Gianluca Cecchi
wrote:
> On Wed, Mar 6, 2019 at 12:49 PM Benny Zlotnik wrote:
>
>> Also, which driver are you planning on trying?
>>
>> And there are some known issues we fix
> On Mon, 18 Mar 2019 12:36:13 + *Benny Zlotnik
> >* wrote
>
> is this live or cold migration?
> which version?
>
> currently the best way (and probably the only one we have) is to kill the
> qemu-img convert process (if you are doing cold migration), un
is this live or cold migration?
which version?
currently the best way (and probably the only one we have) is to kill the
qemu-img convert process (if you are doing cold migration); unless there is
a bug in your version, it should roll back properly
On Mon, Mar 18, 2019 at 2:10 PM Alan G wrote:
You don't need to remove it; oVirt should exclude "unsnappable" disks
automatically, and direct LUN disks are "unsnappable". Regardless of this,
you can choose yourself which disks to include in a snapshot
On Tue, Mar 12, 2019 at 5:00 PM wrote:
> We know that now oVirt doesn't make snapshot for
You can also have look at:
https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
There is an nfs-check.py utility to help troubleshoot
On Fri, Feb 15, 2019 at 8:08 AM Jinesh Ks wrote:
> Host Node3 cannot access the Storage Domain(s) attached to the
> Data Center Default.
4eb375c8e6e7 (api:52)
> 2019-02-11 10:06:13,632+ INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
> call Host.getDeviceList succeeded in 1.12 seconds (__init__:573)
>
> The LUNs are still not retrieved from the target. I hit the "Login" button,
> it goes greyed out, but still no LUNs ..
IIUC it shouldn't get to 475 since it would short-circuit if it isn't a disk
Jiří, can you attach the full engine logs so we can retrace everything that
happened?
On Fri, Feb 15, 2019 at 10:06 PM Michal Skrivanek <
michal.skriva...@redhat.com> wrote:
> Isn’t the code wrong? Seems the function
Can you attach engine and vdsm logs?
On Thu, Jan 31, 2019 at 4:04 PM Leo David wrote:
> Hello everyone,
> Trying to setup an iscsi target as a storage domain, and it seems not to
> be possible.
> Discovered the hosts, the targets are displayed.
> Selected one target, clicked the "Login"
This looks like underlying error:
2019-01-27 11:09:59,468+0400 ERROR (jsonrpc/4) [storage.Dispatcher] FINISH
createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock',
> It feels like it is not getting the LUNs from the target, although from a
> different client (a Windows machine) I can successfully connect and map the
> LUN.
>
>
>
> On Thu, Jan 31, 2019 at 4:10 PM Benny Zlotnik wrote:
>
>> Can you attach engine and vdsm logs?
Can you run:
$ gdb -p $(pidof qemu-img convert) -batch -ex "t a a bt"
On Wed, Apr 10, 2019 at 11:26 AM Callum Smith wrote:
>
> Dear All,
>
> Further to this, I can't migrate a disk to different storage using the GUI.
> Both disks are configured identically and on the same physical NFS
2019-04-12 10:39:25,643+0200 ERROR (jsonrpc/0) [virt.vm]
(vmId='71f27df0-f54f-4a2e-a51c-e61aa26b370d') Unable to start
replication for vda to {'domainID':
'244dfdfb-2662-4103-9d39-2b13153f2047', 'volumeInfo': {'path':
This is just an info message, if you don't use managed block
storage[1] you can ignore it
[1] -
https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
On Tue, Apr 16, 2019 at 7:09 PM Stefano Danzi wrote:
>
> Hello,
>
> I've just upgrade one node host to v.
yes, I reported it and it's being worked on[1]
[1] https://ovirt-jira.atlassian.net/browse/OVIRT-2728
On Wed, May 15, 2019 at 4:05 PM Markus Stockhausen
wrote:
>
> Hi,
>
> does anyone currently get old mails of 2016 from the mailing list?
> We are spammed with something like this from
If there is a backend driver available it should work. We did not test
this though, so it would be great to get bug reports if you had any
trouble
Upon VM migration the disk should be automatically connected to the
target host (and disconnected from the origin).
On Thu, May 30, 2019 at 10:35 AM
Can you attach vdsm and engine logs?
Does this happen for new VMs as well?
On Thu, Jun 13, 2019 at 12:15 PM Alex McWhirter wrote:
>
> after upgrading from 4.2 to 4.3, after a VM live migrates its disk
> images become owned by root:root. Live migration succeeds and the VM
> stays up, but
Also, what is the storage domain type? Block or File?
On Thu, Jun 13, 2019 at 2:46 PM Benny Zlotnik wrote:
>
> Can you attach vdsm and engine logs?
> Does this happen for new VMs as well?
>
> On Thu, Jun 13, 2019 at 12:15 PM Alex McWhirter wrote:
> >
> > after upgra
Can you attach engine and vdsm logs?
On Sun, Jun 23, 2019 at 11:29 AM m black wrote:
> Hi.
>
> I have a problem with importing some VMs after importing storage domain in
> new datacenter.
>
> I have 5 servers with oVirt version 4.1.7, hosted-engine setup and
> datacenter with iscsi, fc and nfs