Re: [Users] importing VM from ESXI

2013-09-26 Thread Richard W.M. Jones

On Wed, Sep 25, 2013 at 06:10:00PM -0300, emi...@gmail.com wrote:
 could not open disk image
 /tmp/KxIoJI50Pc/2872ac3e-7340-4dfa-9801-0a1bd052b3a3/v2v._ApSlRZG/387a5113-bbc2-45a2-9c55-5dc3dade31a9/01c899de-131e-4407-a16c-8c5484ccb8bd:
 Permission denied

The error comes from qemu and indicates qemu cannot open this
temporary disk image.  I've no idea why it cannot open it, but
possibilities include: SELinux, file permissions (eg on /tmp), setting
of TMPDIR, relative path to a backing file, and more ...

I'd suggest that you 'strace -f' the whole set of processes to see
what precise system call fails.
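A minimal sketch of what that could look like (the virt-v2v invocation and log path here are only placeholders for however the import is actually launched):

```shell
# Run the failing import under strace, following forked child
# processes (-f) and writing the full trace to a log file:
strace -f -o /tmp/v2v.strace virt-v2v ...

# Then search the trace for the syscall that returned EACCES
# ("Permission denied") to see exactly which path and operation failed:
grep -n 'EACCES' /tmp/v2v.strace
```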

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming blog: http://rwmj.wordpress.com
Fedora now supports 80 OCaml packages (the OPEN alternative to F#)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-26 Thread David Riedl



On 25.09.2013 09:11, Vijay Bellur wrote:

On 09/25/2013 11:51 AM, Gianluca Cecchi wrote:

On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur  wrote:




Have the following configuration changes been done?

1) gluster volume set volname server.allow-insecure on

2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this
line:
 option rpc-auth-allow-insecure on

Post 2), restarting glusterd would be necessary.
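Putting both steps together, roughly (replace volname with the actual volume name; the exact service command depends on the distribution):

```shell
# 1) Allow unprivileged client ports on the volume (run once, on any node):
gluster volume set volname server.allow-insecure on

# 2) On *every* gluster node, edit /etc/glusterfs/glusterd.vol and add
#    the following line inside the "volume management" block:
#
#        option rpc-auth-allow-insecure on

# 3) Then restart glusterd on each node so step 2 takes effect:
service glusterd restart
```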

Regards,
Vijay



No, because I didn't find this kind of info anywhere... ;-)


The feature page wiki does provide this information but it gets missed 
in the details. Should we highlight it more?




Done on both hosts (step 1 only once) and I see that the GUI
detects the change in the volume settings.
Now the VM can start (I see the qemu process on ovnode02) but it seems
to stay on the hourglass state icon.
After 5 minutes it still shows as in the executing phase in tasks.



Let us know how this goes.

-Vijay


I found a bug report on your issue.
https://bugzilla.redhat.com/show_bug.cgi?id=988299

Scroll to the end. ( https://bugzilla.redhat.com/show_bug.cgi?id=988299#c46 )
There is a modified glusterVolume.py. I have the same issue as well, I'm
trying to fix it right now.


Regards David



Re: [Users] vdsm live migration errors in latest master

2013-09-26 Thread Federico Simoncelli
- Original Message -
 From: Dan Kenigsberg dan...@redhat.com
 To: Federico Simoncelli fsimo...@redhat.com
 Cc: Dead Horse deadhorseconsult...@gmail.com, users users@ovirt.org, 
 vdsm-de...@fedorahosted.org,
 aba...@redhat.com
 Sent: Thursday, September 26, 2013 1:38:15 AM
 Subject: Re: [Users] vdsm live migration errors in latest master
 
 On Tue, Sep 24, 2013 at 12:04:14PM -0400, Federico Simoncelli wrote:
  - Original Message -
   From: Dan Kenigsberg dan...@redhat.com
   To: Dead Horse deadhorseconsult...@gmail.com
   Cc: users@ovirt.org users@ovirt.org, vdsm-de...@fedorahosted.org,
   fsimo...@redhat.com, aba...@redhat.com
   Sent: Tuesday, September 24, 2013 11:44:48 AM
   Subject: Re: [Users] vdsm live migration errors in latest master
   
   On Mon, Sep 23, 2013 at 04:05:34PM -0500, Dead Horse wrote:
Seeing failed live migrations and these errors in the vdsm logs with
latest
VDSM/Engine master.
Hosts are EL6.4
   
   Thanks for posting this report.
   
   The log is from the source of migration, right?
   Could you trace the history of the hosts of this VM? Could it be that it
   was started on an older version of vdsm (say ovirt-3.3.0) and then (due
   to migration or vdsm upgrade) got into a host with a much newer vdsm?
   
   Would you share the vmCreate (or vmMigrationCreate) line for this Vm in
   your log? It smells like an unintended regression of
   http://gerrit.ovirt.org/17714
   vm: extend shared property to support locking
   
   solving it may not be trivial, as we should not call
   _normalizeDriveSharedAttribute() automatically on migration destination,
   as it may well still be a part of a 3.3 clusterLevel.
   
   Also, migration from vdsm with extended shared property, to an ovirt 3.3
   vdsm is going to explode (in a different way), since the destination
   does not expect the extended values.
   
   Federico, do we have a choice but to revert that patch, and use
   something like shared3 property instead?
  
  I filed a bug at:
  
  https://bugzilla.redhat.com/show_bug.cgi?id=1011608
  
  A possible fix could be:
  
  http://gerrit.ovirt.org/#/c/19509
 
 Beyond this, we must make sure that on Engine side, the extended shared
 values would be used only for clusterLevel 3.4 and above.
 
 Are the extended shared values already used by Engine?

Yes. That's the idea. Actually, to be fair, the second case you mentioned
(migrating from the extended shared property to an old vdsm) wouldn't have
been possible, I suppose (the issue here is that Dead Horse has one or more
hosts running on master instead of 3.3). The extended shared property would
have appeared only in 3.4, and to allow the migration you would have had to
upgrade all the nodes.

But anyway, since we were also talking about a new 3.3.1 branch, I just went
ahead and covered all cases.

-- 
Federico


Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-26 Thread Gianluca Cecchi
On Thu, Sep 26, 2013 at 11:07 AM, David Riedl  wrote:
 I found a bug report on your issue.
 https://bugzilla.redhat.com/show_bug.cgi?id=988299

 Scroll to the end. ( https://bugzilla.redhat.com/show_bug.cgi?id=988299#c46 )
 There is a modified glusterVolume.py. I have the same issue as well, I'm
 trying to fix it right now.

 Regards David




Unfortunately I have the ESX infra down for a scheduled maintenance right now.
Later today, when it is up again, I'm going to verify and test as suggested
to me in the previous message.
But it doesn't look like the same issue to me...
My volume is gv01, without any _ or / in it.
Also the message is a bit different.

On vdsm.log of host:
 File /usr/lib64/python2.7/site-packages/libvirt.py, line 2805, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: Unable to read from monitor: Connection reset by peer

In qemu.log of the VM
Gluster connection failed for server=ovnode01 port=0 volume=gv01
image=20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
transport=tcp

Gianluca


Re: [Users] Disk state - Illegal?

2013-09-26 Thread SULLIVAN, Chris (WGK)
Thanks for posting that Andrew. I had the same problem removing disks from a 
Gluster volume (disk state changed to Illegal and the actual disk file was not 
removed from the volume) and your method worked fine for me.

Cheers,

Chris

Message: 3
Date: Thu, 26 Sep 2013 12:53:04 +1000
From: Andrew Lau and...@andrewklau.com
To: Dan Ferris dfer...@prometheusresearch.com
Cc: Users@ovirt.org Users@ovirt.org
Subject: Re: [Users] Disk state - Illegal?
Message-ID:
cad7df9dropupgxqjm-9atye7_qllhdnyxllvvb_ftwqdkh0...@mail.gmail.com
Content-Type: text/plain; charset=iso-8859-1

I noticed that too, I wasn't sure if it was a bug or just how I had set up my
NFS share.

There were three steps I did to remove the disk images; I'm sure there's an
easier solution:

I found the easiest way (graphically) was to go to
https://ovirtengine/api/disks and do a search for the illegal disk. Append the
extra ID (e.g. disk href=/api/disks/lk342-dfsdf...) to your URL; this will
give you your image ID.

Go to your storage share:
cd /data/storage-id/master/vms/storage-id
grep -ir 'vmname' *
You'll find the image-id reference here too.

Then the image you will want to remove is in
/data/storage-id/images/image-id. I assume you could safely remove this
whole folder if you wanted to delete the disk.

To remove the illegal state I did it through the API: again with the URL
above (https://ovirtengine/disks/disk-id), send a DELETE using HTTP/curl.

Again, this was a poor man's solution but it worked for me.
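As a rough sketch of those API calls with curl (the hostname, credentials and disk ID are placeholders; -k skips certificate verification):

```shell
# List the disks and locate the illegal one; note the id in its href:
curl -s -k -u 'admin@internal:password' https://ovirtengine/api/disks

# Send a DELETE for that disk to clear the illegal entry:
curl -k -u 'admin@internal:password' -X DELETE \
     https://ovirtengine/api/disks/disk-id
```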

On Thu, Sep 26, 2013 at 4:04 AM, Dan Ferris
dfer...@prometheusresearch.comwrote:

 Hi,

 I have another hopefully simple question.

 One VM that I am trying to remove says that its disk state is illegal
 and when I try to remove the disk it says that it failed to initiate
 removing the disk.

 Is there an easy way to get rid of these illegal disk images?

 Dan





Re: [Users] vdsm live migration errors in latest master

2013-09-26 Thread Dan Kenigsberg
On Thu, Sep 26, 2013 at 05:35:46AM -0400, Federico Simoncelli wrote:
 - Original Message -
  From: Dan Kenigsberg dan...@redhat.com
  To: Federico Simoncelli fsimo...@redhat.com
  Cc: Dead Horse deadhorseconsult...@gmail.com, users 
  users@ovirt.org, vdsm-de...@fedorahosted.org,
  aba...@redhat.com
  Sent: Thursday, September 26, 2013 1:38:15 AM
  Subject: Re: [Users] vdsm live migration errors in latest master
  
  On Tue, Sep 24, 2013 at 12:04:14PM -0400, Federico Simoncelli wrote:
   - Original Message -
From: Dan Kenigsberg dan...@redhat.com
To: Dead Horse deadhorseconsult...@gmail.com
Cc: users@ovirt.org users@ovirt.org, vdsm-de...@fedorahosted.org,
fsimo...@redhat.com, aba...@redhat.com
Sent: Tuesday, September 24, 2013 11:44:48 AM
Subject: Re: [Users] vdsm live migration errors in latest master

On Mon, Sep 23, 2013 at 04:05:34PM -0500, Dead Horse wrote:
 Seeing failed live migrations and these errors in the vdsm logs with
 latest
 VDSM/Engine master.
 Hosts are EL6.4

Thanks for posting this report.

The log is from the source of migration, right?
Could you trace the history of the hosts of this VM? Could it be that it
was started on an older version of vdsm (say ovirt-3.3.0) and then (due
to migration or vdsm upgrade) got into a host with a much newer vdsm?

Would you share the vmCreate (or vmMigrationCreate) line for this Vm in
 your log? It smells like an unintended regression of
http://gerrit.ovirt.org/17714
vm: extend shared property to support locking

solving it may not be trivial, as we should not call
_normalizeDriveSharedAttribute() automatically on migration destination,
 as it may well still be a part of a 3.3 clusterLevel.

Also, migration from vdsm with extended shared property, to an ovirt 3.3
vdsm is going to explode (in a different way), since the destination
does not expect the extended values.

Federico, do we have a choice but to revert that patch, and use
something like shared3 property instead?
   
   I filed a bug at:
   
   https://bugzilla.redhat.com/show_bug.cgi?id=1011608
   
   A possible fix could be:
   
   http://gerrit.ovirt.org/#/c/19509
  
  Beyond this, we must make sure that on Engine side, the extended shared
  values would be used only for clusterLevel 3.4 and above.
  
  Are the extended shared values already used by Engine?
 
 Yes. That's the idea. Actually, to be fair, the second case you mentioned
 (migrating from the extended shared property to an old vdsm) wouldn't have
 been possible, I suppose (the issue here is that Dead Horse has one or more
 hosts running on master instead of 3.3). The extended shared property would
 have appeared only in 3.4, and to allow the migration you would have had to
 upgrade all the nodes.
 
 But anyway, since we were also talking about a new 3.3.1 branch, I just went
 ahead and covered all cases.

I do not see how the 3.3.1 branch is relevant to the discussion, as its
Vdsm is NOT going to support clusterLevel 3.4.

Pardon my slowness, but would you confirm that this feature is to be
used only on clusterLevel 3.4 and above? If so, I'm +2ing your patch.

Dan.


Re: [Users] importing VM from ESXI

2013-09-26 Thread emi...@gmail.com
Thanks Richard!

SELinux is set to Permissive, and the /tmp directory has 777 permissions,
owned root:root. What do you mean by the setting of TMPDIR? I'm going to
start the process again using strace as you tell me, but I'm not sure I'll
be able to deduce where the issue is from the output of that command.

If you know of anything else I could check, that would be great!

Regards!





2013/9/26 Richard W.M. Jones rjo...@redhat.com


 On Wed, Sep 25, 2013 at 06:10:00PM -0300, emi...@gmail.com wrote:
  could not open disk image
 
 /tmp/KxIoJI50Pc/2872ac3e-7340-4dfa-9801-0a1bd052b3a3/v2v._ApSlRZG/387a5113-bbc2-45a2-9c55-5dc3dade31a9/01c899de-131e-4407-a16c-8c5484ccb8bd:
  Permission denied

 The error comes from qemu and indicates qemu cannot open this
 temporary disk image.  I've no idea why it cannot open it, but
 possibilities include: SELinux, file permissions (eg on /tmp), setting
 of TMPDIR, relative path to a backing file, and more ...

 I'd suggest that you 'strace -f' the whole set of processes to see
 what precise system call fails.

 Rich.

 --
 Richard Jones, Virtualization Group, Red Hat
 http://people.redhat.com/~rjones
 Read my programming blog: http://rwmj.wordpress.com
 Fedora now supports 80 OCaml packages (the OPEN alternative to F#)




-- 
*Emiliano Tortorella*
+598 98941176
emi...@gmail.com


Re: [Users] importing VM from ESXI

2013-09-26 Thread Richard W.M. Jones
On Thu, Sep 26, 2013 at 09:35:13AM -0300, emi...@gmail.com wrote:
 Thanks Richard!
 
 SELinux is set to Permissive, and the /tmp directory has 777 permissions,
 owned root:root. What do you mean by the setting of TMPDIR? I'm going to
 start the process again using strace as you tell me, but I'm not sure I'll
 be able to deduce where the issue is from the output of that command.

Pastebin the whole strace somewhere and post a link to this mailing list.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://libguestfs.org


Re: [Users] vdsm live migration errors in latest master

2013-09-26 Thread Federico Simoncelli
- Original Message -
 From: Dan Kenigsberg dan...@redhat.com
 To: Federico Simoncelli fsimo...@redhat.com
 Cc: Dead Horse deadhorseconsult...@gmail.com, users users@ovirt.org, 
 vdsm-de...@fedorahosted.org,
 aba...@redhat.com
 Sent: Thursday, September 26, 2013 2:09:15 PM
 Subject: Re: [Users] vdsm live migration errors in latest master
 
 On Thu, Sep 26, 2013 at 05:35:46AM -0400, Federico Simoncelli wrote:
  - Original Message -
   From: Dan Kenigsberg dan...@redhat.com
   To: Federico Simoncelli fsimo...@redhat.com
   Cc: Dead Horse deadhorseconsult...@gmail.com, users
   users@ovirt.org, vdsm-de...@fedorahosted.org,
   aba...@redhat.com
   Sent: Thursday, September 26, 2013 1:38:15 AM
   Subject: Re: [Users] vdsm live migration errors in latest master
   
   On Tue, Sep 24, 2013 at 12:04:14PM -0400, Federico Simoncelli wrote:
- Original Message -
 From: Dan Kenigsberg dan...@redhat.com
 To: Dead Horse deadhorseconsult...@gmail.com
 Cc: users@ovirt.org users@ovirt.org,
 vdsm-de...@fedorahosted.org,
 fsimo...@redhat.com, aba...@redhat.com
 Sent: Tuesday, September 24, 2013 11:44:48 AM
 Subject: Re: [Users] vdsm live migration errors in latest master
 
 On Mon, Sep 23, 2013 at 04:05:34PM -0500, Dead Horse wrote:
  Seeing failed live migrations and these errors in the vdsm logs
  with
  latest
  VDSM/Engine master.
  Hosts are EL6.4
 
 Thanks for posting this report.
 
 The log is from the source of migration, right?
 Could you trace the history of the hosts of this VM? Could it be that
 it
 was started on an older version of vdsm (say ovirt-3.3.0) and then
 (due
 to migration or vdsm upgrade) got into a host with a much newer vdsm?
 
 Would you share the vmCreate (or vmMigrationCreate) line for this Vm
 in
  your log? It smells like an unintended regression of
 http://gerrit.ovirt.org/17714
 vm: extend shared property to support locking
 
 solving it may not be trivial, as we should not call
 _normalizeDriveSharedAttribute() automatically on migration
 destination,
  as it may well still be a part of a 3.3 clusterLevel.
 
 Also, migration from vdsm with extended shared property, to an ovirt
 3.3
 vdsm is going to explode (in a different way), since the destination
 does not expect the extended values.
 
 Federico, do we have a choice but to revert that patch, and use
 something like shared3 property instead?

I filed a bug at:

https://bugzilla.redhat.com/show_bug.cgi?id=1011608

A possible fix could be:

http://gerrit.ovirt.org/#/c/19509
   
   Beyond this, we must make sure that on Engine side, the extended shared
   values would be used only for clusterLevel 3.4 and above.
   
   Are the extended shared values already used by Engine?
  
  Yes. That's the idea. Actually, to be fair, the second case you mentioned
  (migrating from the extended shared property to an old vdsm) wouldn't have
  been possible, I suppose (the issue here is that Dead Horse has one or more
  hosts running on master instead of 3.3). The extended shared property would
  have appeared only in 3.4, and to allow the migration you would have had to
  upgrade all the nodes.
  
  But anyway, since we were also talking about a new 3.3.1 branch, I just went
  ahead and covered all cases.
 
 I do not see how the 3.3.1 branch is relevant to the discussion, as its
 Vdsm is NOT going to support clusterLevel 3.4.

That is what I was referring to.

If 3.3.1 were 3.3.0 + backported patches, then we just wouldn't backport the
extended shared attributes patch and that would be it. But from what I
understood, 3.3.1 will be rebased on master (where we do have the extended
shared attributes), and that is why we have to cover both migration
directions (instead of just the simple getattr case).

 Pardon my slowness, but would you confirm that this feature is to be
 used only on clusterLevel 3.4 and above? If so, I'm +2ing your patch.

Yes, the extended attributes will be used in the hosted engine and cluster
level 3.4.
But what the engine does is not relevant to +2ing correct vdsm patches.

-- 
Federico


Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-26 Thread Gianluca Cecchi
I was able to restart the engine and the two hosts.
Everything restarted again.
Now the effect of running the VM is that it remains in paused state:

- start VM (about 21:54 today)
it starts and goes into paused mode (arrow icon near VM)
From image
https://docs.google.com/file/d/0BwoPbcrMv8mvRXlaa19sdFpmQ0E/edit?usp=sharing

you see that apparently the execute action terminates at 21:54 but the
VM maintains paused state.

- if I try other actions on the same VM there is no message preventing me
from doing so, and it stays in paused mode; see the several actions I
attempted to solve the situation

- at 21:58 the host becomes unresponsive from the GUI, with no network ping
from the engine; if I go to its console I see the login prompt but I am not
able to log in...

- power off ovnode01
icon near VM becomes now question mark (?)

- power on ovnode01
vm goes into stop mode (red square)
ovnode01 joins again cluster

vdsm log in gzip format starting today before start of vm
https://docs.google.com/file/d/0BwoPbcrMv8mvXzY2eEcwR0VXazQ/edit?usp=sharing

engine.log in gzip format
https://docs.google.com/file/d/0BwoPbcrMv8mvU1RuLVRVYVZ0SXM/edit?usp=sharing


PS: at the moment no fencing action is set up. Could I use any fencing
agent for hosts virtualized inside VMware?

from a gluster point of view on ovnode01 under /var/log/glusterfs
[root@ovnode01 glusterfs]# ls -lrt
total 2008
drwxr-xr-x. 2 root root4096 Sep 25 00:05 bricks
-rw---. 1 root root   59038 Sep 26 22:09 nfs.log
-rw---. 1 root root   51992 Sep 26 22:09 glustershd.log
-rw---. 1 root root   40230 Sep 26 22:09
rhev-data-center-mnt-glusterSD-ovnode01:gv01.log
-rw---. 1 root root  422757 Sep 26 22:47 etc-glusterfs-glusterd.vol.log
-rw---. 1 root root 1449411 Sep 26 22:47 cli.log

In etc-glusterfs-glusterd.vol.log
I see several lines like this
[2013-09-26 20:19:53.450793] I
[glusterd-handler.c:1007:__glusterd_handle_cli_get_volume] 0-glusterd:
Received get vol req

qemu process:
qemu  4565 1  0 22:21 ?00:00:09
/usr/bin/qemu-system-x86_64 -machine accel=kvm -name C6 -S -machine
pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp
1,sockets=1,cores=1,threads=1 -uuid
409c5dbe-5e70-40de-bf73-46ef484ea2d7 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=19-3,serial=421F7170-C703-34E3-9628-4588D841F8B1,uuid=409c5dbe-5e70-40de-bf73-46ef484ea2d7
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/C6.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2013-09-26T20:21:00,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
file=/rhev/data-center/mnt/ovirt.localdomain.local:_var_lib_exports_iso/6e80607d-5437-4fc5-b73c-66794f6381e0/images/----/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
-drive 
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
-netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:10,bus=pci.0,addr=0x3,bootindex=3
-chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.com.redhat.rhevm.vdsm,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.org.qemu.guest_agent.0,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice 
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=67108864 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7

The VM shows as started in the GUI events but its icon remains paused.
See the image:
https://docs.google.com/file/d/0BwoPbcrMv8mvZ1RnUkg4aVhlckk/edit?usp=sharing

Gianluca


Re: [Users] Disk state - Illegal?

2013-09-26 Thread Dan Ferris

I was off today, so I just saw this.  I will try it out tomorrow.

Thanks!

Dan

On 9/25/2013 8:53 PM, Andrew Lau wrote:

I noticed that too, I wasn't sure if it was a bug or just how I had
set up my NFS share.

There were three steps I did to remove the disk images; I'm sure there's
an easier solution:

I found the easiest way (graphically) was to go to
https://ovirtengine/api/disks and do a search for the illegal disk.
Append the extra ID (e.g. disk href=/api/disks/lk342-dfsdf...)
to your URL; this will give you your image ID.

Go to your storage share:
cd /data/storage-id/master/vms/storage-id
grep -ir 'vmname' *
You'll find the image-id reference here too.

Then the image you will want to remove is in
/data/storage-id/images/image-id. I assume you could safely remove this
whole folder if you wanted to delete the disk.

To remove the illegal state I did it through the API: again with the URL
above (https://ovirtengine/disks/disk-id), send a DELETE using HTTP/curl.

Again, this was a poor man's solution but it worked for me.

On Thu, Sep 26, 2013 at 4:04 AM, Dan Ferris
dfer...@prometheusresearch.com
mailto:dfer...@prometheusresearch.comwrote:

Hi,

I have another hopefully simple question.

One VM that I am trying to remove says that its disk state is
illegal and when I try to remove the disk it says that it failed
to initiate removing the disk.

Is there an easy way to get rid of these illegal disk images?

Dan


