Re: [ovirt-users] oVirt 3.5.4.2-1 - Snapshot Failure

2015-09-18 Thread Greg Padgett

On 09/18/2015 04:55 AM, Christian Rebel wrote:

Dear all,

I upgraded to 3.5.4.2-1.el7.centos, but I still have the problem that I
can't create a snapshot!

Below some parts of the vdsm logs, does anyone have an idea to fix it,
please help...


Hi Christian, can you attach the full engine and vdsm log?  They may 
provide further clues about what went wrong.



###

167fb52d-0a51-4fea-9517-c668933ef4e2::ERROR::2015-09-18
09:33:33,957::task::866::Storage.TaskManager.Task::(_setError)
Task=`167fb52d-0a51-4fea-9517-c668933ef4e2`::Unexpected error

efb2447e-db4c-4dbd-84e4-5f8039949000::ERROR::2015-09-18
09:33:34,433::task::866::Storage.TaskManager.Task::(_setError)
Task=`efb2447e-db4c-4dbd-84e4-5f8039949000`::Unexpected error

167fb52d-0a51-4fea-9517-c668933ef4e2::ERROR::2015-09-18
09:33:34,802::task::866::Storage.TaskManager.Task::(_setError)
Task=`167fb52d-0a51-4fea-9517-c668933ef4e2`::Unexpected error

efb2447e-db4c-4dbd-84e4-5f8039949000::ERROR::2015-09-18
09:33:34,987::task::866::Storage.TaskManager.Task::(_setError)
Task=`efb2447e-db4c-4dbd-84e4-5f8039949000`::Unexpected error

VolumeDoesNotExist: Volume does not exist:
('36fd2e26-31a3-4284-8193-abd11c01a0f4',)

VolumeDoesNotExist: Volume does not exist:
('786e719d-7f54-4501-9e8d-688c1e482309',)

167fb52d-0a51-4fea-9517-c668933ef4e2::ERROR::2015-09-18
09:33:32,883::volume::264::Storage.Volume::(clone) cannot clone image
6281b597-020d-4ea7-a954-bb798a0ca4f1 volume
2a2015a1-f62c-4e32-8b04-77ece2ba4cc1 to
/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/6281b597-020d-4ea7-a954-bb798a0ca4f1/4e522ef0-279f-4c02-b581-c233d666eee1

efb2447e-db4c-4dbd-84e4-5f8039949000::ERROR::2015-09-18
09:33:34,375::volume::264::Storage.Volume::(clone) cannot clone image
e7e99288-ad83-406e-9cb6-7a5aa443de9b volume
c5762dec-d9d1-4842-84d1-05896d4d27fb to
/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/e7e99288-ad83-406e-9cb6-7a5aa443de9b/c60aa316-a16a-413e-81f3-e24858c3c7a7

CannotCloneVolume: Cannot clone volume:
u"src=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/6281b597-020d-4ea7-a954-bb798a0ca4f1/2a2015a1-f62c-4e32-8b04-77ece2ba4cc1,
dst=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/6281b597-020d-4ea7-a954-bb798a0ca4f1/4e522ef0-279f-4c02-b581-c233d666eee1:
Volume does not exist: ('36fd2e26-31a3-4284-8193-abd11c01a0f4',)"

CannotCloneVolume: Cannot clone volume:
u"src=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/e7e99288-ad83-406e-9cb6-7a5aa443de9b/c5762dec-d9d1-4842-84d1-05896d4d27fb,
dst=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/e7e99288-ad83-406e-9cb6-7a5aa443de9b/c60aa316-a16a-413e-81f3-e24858c3c7a7:
Volume does not exist: ('786e719d-7f54-4501-9e8d-688c1e482309',)"

VolumeCreationError: Error creating a new volume: (u'Volume creation
4e522ef0-279f-4c02-b581-c233d666eee1 failed: Cannot clone volume:
u"src=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/6281b597-020d-4ea7-a954-bb798a0ca4f1/2a2015a1-f62c-4e32-8b04-77ece2ba4cc1,
dst=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/6281b597-020d-4ea7-a954-bb798a0ca4f1/4e522ef0-279f-4c02-b581-c233d666eee1:
Volume does not exist: (\'36fd2e26-31a3-4284-8193-abd11c01a0f4\',)"',)

VolumeCreationError: Error creating a new volume: (u'Volume creation
c60aa316-a16a-413e-81f3-e24858c3c7a7 failed: Cannot clone volume:
u"src=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/e7e99288-ad83-406e-9cb6-7a5aa443de9b/c5762dec-d9d1-4842-84d1-05896d4d27fb,
dst=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/e7e99288-ad83-406e-9cb6-7a5aa443de9b/c60aa316-a16a-413e-81f3-e24858c3c7a7:
Volume does not exist: (\'786e719d-7f54-4501-9e8d-688c1e482309\',)"',)

167fb52d-0a51-4fea-9517-c668933ef4e2::DEBUG::2015-09-18
09:33:33,959::task::919::Storage.TaskManager.Task::(_runJobs)
Task=`167fb52d-0a51-4fea-9517-c668933ef4e2`::aborting: Task is aborted:
'Error creating a new volume' - code 205

efb2447e-db4c-4dbd-84e4-5f8039949000::DEBUG::2015-09-18
09:33:34,440::task::919::Storage.TaskManager.Task::(_runJobs)
Task=`efb2447e-db4c-4dbd-84e4-5f8039949000`::aborting: Task is aborted:
'Error creating a new volume' - code 205



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





Re: [ovirt-users] Live Storage Migration

2015-09-18 Thread Greg Padgett

On 09/14/2015 05:20 AM, Markus Stockhausen wrote:

Hi,

somehow I lost track of whether live storage migration is possible.
We are using oVirt 3.5.4 + FC20 nodes (virt-preview - qemu 2.1.3)

From the WebUI I have the following possibilities:

1) disk without snapshot: VMs tab -> Disks -> Move: Button is active,
but it does not allow a migration. No selectable storage domain,
although we have 2 NFS systems. Gives warning hints about
"you are doing live migration, bla bla, ..."

2) disk with snapshot: VMs tab -> Disk -> Move: Button greyed out

3) BUT! Disks tab -> Move: Works! No hints about "live migration".
I do not dare to click go ...

While 1/2 might be consistent, they do not match 3. Maybe someone
can give a hint about what should work, what should not, and where we
might have an error.


Hi Markus,

Is this still relevant?  If so...

Live storage migration requires the following (from the code that
determines whether the "Move" buttons in cases 1/2 above are greyed out):

 - cluster and DC version must be at least 3.2
 - VM must be stateful and up or paused
 - disk to migrate must be the active layer (i.e. not a snapshot)
 - disk images must be 'OK'
Is it possible one of these changed for the VM(s) in question?

Thanks,
Greg


Thanks.

Markus



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





[ovirt-users] moving storage away from a single point of failure

2015-09-18 Thread Robert Story
Hi,

I'm running oVirt 3.5 in our lab, and currently I'm using NFS to a single
server. I'd like to move away from having a single point of failure.
Watching the mailing list, all the issues with gluster getting out of sync
and replica issues has me nervous about gluster, plus I just have 2
machines with lots of drive bays for storage. I've been reading about GFS2
and DRBD, and wanted opinions on if either is a good/bad idea, or to see if
there are other alternatives.

My oVirt setup is currently 5 nodes and about 25 VMs, might double in size
eventually, but probably won't get much bigger than that.


Thanks,

Robert

-- 
Senior Software Engineer @ Parsons


pgptbaK5zkgz4.pgp
Description: OpenPGP digital signature
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] FreeIPA

2015-09-18 Thread Alon Bar-Lev


- Original Message -
> From: supo...@logicworks.pt
> To: "users" 
> Sent: Friday, September 18, 2015 5:45:18 PM
> Subject: [ovirt-users] FreeIPA
> 
> Hi,
> 
> Is there any documentation about FreeIPA integration with oVirt 3.5 and how
> to configure it?
> 

Hi,

Please find documentation at [1][2].

Regards,
Alon Bar-Lev.

[1] http://www.ovirt.org/Features/AAA
[2] 
https://gerrit.ovirt.org/gitweb?p=ovirt-engine-extension-aaa-ldap.git;a=blob;f=README;hb=ovirt-engine-extension-aaa-ldap-1.0
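For orientation, the README in [2] drives the integration through properties files under /etc/ovirt-engine/aaa/. A minimal IPA profile looks roughly like the sketch below; the values are placeholders and the exact keys should be checked against the README for your version:

```ini
# /etc/ovirt-engine/aaa/ipa.properties -- illustrative sketch only
include = <ipa.properties>

vars.server = ipa.example.com
vars.user = uid=engine,cn=users,cn=accounts,dc=example,dc=com
vars.password = secret

pool.default.serverset.single.server = ${global:vars.server}
pool.default.auth.simple.bindDN = ${global:vars.user}
pool.default.auth.simple.password = ${global:vars.password}
```

A matching pair of authn/authz extension files under /etc/ovirt-engine/extensions.d/ and an engine restart complete the setup, per the README.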
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] FreeIPA

2015-09-18 Thread suporte
Hi, 

Is there any documentation about FreeIPA integration with oVirt 3.5 and how to 
configure it? 

Thanks 

Jose 

-- 

Jose Ferradeira 
http://www.logicworks.pt 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] required info

2015-09-18 Thread Budur Nagaraju
Hi,

Need some info on how to re-register the VMs on a new Engine if the
old Engine got corrupted.

Thanks,
Nagaraju
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Systemd-Script to put the node in "maintenance" on shutdown

2015-09-18 Thread Luca Bertoncello
Hi again,

I'm trying to write a systemd script (for CentOS 7) to automatically
put the host into "maintenance" on shutdown and to activate it again after boot.
I wrote a Python script that does that, and it works: I can start it and
see the host go into "maintenance" with all VMs migrated.

Unfortunately I can't get this script called on shutdown/reboot in a way that
waits until all VMs are migrated and the host is in maintenance.

Here my script:

[Unit]
Description=oVirt interface for managing host
After=remote-fs.target vdsmd.service multipathd.service libvirtd.service 
time-sync.target iscsid.service rpcbind.service supervdsmd.service 
sanlock.service vdsm-network.service
Wants=remote-fs.target vdsmd.service multipathd.service libvirtd.service 
time-sync.target iscsid.service rpcbind.service supervdsmd.service 
sanlock.service vdsm-network.service

[Service]
Type=simple
RemainAfterExit=yes
ExecStart=/usr/local/bin/ovirt-maintenance.sh active
ExecStop=/usr/local/bin/ovirt-maintenance.sh maintenance
KillMode=none

[Install]
WantedBy=multi-user.target
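Two likely problems with the unit above: at shutdown, systemd may tear down vdsmd and the network while ExecStop is still running, and the default stop timeout (90 s) is far too short for migrating all VMs. A sketch of the usual tweaks follows; this is an assumption-laden starting point, not a verified unit:

```ini
# Sketch only. Key points: Type=oneshot with RemainAfterExit so "started"
# means "host activated"; TimeoutStopSec=0 (no limit) so migration can
# finish; After= ordering means systemd stops this unit BEFORE it stops
# vdsmd/network at shutdown (stop order is the reverse of start order).
[Unit]
Description=oVirt interface for managing host
After=network-online.target vdsmd.service libvirtd.service
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutStopSec=0
ExecStart=/usr/local/bin/ovirt-maintenance.sh active
ExecStop=/usr/local/bin/ovirt-maintenance.sh maintenance

[Install]
WantedBy=multi-user.target
```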

Could someone help me and say what I'm doing wrong?

Thanks a lot

Kind regards

Luca Bertoncello

-- 
Visit our websites:

www.queo.biz        Agency for brand management and communication
www.queoflow.com    IT consulting and custom software development

Luca Bertoncello
Administrator
Phone:  +49 351 21 30 38 0
Fax:    +49 351 21 30 38 99
E-Mail: l.bertonce...@queo-group.com

queo GmbH
Tharandter Str. 13
01159 Dresden
Registered office: Dresden
Commercial register: Amtsgericht Dresden HRB 22352
Managing directors: Rüdiger Henke, André Pinkert
VAT ID: DE234220077
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Required help in installation of Guest Tools

2015-09-18 Thread Simone Tiraboschi
On Fri, Sep 18, 2015 at 3:16 PM, Budur Nagaraju  wrote:

> HI
>
> I need help installing the guest tools. Can someone help me get
> the ISO image?
>

Fedora/CentOS/RHEL VMs:
   sudo yum install ovirt-guest-agent-common
   sudo systemctl enable ovirt-guest-agent.service && sudo systemctl start
ovirt-guest-agent.service

Windows guest tools for Windows VMs:
  oVirt 3.5:
 yum install ovirt-guest-tools
  oVirt 3.6:
 yum install ovirt-guest-tools-iso
This gets you the ISO; then import it into your ISO storage domain and
attach it to your VMs to run the installer.




>
> Thanks,
> Nagaraju
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Required help in installation of Guest Tools

2015-09-18 Thread Budur Nagaraju
Hi,

I need help installing the guest tools. Can someone help me get
the ISO image?

Thanks,
Nagaraju
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Default user id password

2015-09-18 Thread Budur Nagaraju
Hi Alon,

It accepted the engine admin password.

Thanks
Nagaraju
On Sep 18, 2015 4:39 PM, "Alon Bar-Lev"  wrote:

> Please try:
> 1. blank.
> 2. the engine admin password.
>
> I know recently sysprep default was modified to blank.
>
> - Original Message -
> > From: "Budur Nagaraju" 
> > To: users@ovirt.org
> > Sent: Friday, September 18, 2015 2:08:25 PM
> > Subject: [ovirt-users] Default user id password
> >
> > HI
> >
> > After installing windows 7 OS in ovirt ,by default its creating the user
> id
> > as "user" ,May I know the default password for the vms ?
> >
> > Thanks,
> > Nagaraju
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Default user id password

2015-09-18 Thread Alon Bar-Lev
Please try:
1. blank.
2. the engine admin password.

I know recently sysprep default was modified to blank.

- Original Message -
> From: "Budur Nagaraju" 
> To: users@ovirt.org
> Sent: Friday, September 18, 2015 2:08:25 PM
> Subject: [ovirt-users] Default user id password
> 
> HI
> 
> After installing windows 7 OS in ovirt ,by default its creating the user id
> as "user" ,May I know the default password for the vms ?
> 
> Thanks,
> Nagaraju
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Default user id password

2015-09-18 Thread Budur Nagaraju
Hi,

After installing a Windows 7 OS in oVirt, by default it creates the user id
"user". May I know the default password for the VMs?

Thanks,
Nagaraju
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 3.6 webadmin vm attributes empty

2015-09-18 Thread Tomas Jelinek
Could you please have a look whether there are any exceptions in the
browser's JavaScript console?

- Original Message -
> From: "Marc Werner" 
> To: "Amador Pahim" 
> Cc: "users@ovirt.org" 
> Sent: Thursday, September 17, 2015 1:35:51 PM
> Subject: Re: [ovirt-users] ovirt 3.6 webadmin vm attributes empty
> 
> 
> 
> Datacenter is up.
> 
> 
> 
> 
> 
> 
> From: Amador Pahim [mailto:apa...@redhat.com]
> Sent: Wednesday, 16 September 2015 15:35
> To: Marc Werner; 'users@ovirt.org'
> Subject: Re: [ovirt-users] ovirt 3.6 webadmin vm attributes empty
> 
> 
> 
> 
> 
> Is the DataCenter Up?
> 
> On 09/15/2015 07:21 AM, Marc Werner wrote:
> 
> 
> 
> 
> Hi
> 
> If I try to edit a VM, most of the dialog is empty and I can't choose anything.
> 
> Is that a known bug?
> 
> 
> 
> 
> 
> Best regards
> 
> Marc
> 
> 
> 
> 
> 
> ovirt-engine.noarch 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6 @ovirt-3.6
> 
> ovirt-engine-backend.noarch 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6
> @ovirt-3.6
> 
> ovirt-engine-cli.noarch 3.6.0.1-0.1.20150821.gitac5082d.el6 @ovirt-3.6
> 
> ovirt-engine-dbscripts.noarch 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6
> @ovirt-3.6
> 
> ovirt-engine-extension-aaa-jdbc.noarch
> 1.0.0-0.0.master.20150831142838.git4d9c713.el6 @ovirt-3.6
> 
> ovirt-engine-extensions-api-impl.noarch
> 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6 @ovirt-3.6
> 
> ovirt-engine-jboss-as.x86_64 7.1.1-1.el6 @ovirt-3.5-pre
> 
> ovirt-engine-lib.noarch 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6
> @ovirt-3.6
> 
> ovirt-engine-restapi.noarch 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6
> @ovirt-3.6
> 
> ovirt-engine-sdk-python.noarch 3.6.0.1-0.1.20150821.gitc8ddcd8.el6 @ovirt-3.6
> 
> ovirt-engine-setup.noarch 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6
> @ovirt-3.6
> 
> ovirt-engine-setup-base.noarch 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6
> @ovirt-3.6
> 
> ovirt-engine-setup-plugin-ovirt-engine.noarch
> 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6 @ovirt-3.6
> 
> ovirt-engine-setup-plugin-ovirt-engine-common.noarch
> 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6 @ovirt-3.6
> 
> ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch
> 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6 @ovirt-3.6
> 
> ovirt-engine-setup-plugin-websocket-proxy.noarch
> 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6 @ovirt-3.6
> 
> ovirt-engine-tools.noarch 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6
> @ovirt-3.6
> 
> ovirt-engine-userportal.noarch 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6
> @ovirt-3.6
> 
> ovirt-engine-vmconsole-proxy-helper.noarch
> 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6 @ovirt-3.6
> 
> ovirt-engine-webadmin-portal.noarch
> 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6 @ovirt-3.6
> 
> ovirt-engine-websocket-proxy.noarch
> 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6 @ovirt-3.6
> 
> ovirt-engine-wildfly.x86_64 8.2.0-1.el6 @ovirt-3.6
> 
> ovirt-engine-wildfly-overlay.noarch 001-2.el6 @ovirt-3.6
> 
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster

2015-09-18 Thread Alex Crow



Well, right now my problem is to understand how I can "simulate" this power
management. Maybe in the future, if we decide that oVirt is the right solution for us,
we will buy some power management system (APC or similar).
But for now, for the experiments, I can't ask my boss to pay >500€ for a device
that maybe we will not use...


Given the outlay you will pay for your server kit regardless of whether you
use oVirt or not, EUR 500 is nothing. As I said, what about eBay? I've just
looked and there's a 24-outlet one for GBP 100, and an 8-port one for
$44, both Buy It Now! And that's just a UK search.




By the way: I find it really funny that I can't define a cluster with
automatic migration of the VMs without power management. OK, I know what
can happen if the host is not really dead, but at least allowing that, with
warnings and so on, would be nice...


Live migration works fine without power management. If a host gets too 
busy, VMs will migrate away from it.


Alex



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 3.5.4.2-1 - Snapshot Failure

2015-09-18 Thread Christian Rebel
Dear all,

 

I upgraded to 3.5.4.2-1.el7.centos, but I still have the problem that I
can't create a snapshot!

Below some parts of the vdsm logs, does anyone have an idea to fix it,
please help...

 

###

 

167fb52d-0a51-4fea-9517-c668933ef4e2::ERROR::2015-09-18
09:33:33,957::task::866::Storage.TaskManager.Task::(_setError)
Task=`167fb52d-0a51-4fea-9517-c668933ef4e2`::Unexpected error

efb2447e-db4c-4dbd-84e4-5f8039949000::ERROR::2015-09-18
09:33:34,433::task::866::Storage.TaskManager.Task::(_setError)
Task=`efb2447e-db4c-4dbd-84e4-5f8039949000`::Unexpected error

167fb52d-0a51-4fea-9517-c668933ef4e2::ERROR::2015-09-18
09:33:34,802::task::866::Storage.TaskManager.Task::(_setError)
Task=`167fb52d-0a51-4fea-9517-c668933ef4e2`::Unexpected error

efb2447e-db4c-4dbd-84e4-5f8039949000::ERROR::2015-09-18
09:33:34,987::task::866::Storage.TaskManager.Task::(_setError)
Task=`efb2447e-db4c-4dbd-84e4-5f8039949000`::Unexpected error

 

VolumeDoesNotExist: Volume does not exist:
('36fd2e26-31a3-4284-8193-abd11c01a0f4',)

VolumeDoesNotExist: Volume does not exist:
('786e719d-7f54-4501-9e8d-688c1e482309',)

 

167fb52d-0a51-4fea-9517-c668933ef4e2::ERROR::2015-09-18
09:33:32,883::volume::264::Storage.Volume::(clone) cannot clone image
6281b597-020d-4ea7-a954-bb798a0ca4f1 volume
2a2015a1-f62c-4e32-8b04-77ece2ba4cc1 to
/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/6281b597-020d-4ea7-a954-bb798a0ca4f1/4e522ef0-279f-4c02-b581-c233d666eee1

efb2447e-db4c-4dbd-84e4-5f8039949000::ERROR::2015-09-18
09:33:34,375::volume::264::Storage.Volume::(clone) cannot clone image
e7e99288-ad83-406e-9cb6-7a5aa443de9b volume
c5762dec-d9d1-4842-84d1-05896d4d27fb to
/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/e7e99288-ad83-406e-9cb6-7a5aa443de9b/c60aa316-a16a-413e-81f3-e24858c3c7a7

 

CannotCloneVolume: Cannot clone volume:
u"src=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/6281b597-020d-4ea7-a954-bb798a0ca4f1/2a2015a1-f62c-4e32-8b04-77ece2ba4cc1,
dst=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/6281b597-020d-4ea7-a954-bb798a0ca4f1/4e522ef0-279f-4c02-b581-c233d666eee1:
Volume does not exist: ('36fd2e26-31a3-4284-8193-abd11c01a0f4',)"

CannotCloneVolume: Cannot clone volume:
u"src=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/e7e99288-ad83-406e-9cb6-7a5aa443de9b/c5762dec-d9d1-4842-84d1-05896d4d27fb,
dst=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/e7e99288-ad83-406e-9cb6-7a5aa443de9b/c60aa316-a16a-413e-81f3-e24858c3c7a7:
Volume does not exist: ('786e719d-7f54-4501-9e8d-688c1e482309',)"

 

VolumeCreationError: Error creating a new volume: (u'Volume creation
4e522ef0-279f-4c02-b581-c233d666eee1 failed: Cannot clone volume:
u"src=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/6281b597-020d-4ea7-a954-bb798a0ca4f1/2a2015a1-f62c-4e32-8b04-77ece2ba4cc1,
dst=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/6281b597-020d-4ea7-a954-bb798a0ca4f1/4e522ef0-279f-4c02-b581-c233d666eee1:
Volume does not exist: (\'36fd2e26-31a3-4284-8193-abd11c01a0f4\',)"',)

VolumeCreationError: Error creating a new volume: (u'Volume creation
c60aa316-a16a-413e-81f3-e24858c3c7a7 failed: Cannot clone volume:
u"src=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/e7e99288-ad83-406e-9cb6-7a5aa443de9b/c5762dec-d9d1-4842-84d1-05896d4d27fb,
dst=/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/images/e7e99288-ad83-406e-9cb6-7a5aa443de9b/c60aa316-a16a-413e-81f3-e24858c3c7a7:
Volume does not exist: (\'786e719d-7f54-4501-9e8d-688c1e482309\',)"',)

 

167fb52d-0a51-4fea-9517-c668933ef4e2::DEBUG::2015-09-18
09:33:33,959::task::919::Storage.TaskManager.Task::(_runJobs)
Task=`167fb52d-0a51-4fea-9517-c668933ef4e2`::aborting: Task is aborted:
'Error creating a new volume' - code 205

efb2447e-db4c-4dbd-84e4-5f8039949000::DEBUG::2015-09-18
09:33:34,440::task::919::Storage.TaskManager.Task::(_runJobs)
Task=`efb2447e-db4c-4dbd-84e4-5f8039949000`::aborting: Task is aborted:
'Error creating a new volume' - code 205

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster

2015-09-18 Thread Luca Bertoncello
> Really? in the main page you have also:
> High Availability Considerations
> A highly available host requires a power management device and its fencing 
> parameters configured. In addition, for a virtual machine to be highly 
> available when its host becomes non-operational, it needs to be started on
> another available host in the cluster. To enable the migration of highly 
> available virtual machines:
> • Power management must be configured for the hosts running the highly 
> available virtual machines.
> • The host running the highly available virtual machine must be part of a 
> cluster which has other available hosts.
> • The destination host must be running.
> • The source and destination host must have access to the data domain on 
> which the virtual machine resides.
> • The source and destination host must have access to the same virtual 
> networks and VLANs.
> • There must be enough CPUs on the destination host that are not in use to 
> support the virtual machine's requirements.
> • There must be enough RAM on the destination host that is not in use to 
> support the virtual machine's requirements.
>
> What in particular is not clear and need better explanation?

Well, right now my problem is to understand how I can "simulate" this power
management. Maybe in the future, if we decide that oVirt is the right solution
for us, we will buy some power management system (APC or similar).
But for now, for the experiments, I can't ask my boss to pay >500€ for a device
that maybe we will not use...

By the way: I find it really funny that I can't define a cluster with
automatic migration of the VMs without power management. OK, I know what
can happen if the host is not really dead, but at least allowing that, with
warnings and so on, would be nice...

Regards

Kind regards

Luca Bertoncello

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster

2015-09-18 Thread Alex Crow



On 18/09/15 07:59, Luca Bertoncello wrote:

Hi Alex


2) My question was: "what can I do, so that in case of Kernel Panic or similar, the 
VM will be migrated (live or not) to another host?"

You would make the VMs HA and acquire a fencing solution.

What do you mean now? Have two VMs and build a cluster? That is not what we
want...
If possible, I'd like to have several hosts AS A CLUSTER, with migration of the
VMs between the nodes...
I think oVirt already does that, since I have to create clusters. For me a cluster is not
just "more nodes with the same CPU", but also something with load balancing or
high availability...


No, use the built-in HA in oVirt, which requires a fencing solution.
This means that if a host goes down, the VMs on it will autostart on
another host in the cluster. Yes, it's a cluster of *hosts*. In fact, in
the oVirt interface you can see "clusters" in the tree view!


We use the built-in oVirt HA for >200 VMs over 6 hosts and it works just
fine. We lost a host fairly recently, and only a couple of people (out of
over 300) who use the services of a single VM noticed.


oVirt also supports balancing the VM load across hosts; that is part of
the cluster policy. You can also set affinity and anti-affinity of VMs.
You can even have a host migrate off its VMs when they are idle and
power itself down to save on your electricity bill. When load on the
other hosts reaches a limit, the host will be powered back on and VMs
will start migrating to it.



3) I'd like to have a shutdown script on the host that puts the host into
Maintenance and waits until it's done, so that I can just shut down or reboot it
without any other action. Is that possible? It would help to manage a power
failure, too, assuming that other hosts have a better UPS (which is possible).

You could probably use the REST API on the oVirt Engine for that. But it might
be better to have a highly available machine (VM or not) running something like
Nagios or Icinga, which would perform the monitoring of your hosts and connect
to the REST API to perform maintenance and shutdown. You might also consider
a UPS service like NUT (unless you're already using it).

Well, I already use NUT and we have Icinga monitoring the hosts.
But I can't understand what you mean and (more important!) how can I do it...

I checked the REST API and the CLI. I can write a little script to put a host
into Maintenance or activate it again; that's not the problem.
The problem is to have something start this script automatically...

Any suggestions? Icinga will not be the solution, since it checks the hosts
every 5 minutes or so, but we need this script started within seconds...
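The REST call itself is small: moving a host into maintenance is, in the oVirt 3.5 REST API, a POST of an empty <action/> to the host's deactivate action. A hypothetical helper that just builds that request (the endpoint layout is assumed from the API's conventions; verify against your engine's /api?rsdl before relying on it):

```python
def host_action_request(engine_url, host_id, action):
    """Build (url, xml_body) for a host action such as 'deactivate'
    (enter maintenance) or 'activate'. Illustrative sketch only; the
    actual request still needs auth headers and Content-Type: application/xml.
    """
    url = "%s/api/hosts/%s/%s" % (engine_url.rstrip("/"), host_id, action)
    return url, "<action/>"
```

The returned pair can then be fed to curl or urllib from whatever triggers the shutdown sequence.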

Thanks

Kind regards

Luca Bertoncello


The rest is out of scope for oVirt. Icinga was just a suggestion; you
don't have to use it, and you can change the check interval for a check
to whatever you want. I was just stating that you should be wary of
running your checks/scripts from a host if what you are trying to
trigger on is that host going down or having issues.


And if 5 minutes is too long maybe you need a bigger UPS? We have 1 hour 
battery runtime on ours at the moment (think it's a 160kVA central UPS 
offhand).


Alex





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster

2015-09-18 Thread Gianluca Cecchi
On Fri, Sep 18, 2015 at 10:22 AM, Luca Bertoncello <
l.bertonce...@queo-group.com> wrote:

> Hello Gianluca,
>
> > You should also RTFM sooner or later ;-)
> > http://www.ovirt.org/OVirt_Administration_Guide
> >
> > In particular for this HA related concepts:
> >
> http://www.ovirt.org/OVirt_Administration_Guide#Virtual_Machine_High_Availability_Settings_Explained
> > and
> > http://www.ovirt.org/OVirt_Administration_Guide#Host_Resilience
>
> Really, I don't think this documentation is good...
> I read this page many times and try to understand how the program works,
> but I have many doubt...
>


The documentation pages are contributed, so you are encouraged to ask for a
login and modify them once you have clarified the related aspects, so that
the learning curve can be shorter for others.
See the "Log in / create account" link at the bottom of the page.



>
> In particular, the link you sent about HA: of course I read them and I
> configured the VMs with these settings, but really, I can't understand how
> can I set more host as a Cluster allowing automatically migration of the
> VMs when a host dies...
>
> Regards
>
>
>
Really? In the main page you also have:
High Availability Considerations

A highly available host requires a power management device and its fencing
parameters configured. In addition, for a virtual machine to be highly
available when its host becomes non-operational, it needs to be started on
another available host in the cluster. To enable the migration of highly
available virtual machines:

   - Power management must be configured for the hosts running the highly
   available virtual machines.
   - The host running the highly available virtual machine must be part of
   a cluster which has other available hosts.
   - The destination host must be running.
   - The source and destination host must have access to the data domain on
   which the virtual machine resides.
   - The source and destination host must have access to the same virtual
   networks and VLANs.
   - There must be enough CPUs on the destination host that are not in use
   to support the virtual machine's requirements.
   - There must be enough RAM on the destination host that is not in use to
   support the virtual machine's requirements.



What in particular is not clear and needs a better explanation?

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster

2015-09-18 Thread Luca Bertoncello
Hello Gianluca,

> You should also RTFM sooner or later ;-)
> http://www.ovirt.org/OVirt_Administration_Guide
>
> In particular for this HA related concepts:
> http://www.ovirt.org/OVirt_Administration_Guide#Virtual_Machine_High_Availability_Settings_Explained
> and
> http://www.ovirt.org/OVirt_Administration_Guide#Host_Resilience

Really, I don't think this documentation is good...
I read this page many times and tried to understand how the program works,
but I still have many doubts...

In particular, the links you sent about HA: of course I read them and I
configured the VMs with those settings, but really, I can't understand how I
can set up more hosts as a cluster, allowing automatic migration of the VMs
when a host dies...

Regards

Best regards

Luca Bertoncello

-- 
Visit our websites:

www.queo.biz        Agency for brand management and communication
www.queoflow.com    IT consulting and custom software development

Luca Bertoncello
Administrator
Phone: +49 351 21 30 38 0
Fax: +49 351 21 30 38 99
E-Mail: l.bertonce...@queo-group.com

queo GmbH
Tharandter Str. 13
01159 Dresden
Registered office: Dresden
Commercial register: Amtsgericht Dresden HRB 22352
Managing directors: Rüdiger Henke, André Pinkert
VAT ID: DE234220077


Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster

2015-09-18 Thread Gianluca Cecchi
On Fri, Sep 18, 2015 at 8:59 AM, Luca Bertoncello <
l.bertonce...@queo-group.com> wrote:

> Hi Alex
>
> > 2) My question was: "what can I do, so that in case of Kernel Panic or
> > similar, the VM will be migrated (live or not) to another host?"
> >
> > You would make the VMs HA and acquire a fencing solution.
>
> What do you mean now? Have two VMs and build a cluster? That is not what we
> want...
> If possible, I'd like to have several hosts AS A CLUSTER with migration of
> the VMs between the nodes...
> I think oVirt already does that, since I have to create clusters. For me a
> cluster is not just "more nodes with the same CPU", but also something with
> load balancing or high availability...
>


You should also RTFM sooner or later ;-)
http://www.ovirt.org/OVirt_Administration_Guide

In particular for this HA related concepts:
http://www.ovirt.org/OVirt_Administration_Guide#Virtual_Machine_High_Availability_Settings_Explained
and
http://www.ovirt.org/OVirt_Administration_Guide#Host_Resilience

HTH,
Gianluca


Re: [ovirt-users] Web console

2015-09-18 Thread Christian Hailer
You need virt-viewer:

 

http://virt-manager.org/download/

 

Regards,

Christian

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of
Budur Nagaraju
Sent: Friday, 18 September 2015 09:38
To: users@ovirt.org
Subject: [ovirt-users] Web console

 

Hi,

I am unable to access the web console of the VM; it asks for a console.vv file.

Does a plugin need to be installed? I am using the latest version of Firefox.

Thanks,

Nagaraju



[ovirt-users] Web console

2015-09-18 Thread Budur Nagaraju
Hi,

I am unable to access the web console of the VM; it asks for a console.vv file.
Does a plugin need to be installed? I am using the latest version of Firefox.

Thanks,
Nagaraju


Re: [ovirt-users] Storage and ISO Domain and Engine VM

2015-09-18 Thread Simone Tiraboschi
On Thu, Sep 17, 2015 at 9:06 PM, Joop  wrote:

> On 16-9-2015 15:44, Richard Neuboeck wrote:
> > There is one obvious error in the web UI that I've overlooked so far.
> > 'Events' lists the following:
> >
> > 'The Hosted Engine Storage Domain doesn't exist. It should be
> > imported into the setup.'
> >
> > Except for finding the GitHub link to the source code, googling didn't
> > reveal any hints on what to do to remedy this problem.
>
> Go to the Storage tab and Import Domain, fill in the required fields and
> the path to your hosted-engine storage domain. Things should proceed from
> there, I hope.
> This info is somewhere on the wiki in the beta release notes.
>

That is true, but we are still facing an issue there:
 https://bugzilla.redhat.com/1261996

We hope it will be solved for the next beta.


> Joop
>
>
>


Re: [ovirt-users] Getting Error while Attaching ISO path

2015-09-18 Thread Christian Hailer
Hi,

 

Could you please check your NFS export? Did you export the share with these 
options:

 

/PATH_TO_DIR/iso  
XXX.XXX.XXX.XXX/YY(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

 

?
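(For reference, a complete /etc/exports entry of that shape, with a placeholder path and client network, would look like the fragment below; anonuid=36/anongid=36 map anonymous access to the vdsm user and kvm group that oVirt expects.)

```
# /etc/exports -- example entry for an oVirt ISO domain
# (path and client network are placeholders; adjust to your setup)
/srv/iso  192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
```

After editing /etc/exports, re-export with `exportfs -ra` and verify the active options with `exportfs -v`.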

 

Regards, Christian

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of
Budur Nagaraju
Sent: Friday, 18 September 2015 09:25
To: users@ovirt.org
Subject: [ovirt-users] Getting Error while Attaching ISO path

 

Hi,

I am unable to attach the ISO domain to the newly created data center; the logs are below:

[root@cstlb2 ~]# tail -f /var/log/ovirt-engine/engine.log 
2015-09-18 12:50:08,493 ERROR 
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
(org.ovirt.thread.pool-8-thread-8) [303cb8a5] Transaction rolled-back for 
command: org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand.
2015-09-18 12:50:08,494 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] 
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] START, 
AttachStorageDomainVDSCommand( storagePoolId = 
92328f51-9152-4730-a558-8c1fd0b4e076, ignoreFailoverLimit = false, 
storageDomainId = 263f7911-c5a2-495a-92c7-ce765b65a5b3), log id: c271dba
2015-09-18 12:50:08,724 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] 
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Failed in AttachStorageDomainVDS 
method
2015-09-18 12:50:08,728 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] 
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Command 
AttachStorageDomainVDSCommand( storagePoolId = 
92328f51-9152-4730-a558-8c1fd0b4e076, ignoreFailoverLimit = false, 
storageDomainId = 263f7911-c5a2-495a-92c7-ce765b65a5b3) execution failed. 
Exception: IrsOperationFailedNoFailoverException: IRSGenericException: 
IRSErrorException: Failed to AttachStorageDomainVDS, error = Storage domain 
does not exist: (u'263f7911-c5a2-495a-92c7-ce765b65a5b3',), code = 358
2015-09-18 12:50:08,729 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] 
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] FINISH, 
AttachStorageDomainVDSCommand, log id: c271dba
2015-09-18 12:50:08,730 ERROR 
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Command 
org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand throw Vdc 
Bll exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailoverException:
 IRSGenericException: IRSErrorException: Failed to AttachStorageDomainVDS, 
error = Storage domain does not exist: 
(u'263f7911-c5a2-495a-92c7-ce765b65a5b3',), code = 358 (Failed with error 
StorageDomainDoesNotExist and code 358)
2015-09-18 12:50:08,733 INFO  
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Command 
[id=fdb5bcec-5aec-4ca1-a4f1-2ca3895e30f4]: Compensating NEW_ENTITY_ID of 
org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: 
storagePoolId = 92328f51-9152-4730-a558-8c1fd0b4e076, storageId = 
263f7911-c5a2-495a-92c7-ce765b65a5b3.
2015-09-18 12:50:08,736 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Correlation ID: 59c91a6d, Job 
ID: a31fb420-9106-49f6-bf9f-78a87cca4f0a, Call Stack: null, Custom Event ID: 
-1, Message: Failed to attach Storage Domain ISO_DOMAIN to Data Center Pulse. 
(User: admin@internal)
2015-09-18 12:50:08,739 INFO  
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Lock freed to object EngineLock 
[exclusiveLocks= key: 263f7911-c5a2-495a-92c7-ce765b65a5b3 value: STORAGE
, sharedLocks= ]
2015-09-18 12:50:53,060 INFO  
[org.ovirt.engine.core.bll.storage.GetStorageDomainsWithAttachedStoragePoolGuidQuery]
 (ajp--127.0.0.1-8702-1) vds id b8804829-6107-4486-8c98-5ee4c0f4e797 was chosen 
to fetch the Storage domain info
2015-09-18 12:50:53,103 INFO  
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(ajp--127.0.0.1-8702-2) [1f9cdc9a] Lock Acquired to object EngineLock 
[exclusiveLocks= key: 263f7911-c5a2-495a-92c7-ce765b65a5b3 value: STORAGE
, sharedLocks= ]
2015-09-18 12:50:53,118 INFO  
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] 
(org.ovirt.thread.pool-8-thread-46) [1f9cdc9a] Running command: 
AttachStorageDomainToPoolCommand internal: false. Entities affected :  ID: 
263f7911-c5a2-495a-92c7-ce765b65a5b3 Type: StorageAction group 
MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2015-09-18 12:50:53,123 INFO  
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
(org.ovirt.thread.pool-8-thread-6) [58cbe80] Running command: 
ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
aaa0----123456789aaa Type: SystemAction group 
CREATE_STORAGE_DOMAIN with role type ADMIN
2015-09-18 
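Side note for anyone triaging logs like the above: the ERROR entries are easy to pull out of engine.log with a few lines of Python. This is a generic sketch (the regex is inferred from the log lines quoted here, not from any oVirt tooling):

```python
import re

# Matches engine.log lines of the form seen above, e.g.:
# 2015-09-18 12:50:08,724 ERROR [org.ovirt...AttachStorageDomainVDSCommand] ...
LOG_RE = re.compile(
    r'^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) '
    r'(?P<level>ERROR|WARN|INFO)\s+'
    r'\[(?P<logger>[^\]]+)\]'
)

def errors(lines):
    """Yield (timestamp, logger) for every ERROR line in an engine.log stream."""
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group('level') == 'ERROR':
            yield m.group('ts'), m.group('logger')
```

Feeding it an open file object, e.g. `errors(open('/var/log/ovirt-engine/engine.log'))`, yields the timestamp and logger class of each ERROR entry, which makes it quick to see which command actually failed first.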

[ovirt-users] Getting Error while Attaching ISO path

2015-09-18 Thread Budur Nagaraju
Hi,

I am unable to attach the ISO domain to the newly created data center; the logs are below:

[root@cstlb2 ~]# tail -f /var/log/ovirt-engine/engine.log
2015-09-18 12:50:08,493 ERROR
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
(org.ovirt.thread.pool-8-thread-8) [303cb8a5] Transaction rolled-back for
command: org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand.
2015-09-18 12:50:08,494 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] START,
AttachStorageDomainVDSCommand( storagePoolId =
92328f51-9152-4730-a558-8c1fd0b4e076, ignoreFailoverLimit = false,
storageDomainId = 263f7911-c5a2-495a-92c7-ce765b65a5b3), log id: c271dba
2015-09-18 12:50:08,724 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Failed in
AttachStorageDomainVDS method
2015-09-18 12:50:08,728 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Command
AttachStorageDomainVDSCommand( storagePoolId =
92328f51-9152-4730-a558-8c1fd0b4e076, ignoreFailoverLimit = false,
storageDomainId = 263f7911-c5a2-495a-92c7-ce765b65a5b3) execution failed.
Exception: IrsOperationFailedNoFailoverException: IRSGenericException:
IRSErrorException: Failed to AttachStorageDomainVDS, error = Storage domain
does not exist: (u'263f7911-c5a2-495a-92c7-ce765b65a5b3',), code = 358
2015-09-18 12:50:08,729 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] FINISH,
AttachStorageDomainVDSCommand, log id: c271dba
2015-09-18 12:50:08,730 ERROR
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Command
org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand throw
Vdc Bll exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailoverException:
IRSGenericException: IRSErrorException: Failed to AttachStorageDomainVDS,
error = Storage domain does not exist:
(u'263f7911-c5a2-495a-92c7-ce765b65a5b3',), code = 358 (Failed with error
StorageDomainDoesNotExist and code 358)
2015-09-18 12:50:08,733 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Command
[id=fdb5bcec-5aec-4ca1-a4f1-2ca3895e30f4]: Compensating NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot:
storagePoolId = 92328f51-9152-4730-a558-8c1fd0b4e076, storageId =
263f7911-c5a2-495a-92c7-ce765b65a5b3.
2015-09-18 12:50:08,736 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Correlation ID: 59c91a6d,
Job ID: a31fb420-9106-49f6-bf9f-78a87cca4f0a, Call Stack: null, Custom
Event ID: -1, Message: Failed to attach Storage Domain ISO_DOMAIN to Data
Center Pulse. (User: admin@internal)
2015-09-18 12:50:08,739 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Lock freed to object
EngineLock [exclusiveLocks= key: 263f7911-c5a2-495a-92c7-ce765b65a5b3
value: STORAGE
, sharedLocks= ]
2015-09-18 12:50:53,060 INFO
[org.ovirt.engine.core.bll.storage.GetStorageDomainsWithAttachedStoragePoolGuidQuery]
(ajp--127.0.0.1-8702-1) vds id b8804829-6107-4486-8c98-5ee4c0f4e797 was
chosen to fetch the Storage domain info
2015-09-18 12:50:53,103 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(ajp--127.0.0.1-8702-2) [1f9cdc9a] Lock Acquired to object EngineLock
[exclusiveLocks= key: 263f7911-c5a2-495a-92c7-ce765b65a5b3 value: STORAGE
, sharedLocks= ]
2015-09-18 12:50:53,118 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-8-thread-46) [1f9cdc9a] Running command:
AttachStorageDomainToPoolCommand internal: false. Entities affected :  ID:
263f7911-c5a2-495a-92c7-ce765b65a5b3 Type: StorageAction group
MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2015-09-18 12:50:53,123 INFO
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
(org.ovirt.thread.pool-8-thread-6) [58cbe80] Running command:
ConnectStorageToVdsCommand internal: true. Entities affected :  ID:
aaa0----123456789aaa Type: SystemAction group
CREATE_STORAGE_DOMAIN with role type ADMIN
2015-09-18 12:50:53,124 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-8-thread-6) [58cbe80] START,
ConnectStorageServerVDSCommand(HostName = host1, HostId =
b8804829-6107-4486-8c98-5ee4c0f4e797, storagePoolId =
----, storageType = NFS, connectionList =
[{ id: 6fbce0a8-955f-4ad4-8822-1ea0c31990fb, connection:
cstlb2.bnglab.psecure.net:/var/lib/exports/iso, iqn: null, vfsType: null,
mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo