Re: [ovirt-users] Host Device Mapping

2017-01-22 Thread Martin Polednik

On 20/01/17 14:38 -0600, Bryan Sockel wrote:


Hi,

I am trying to map a host device to a VM. I am able to get the host device
mapped to the VM, but I am unable to start the VM. I receive the following
error:

Error while executing action:

VM:
Cannot run VM. There is no host that satisfies current scheduling
constraints. See below for details:
The host vm-host-1 did not satisfy internal filter HostDevice because it
does not support host device passthrough..
The host vm-host-2 did not satisfy internal filter HostDevice because it
does not support host device passthrough..


Is the host device you're trying to pass through on the PCI bus? Is
IOMMU enabled in your BIOS and kernel? The host requirements section in
[1] could help.

[1] 
http://www.ovirt.org/develop/release-management/features/engine/hostdev-passthrough/
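A rough way to check this from the host shell (a sketch assuming an Intel host
and stock CentOS 7 tooling; the vdsClient call and the hostdevPassthrough
capability name are assumptions):

# did the kernel detect an IOMMU? (AMD hosts report AMD-Vi instead of DMAR)
dmesg | grep -i -e DMAR -e IOMMU
# was the kernel booted with the IOMMU enabled?
grep -o 'intel_iommu=on\|amd_iommu=on' /proc/cmdline
# if not, add intel_iommu=on (or amd_iommu=on) to GRUB_CMDLINE_LINUX in
# /etc/default/grub, rebuild the grub config and reboot the host:
grub2-mkconfig -o /boot/grub2/grub.cfg
# after the reboot, the host should advertise the capability to the engine:
vdsClient -s 0 getVdsCaps | grep -i hostdevpassthrough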


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




[ovirt-users] Storage Domain sizing

2017-01-22 Thread Grundmann, Christian
Hi,

I am about to migrate to a new storage system.

What's the best practice for sizing?

One big storage domain or multiple smaller ones?

 

My current Setup:

11 Hosts

7 FC Storage Domains 1 TB each

 

Can anyone tell me the pros and cons of one vs. many?

 

 

Thx Christian



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Migrating the vms

2017-01-22 Thread Budur Nagaraju
Hi,

While migrating a VM to a different node I am getting an error; the logs are
below. Can you please help me resolve the issue?



http://pastebin.com/nvCHEtxE


Thanks,
Nagaraju
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-storage import fails on hyperconverged glusterFS

2017-01-22 Thread Sahina Bose
On Fri, Jan 20, 2017 at 3:01 PM, Liebe, André-Sebastian <
andre.li...@gematik.de> wrote:

> Hello List,
>
> I ran into trouble after moving our hosted engine from NFS to
> hyperconverged GlusterFS via the backup/restore[1] procedure. The engine logs
> that it can't import and activate the hosted-storage domain, although I can
> see the storage.
> Any hints on how to fix this?
>
> - I created the ha-replica-3 gluster volume prior to hosted-engine-setup
> using the host's short name.
> - Then I ran hosted-engine-setup to install a new hosted engine (by
> installing CentOS 7 and ovirt-engine manually).
> - Inside the new hosted engine I restored the last successful backup
> (which was in a running state).
> - Then I connected to the engine database and removed the old
> hosted engine by hand (as part of this patch would do:
> https://gerrit.ovirt.org/#/c/64966/) and all known hosts (after marking
> all VMs as down, for which I got ETL error messages later on).
>

Did you also clean up the old HE storage domain? The error further down
indicates that the engine still has a reference to the old HE storage domain.
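If it helps, a minimal sketch of how one might look for a leftover
hosted-storage reference directly in the engine database (run on the engine
VM; the default 'engine' database name and the storage_domain_static
table/columns are assumptions about the schema):

# list the storage domains the engine still knows about
su - postgres -c "psql engine -c \"SELECT id, storage_name FROM storage_domain_static;\""
# if an old hosted_storage entry from the previous (NFS) setup is still listed,
# it needs to be removed or detached before ImportHostedEngineStorageDomain
# can succeed (see the gerrit patch referenced above for what the cleanup touches)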

- then I finished up the engine installation by running the engine-setup
> inside the hosted_engine
> - and finally completed the hosted-engine-setup
>
>
> The new hosted engine came up successfully with all previously known storage,
> and after enabling GlusterFS for the cluster this HA host is part of, I could
> see it in the Volumes and Storage tabs. After adding the remaining two
> hosts, the volume was marked as active.
>
> But here's the error message I get repeatedly since then:
> > 2017-01-19 08:49:36,652 WARN  [org.ovirt.engine.core.bll.storage.domain.
> ImportHostedEngineStorageDomainCommand] (org.ovirt.thread.pool-6-thread-10)
> [3b955ecd] Validation of action 'ImportHostedEngineStorageDomain' failed
> for user SYSTEM. Reasons: VAR__ACTION__ADD,VAR__TYPE__
> STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_ALREADY_EXIST
>
>
> There are also some repeating messages about this ha-replica-3 volume,
> because I used the host's short name on volume creation, which I can't
> change AFAIK without a complete cluster shutdown.
> > 2017-01-19 08:48:03,134 INFO  
> > [org.ovirt.engine.core.bll.AddUnmanagedVmsCommand]
> (DefaultQuartzScheduler3) [7471d7de] Running command:
> AddUnmanagedVmsCommand internal: true.
> > 2017-01-19 08:48:03,134 INFO  [org.ovirt.engine.core.
> vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler3)
> [7471d7de] START, FullListVDSCommand(HostName = ,
> FullListVDSCommandParameters:{runAsync='true', 
> hostId='f62c7d04-9c95-453f-92d5-6dabf9da874a',
> vds='Host[,f62c7d04-9c95-453f-92d5-6dabf9da874a]',
> vmIds='[dfea96e8-e94a-407e-af46-3019fd3f2991]'}), log id: 2d0941f9
> > 2017-01-19 08:48:03,163 INFO  [org.ovirt.engine.core.
> vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler3)
> [7471d7de] FINISH, FullListVDSCommand, return: [{guestFQDN=,
> emulatedMachine=pc, pid=0, guestDiskMapping={}, 
> devices=[Ljava.lang.Object;@4181d938,
> cpuType=Haswell-noTSX, smp=2, vmType=kvm, memSize=8192,
> vmName=HostedEngine, username=, exitMessage=XML error: maximum vcpus count
> must be an integer, vmId=dfea96e8-e94a-407e-af46-3019fd3f2991,
> displayIp=0, displayPort=-1, guestIPs=, spiceSecureChannels=smain,
> sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
> exitCode=1, nicModel=rtl8139,pv, exitReason=1, status=Down, maxVCpus=None,
> clientIp=, statusTime=6675071780, display=vnc, displaySecurePort=-1}], log
> id: 2d0941f9
> > 2017-01-19 08:48:03,163 ERROR [org.ovirt.engine.core.
> vdsbroker.vdsbroker.VdsBrokerObjectsBuilder] (DefaultQuartzScheduler3)
> [7471d7de] null architecture type, replacing with x86_64, %s
> > 2017-01-19 08:48:17,779 INFO  
> > [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
> (DefaultQuartzScheduler3) [7471d7de] START, 
> GlusterServersListVDSCommand(HostName
> = lvh2, VdsIdVDSCommandParametersBase:{runAsync='true',
> hostId='23297fc2-db12-4778-a5ff-b74d6fc9554b'}), log id: 57d029dc
> > 2017-01-19 08:48:18,177 INFO  
> > [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
> (DefaultQuartzScheduler3) [7471d7de] FINISH, GlusterServersListVDSCommand,
> return: [172.31.1.22/24:CONNECTED, lvh3.lab.gematik.de:CONNECTED,
> lvh4.lab.gematik.de:CONNECTED], log id: 57d029dc
> > 2017-01-19 08:48:18,180 INFO  
> > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler3) [7471d7de] START, 
> GlusterVolumesListVDSCommand(HostName
> = lvh2, GlusterVolumesListVDSParameters:{runAsync='true',
> hostId='23297fc2-db12-4778-a5ff-b74d6fc9554b'}), log id: 5cd11a39
> > 2017-01-19 08:48:18,282 WARN  [org.ovirt.engine.core.vdsbroker.gluster.
> GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler3) [7471d7de]
> Could not associate brick 'lvh2:/data/gluster/0/brick' of volume
> '7dc6410d-8f2a-406c-812a-8235fa6f721c' with correct network as no gluster
> network found in cluster '57ff41c2-0297-039d-039c-00

Re: [ovirt-users] master storage domain stuck in locked state

2017-01-22 Thread Bill Bill
Hi,

Thanks for the reply. Unfortunately, I don’t have the logs anymore, as
we’ve already pulled those servers out of the rack to be used for other
services.


From: Yaniv Kaul
Sent: Sunday, January 22, 2017 4:45 PM
To: Bill Bill
Cc: Ovirt Users; Maor Lipchuk
Subject: Re: [ovirt-users] master storage domain stuck in locked state



On Jan 22, 2017 10:13 PM, "Bill Bill" 
mailto:jax2...@outlook.com>> wrote:
Hello,

It was 4.0.5. However, we’ve decided to pull the plug on oVirt for now as it’s
too risky in taking down possibly a large number of servers due to this issue.
I think oVirt should be a little less “picky”, if you will, on storage
connections. For example, this specific issue prevented anything storage
related from being done. Because the “master” was locked you cannot:

Add other storage
Activate hosts
Start VM’s
Reinitialize the datacenter
Remove storage

These points above are huge – while oVirt is indeed open source, upstream of RHEV,
and doesn’t cost anything, I feel that in scenarios like this it could be the
downfall of oVirt itself, being too risky.

The logging with oVirt seems to be crazy though – we’ve been testing it now for
about 2.5 years, maybe 3 years? Once oVirt gets into a state where it cannot
connect to something, it just goes haywire – many likely don’t see this;
however, every time these things happened it was when we were testing failover
scenarios to see how oVirt responds.

A few recommendations I would make are:

Thank you for your recommendations. I agree with some, wholly disagree with 
others.
I'd still appreciate if you could send us the requested logs.

TIA,
Y.


Drop the whole “master” storage thing – it complicates setting storage up. 
Either connect, or don’t connect. If there’s connectivity issues, oVirt gets 
hung up on switching to this “master” storage. If you have a single storage 
domain, you’ll likely have problems as we’ve experienced because once oVirt 
cannot find the “master” it begins to go berserk, then spirals out of control 
there. It might not on small setups with a few hypervisors, but on an install 
with a few hundred VM’s, large number of hypervisors etc, it seems to get ugly 
real quick.

Stop trying to reconnect things, I think that’s what I’m looking for. When 
something fails, oVirt just goes in a loop over and over which eventually 
causes dashboard issues, crazy amounts of logs etc. It would be better if oVirt 
would just stop, make a log entry and then quit, maybe after a few times.

In my case, I could mount the storage manually to ALL hosts, I could even force 
start the VM’s with virsh. The oVirt dashboard just kept saying it was locked, 
and wouldn’t let you do anything at all with the entire datacenter.

At this time, we’ve pushed these servers back into production using our current 
hypervisor software which is stable but does not have the benefits of oVirt. 
It’ll be revisited later on and is still in use for non-production things.


From: Maor Lipchuk
Sent: Sunday, January 22, 2017 7:33 AM
To: Bill Bill
Cc: users
Subject: Re: [ovirt-users] master storage domain stuck in locked state



On Sun, Jan 22, 2017 at 2:31 PM, Maor Lipchuk 
mailto:mlipc...@redhat.com>> wrote:
Hi Bill,

Can you please attach the engine and VDSM logs.
Does the storage domain still stuck?

Also which oVirt version are you using?


Regards,
Maor

On Sat, Jan 21, 2017 at 3:11 AM, Bill Bill 
mailto:jax2...@outlook.com>> wrote:

Also cannot reinitialize the datacenter because the storage domain is locked.

From: Bill Bill
Sent: Friday, January 20, 2017 8:08 PM
To: users
Subject: RE: master storage domain stuck in locked state

Spoke too soon. Some hosts came back up but the storage domain is still locked,
so no VMs can be started. What is the proper way to force this to be unlocked?
Each time we look to move into production after successful testing, something
like this always seems to pop up at the last minute, rendering oVirt questionable
in terms of reliability for some unknown issue.



From: Bill Bill
Sent: Friday, January 20, 2017 7:54 PM
To: users
Subject: RE: master storage domain stuck in locked state


So apparently something didn’t change the metadata to master before the connection
was lost. I changed the metadata role to master and it came back up. Seems
emailing in helped, because every time I can’t figure something out, I email in
and find it shortly after.


From: Bill Bill
Sent: Friday, January 20, 2017 7:43 PM
To: users
Subject: master storage domain stuck in locked state

No clue how to get this out. I can mount all storage manually on the 
hypervisors. It seems like after a reboot oVirt is now having some issue and 

Re: [ovirt-users] master storage domain stuck in locked state

2017-01-22 Thread Yaniv Kaul
On Jan 22, 2017 10:13 PM, "Bill Bill"  wrote:

Hello,



It was 4.0.5. However, we’ve decided to pull the plug on oVirt for now as
it’s too risky in taking down possibly a large number of servers due to
this issue. I think oVirt should be a little less “picky”, if you will, on
storage connections. For example, this specific issue prevented anything
storage related from being done. Because the “master” was locked you cannot:



Add other storage

Activate hosts

Start VM’s

Reinitialize the datacenter

Remove storage



These points above are huge – while oVirt is indeed open source, upstream of
RHEV, and doesn’t cost anything, I feel that in scenarios like this it could
be the downfall of oVirt itself, being too risky.



The logging with oVirt seems to be crazy though – we’ve been testing it now
for about 2.5 years, maybe 3 years? Once oVirt gets into a state where it
cannot connect to something, it just goes haywire – many likely don’t see
this; however, every time these things happened it was when we were testing
failover scenarios to see how oVirt responds.



A few recommendations I would make are:


Thank you for your recommendations. I agree with some, wholly disagree with
others.
I'd still appreciate if you could send us the requested logs.

TIA,
Y.



Drop the whole “master” storage thing – it complicates setting storage up.
Either connect, or don’t connect. If there’s connectivity issues, oVirt
gets hung up on switching to this “master” storage. If you have a single
storage domain, you’ll likely have problems as we’ve experienced because
once oVirt cannot find the “master” it begins to go berserk, then spirals
out of control there. It might not on small setups with a few hypervisors,
but on an install with a few hundred VM’s, large number of hypervisors etc,
it seems to get ugly real quick.



Stop trying to reconnect things, I think that’s what I’m looking for. When
something fails, oVirt just goes in a loop over and over which eventually
causes dashboard issues, crazy amounts of logs etc. It would be better if
oVirt would just stop, make a log entry and then quit, maybe after a few
times.



In my case, I could mount the storage manually to ALL hosts, I could even
force start the VM’s with virsh. The oVirt dashboard just kept saying it
was locked, and wouldn’t let you do anything at all with the entire
datacenter.



At this time, we’ve pushed these servers back into production using our
current hypervisor software which is stable but does not have the benefits
of oVirt. It’ll be revisited later on and is still in use for
non-production things.





*From: *Maor Lipchuk 
*Sent: *Sunday, January 22, 2017 7:33 AM
*To: *Bill Bill 
*Cc: *users 
*Subject: *Re: [ovirt-users] master storage domain stuck in locked state




On Sun, Jan 22, 2017 at 2:31 PM, Maor Lipchuk  wrote:

> Hi Bill,
>
> Can you please attach the engine and VDSM logs.
> Does the storage domain still stuck?
>

Also which oVirt version are you using?


>
> Regards,
> Maor
>
> On Sat, Jan 21, 2017 at 3:11 AM, Bill Bill  wrote:
>
>>
>>
>> Also cannot reinitialize the datacenter because the storage domain is
>> locked.
>>
>>
>>
>> *From: *Bill Bill 
>> *Sent: *Friday, January 20, 2017 8:08 PM
>> *To: *users 
>> *Subject: *RE: master storage domain stuck in locked state
>>
>>
>>
>> Spoke too soon. Some hosts came back up but the storage domain is still
>> locked, so no VMs can be started. What is the proper way to force this to
>> be unlocked? Each time we look to move into production after successful
>> testing, something like this always seems to pop up at the last minute,
>> rendering oVirt questionable in terms of reliability for some unknown issue.
>>
>>
>>
>>
>>
>>
>>
>> *From: *Bill Bill 
>> *Sent: *Friday, January 20, 2017 7:54 PM
>> *To: *users 
>> *Subject: *RE: master storage domain stuck in locked state
>>
>>
>>
>>
>>
>> So apparently something didn’t change the metadata to master before the
>> connection was lost. I changed the metadata role to master and it came
>> back up. Seems emailing in helped, because every time I can’t figure
>> something out, I email in and find it shortly after.
>>
>>
>>
>>
>>
>> *From: *Bill Bill 
>> *Sent: *Friday, January 20, 2017 7:43 PM
>> *To: *users 
>> *Subject: *master storage domain stuck in locked state
>>
>>
>>
>> No clue how to get this out. I can mount all storage manually on the
>> hypervisors. It seems like after a reboot oVirt is now having some issue
>> and the storage domain is stuck in locked state. Because of this, can’t
>> activate any other storage either, so the other domains are in maintenance
>> and the master sits in locked state, has been for hours.
>>
>>
>>
>> This sticks out on a hypervisor:
>>
>>
>>
>> StoragePoolWrongMaster: Wrong Master domain or its version:
>> u'SD=d8a0172e-837f-4552-92c7-566dc4e548e4, pool=3fd2ad92-e1eb-49c2-906d-0
>> 0ec233f610a'
>>
>>
>>
>> Not sure, nothing changed other than a reboot of the storage.
>>
>>
>>
>> Engine log shows:
>>
>>
>>
>> [org.ovirt.engine.core.vdsbr

Re: [ovirt-users] master storage domain stuck in locked state

2017-01-22 Thread Bill Bill
Hello,

It was 4.0.5. However, we’ve decided to pull the plug on oVirt for now as it’s
too risky in taking down possibly a large number of servers due to this issue.
I think oVirt should be a little less “picky”, if you will, on storage
connections. For example, this specific issue prevented anything storage
related from being done. Because the “master” was locked you cannot:

Add other storage
Activate hosts
Start VM’s
Reinitialize the datacenter
Remove storage

These points above are huge – while oVirt is indeed open source, upstream of RHEV,
and doesn’t cost anything, I feel that in scenarios like this it could be the
downfall of oVirt itself, being too risky.

The logging with oVirt seems to be crazy though – we’ve been testing it now for
about 2.5 years, maybe 3 years? Once oVirt gets into a state where it cannot
connect to something, it just goes haywire – many likely don’t see this;
however, every time these things happened it was when we were testing failover
scenarios to see how oVirt responds.

A few recommendations I would make are:

Drop the whole “master” storage thing – it complicates setting storage up. 
Either connect, or don’t connect. If there’s connectivity issues, oVirt gets 
hung up on switching to this “master” storage. If you have a single storage 
domain, you’ll likely have problems as we’ve experienced because once oVirt 
cannot find the “master” it begins to go berserk, then spirals out of control 
there. It might not on small setups with a few hypervisors, but on an install 
with a few hundred VM’s, large number of hypervisors etc, it seems to get ugly 
real quick.

Stop trying to reconnect things, I think that’s what I’m looking for. When 
something fails, oVirt just goes in a loop over and over which eventually 
causes dashboard issues, crazy amounts of logs etc. It would be better if oVirt 
would just stop, make a log entry and then quit, maybe after a few times.

In my case, I could mount the storage manually to ALL hosts, I could even force 
start the VM’s with virsh. The oVirt dashboard just kept saying it was locked, 
and wouldn’t let you do anything at all with the entire datacenter.

At this time, we’ve pushed these servers back into production using our current 
hypervisor software which is stable but does not have the benefits of oVirt. 
It’ll be revisited later on and is still in use for non-production things.


From: Maor Lipchuk
Sent: Sunday, January 22, 2017 7:33 AM
To: Bill Bill
Cc: users
Subject: Re: [ovirt-users] master storage domain stuck in locked state



On Sun, Jan 22, 2017 at 2:31 PM, Maor Lipchuk 
mailto:mlipc...@redhat.com>> wrote:
Hi Bill,

Can you please attach the engine and VDSM logs.
Does the storage domain still stuck?

Also which oVirt version are you using?


Regards,
Maor

On Sat, Jan 21, 2017 at 3:11 AM, Bill Bill 
mailto:jax2...@outlook.com>> wrote:

Also cannot reinitialize the datacenter because the storage domain is locked.

From: Bill Bill
Sent: Friday, January 20, 2017 8:08 PM
To: users
Subject: RE: master storage domain stuck in locked state

Spoke too soon. Some hosts came back up but the storage domain is still locked,
so no VMs can be started. What is the proper way to force this to be unlocked?
Each time we look to move into production after successful testing, something
like this always seems to pop up at the last minute, rendering oVirt questionable
in terms of reliability for some unknown issue.



From: Bill Bill
Sent: Friday, January 20, 2017 7:54 PM
To: users
Subject: RE: master storage domain stuck in locked state


So apparently something didn’t change the metadata to master before the connection
was lost. I changed the metadata role to master and it came back up. Seems
emailing in helped, because every time I can’t figure something out, I email in
and find it shortly after.
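For anyone hitting the same thing, a rough sketch of where that role metadata
lives (the paths below are the usual layout for a file-based NFS/Gluster domain
and are assumptions, not verified against this setup; block FC/iSCSI domains
keep the same keys in a small "metadata" LV inside the domain's VG):

# <sd_uuid> is the storage domain UUID, run on a host where the domain is mounted
grep -E 'ROLE|MASTER_VERSION|POOL_UUID' \
    /rhev/data-center/mnt/<mountpoint>/<sd_uuid>/dom_md/metadata
# ROLE=Master marks the master domain; if the recorded master or its version
# does not match what the pool expects, vdsm raises StoragePoolWrongMaster
# like the one quoted below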


From: Bill Bill
Sent: Friday, January 20, 2017 7:43 PM
To: users
Subject: master storage domain stuck in locked state

No clue how to get this out. I can mount all storage manually on the 
hypervisors. It seems like after a reboot oVirt is now having some issue and 
the storage domain is stuck in locked state. Because of this, can’t activate 
any other storage either, so the other domains are in maintenance and the 
master sits in locked state, has been for hours.

This sticks out on a hypervisor:

StoragePoolWrongMaster: Wrong Master domain or its version: 
u'SD=d8a0172e-837f-4552-92c7-566dc4e548e4, 
pool=3fd2ad92-e1eb-49c2-906d-00ec233f610a'

Not sure, nothing changed other than a reboot of the storage.

Engine log shows:

[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] 
(DefaultQuartzScheduler8) [5696732b] START, SetVdsStatusVDSCommand(HostName = 
U31U32NodeA, SetVdsStatusVDSCommandParameters:{runAsync='true', 
hostId='

Re: [ovirt-users] Fence Agent cisco_ucs not working anymore since 4.0.5

2017-01-22 Thread Florian Schmid
Hello,

thank you very much for these bug reports! Good to hear that there will be a
fix in 4.0.7.

Do you maybe know a workaround until this version comes out?

I already tried to add it manually in the database, but I cannot really test it.
I can't test the 4.1 RC because I have no unused blades around, sorry!
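For what it's worth, the agent itself can be exercised directly from a host's
shell to confirm the parameters still work outside the UI (a sketch with
placeholder credentials; check the exact option spellings with
fence_cisco_ucs --help for the installed fence-agents version):

# run on any host that has fence-agents-cisco-ucs installed
fence_cisco_ucs --ip=<ucs-manager-ip> --username=<user> --password=<pass> \
    --plug=ovirt-2 --suborg=org-oVirt --ssl-insecure --action=status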

BR Florian 





 
UBIMET GmbH - weather matters 
Ing. Florian Schmid • IT Infrastruktur Austria 


A-1220 Wien • Donau-City-Straße 11 • Tel +43 1 263 11 22 DW 469 • Fax +43 1 263 
11 22 219 
fsch...@ubimet.com • www.ubimet.com • Mobile: +43 664 8323379 


Sitz: Wien • Firmenbuchgericht: Handelsgericht Wien • FN 248415 t 


 








Von: "Yaniv Kaul"  
An: "Florian Schmid"  
CC: "Ovirt Users"  
Gesendet: Sonntag, 22. Januar 2017 17:06:52 
Betreff: Re: [ovirt-users] Fence Agent cisco_ucs not working anymore since 
4.0.5 

Sounds a bit like https://bugzilla.redhat.com/show_bug.cgi?id=1411231 - which 
is a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1398898 and will 
be fixed soon in 4.0.7 (or you can grab 4.1 RC and test it). 
Can you verify it? 
TIA, 
Y. 

On Sat, Jan 21, 2017 at 9:12 PM, Florian Schmid < fsch...@ubimet.com > wrote: 


Hello, 

we use some Cisco UCS blades in our oVirt infrastructure, and since we upgraded
from 4.0.4 to 4.0.5, when I want to edit a host I need to change the power
management cisco_ucs agent, because it has an unsupported option in the field
"Options"!
My problem is that it looks like the field "Options" now only allows integer
numbers as values!
BUT for the UCS fence agent I need to specify the "suborg=" option, and this is
not a number. It is the name of the sub-organization, like oVirt for example!

So since this version of oVirt our complete power management stopped working,
and therefore we don't have important features like HA anymore.

With ovirt 4.0.4 or earlier, my options looked like this: 
Slot ovirt-2 
Options suborg=org-oVirt,ssl_insecure=1 

"Slot" is the name of the service profile and "Options" have the suborg 
specified and insecure ssl! 
ssl_insecure=1 is working in version 4.0.5 and later. 
We have already some hosts running 4.0.6, but not the engine yet! 
fence-agents-cisco-ucs versions we use: 
- 4.0.11-27.el7_2.9.x86_64 
- 4.0.11-47.el7_3.2.x86_64 

Can someone please help me? 

Thank you very much! 
Best regards 
Florian 
___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Monitoring disk I/O

2017-01-22 Thread Yaniv Kaul
Very nice.

We are working on integrating those specific metrics, by enhancing the virt
plugin of collectd[1].
See [2] for the work.

In 4.1, collectd will be already installed on the host, and will be able to
provide some statistics already.
We plan to further integrate it with external tools to collect, aggregate
and visualize metrics.
Y.

[1] https://collectd.org/
[2] https://github.com/collectd/collectd/pull/2103
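For the curious, a minimal sketch of what enabling the virt plugin looks like
on a plain collectd install (the file location and option names are assumptions
from stock collectd, not from the oVirt 4.1 integration itself):

# write a minimal virt plugin configuration and restart collectd
cat > /etc/collectd.d/virt.conf <<'EOF'
LoadPlugin virt
<Plugin virt>
  Connection "qemu:///system"
  RefreshInterval 60
  HostnameFormat name
</Plugin>
EOF
systemctl restart collectd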

On Fri, Jan 20, 2017 at 9:20 AM, Ernest Beinrohr  wrote:

> On 19.01.2017 21:42, Michael Watters wrote:
>
> Does ovirt have any way to monitor disk I/O for each VM or disk in a
> storage pool?  I am receiving disk latency warnings and would like to
> know which VMs are causing the most disk I/O.
>
>
> We have homebrew I/O monitoring for VMs; libvirt uses cgroups, which record CPU
> and I/O stats for each VM. It's a little tricky to follow the VM while it
> migrates, but once done, we have CPU and I/O graphs for each VM.
>
> Basically, for each hypervisor we periodically poll the cgroup info for all its VMs:
>
> # $vms holds the VM names on this hypervisor, $HOST its hostname and
> # $IGNORED_REGEX filters out devices we don't want to count
> for vm in $vms
> do
> (
> echo -n "$HOST:$vm:"
> vm=${vm/-/x2d}   # libvirt escapes '-' as 'x2d' in the machine-qemu scope name
> # read IOs : write IOs : read bytes : written bytes, summed over all devices
> egrep -v "$IGNORED_REGEX" /sys/fs/cgroup/blkio/machine.slice/machine-qemu*$vm*/blkio.throttle.io_serviced \
>   | grep ^253:.*Read | cut -f3 -d " " | paste -sd+ | bc
> echo -n ":"
> egrep -v "$IGNORED_REGEX" /sys/fs/cgroup/blkio/machine.slice/machine-qemu*$vm*/blkio.throttle.io_serviced \
>   | grep ^253:.*Write | cut -f3 -d " " | paste -sd+ | bc
> echo -n ":"
> egrep -v "$IGNORED_REGEX" /sys/fs/cgroup/blkio/machine.slice/machine-qemu*$vm*/blkio.throttle.io_service_bytes \
>   | grep ^253:.*Read | cut -f3 -d " " | paste -sd+ | bc
> echo -n ":"
> egrep -v "$IGNORED_REGEX" /sys/fs/cgroup/blkio/machine.slice/machine-qemu*$vm*/blkio.throttle.io_service_bytes \
>   | grep ^253:.*Write | cut -f3 -d " " | paste -sd+ | bc
> echo -n ":"
> # CPU time consumed by the VM, in nanoseconds
> cat /sys/fs/cgroup/cpuacct/machine.slice/*$vm*/cpuacct.usage
> ) | tr -d '\n'
> echo ""
> done
>
> and then we MRTG it.
> --
> Ernest Beinrohr, AXON PRO
> Ing, RHCE, RHCVA, LPIC, VCA
> +421-2-62410360, +421-903-482603
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fence Agent cisco_ucs not working anymore since 4.0.5

2017-01-22 Thread Yaniv Kaul
Sounds a bit like https://bugzilla.redhat.com/show_bug.cgi?id=1411231 -
which is a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1398898
and will be fixed soon in 4.0.7 (or you can grab 4.1 RC and test it).

Can you verify it?
TIA,
Y.

On Sat, Jan 21, 2017 at 9:12 PM, Florian Schmid  wrote:

> Hello,
>
> we use some Cisco UCS blades in our oVirt infrastructure, and since we
> upgraded from 4.0.4 to 4.0.5, when I want to edit a host I need to change
> the power management cisco_ucs agent, because it has an unsupported option
> in the field "Options"!
> My problem is that it looks like the field "Options" now only allows
> integer numbers as values!
> BUT for the UCS fence agent I need to specify the "suborg=" option, and
> this is not a number. It is the name of the sub-organization, like oVirt for
> example!
>
> So since this version of oVirt our complete power management stopped
> working, and therefore we don't have important features like HA anymore.
>
> With ovirt 4.0.4 or earlier, my options looked like this:
> Slot     ovirt-2
> Options suborg=org-oVirt,ssl_insecure=1
>
> "Slot" is the name of the service profile and "Options" have the suborg
> specified and insecure ssl!
> ssl_insecure=1 is working in version 4.0.5 and later.
> We have already some hosts running 4.0.6, but not the engine yet!
> fence-agents-cisco-ucs versions we use:
> - 4.0.11-27.el7_2.9.x86_64
> - 4.0.11-47.el7_3.2.x86_64
>
> Can someone please help me?
>
> Thank you very much!
> Best regards
> Florian
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] domain types for export and iso: no fcp and no iscsi

2017-01-22 Thread Maor Lipchuk
On Sun, Jan 22, 2017 at 3:26 PM, Nir Soffer  wrote:

> On Sun, Jan 22, 2017 at 3:08 PM, Gianluca Cecchi
>  wrote:
> > On 22/Jan/2017 13:23, "Maor Lipchuk"  wrote:
> >
> >
> > That is indeed the behavior.
> > ISO and export are only supported through mount options.
> >
> >
> > Ok
> >
> >
> > I think that the goal is to replace those types of storage domains.
> >
> >
> > In which sense/way?
> >
> > For example the backup that the export storage domain is used for can be
> > used with a regular data storage domain since oVirt 3.6.
> >
> >
> > I have not understood ... can you detail this?

In some future version, we can replace iso and export domain with a regular
> data domain.
>
> For example, iso file will be uploaded to regular volume (on file or
> block). When
> you want to attach an iso to a vm, we can activate the volume and have the
> vm
> use the path to the volume.
>
> For the export domain, we could copy the disks to regular volumes. The VM
> metadata will be kept in the OVF_STORE of the domain.
>
> Or, instead of export, you can download the VM to an OVA file and,
> instead of import,
> upload the VM from an OVA file.
>


Here is an example how the user can migrate VMs/Templates between different
Data Centers with the "import storage domain" instead of using export
domain.

https://www.youtube.com/watch?v=DLcxDB0MY38&list=PL2NsEhIoqsJFsDWYKZ0jzJba11L_-xSds&index=4

So this is only one of the features the export storage domain is used for today
that, as of oVirt 3.5, can be done with any storage domain.
In the future we plan to enhance this by letting you pre-define a certain storage
domain as a backup storage domain, which will have features similar to the export
storage domain and more.


> > I have available a test environment with several San LUNs and no easy
> way to
> > connect to nfs or similar external resources... how could I provide
> export
> > and iso in this case? Creating a specialized vm and exporting from there,
> > even if it would be a circular dependency then..? Suggestions?
>
> Maybe upload image via the ui/REST?
>
> Nir
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] domain types for export and iso: no fcp and no iscsi

2017-01-22 Thread Nir Soffer
On Sun, Jan 22, 2017 at 3:08 PM, Gianluca Cecchi
 wrote:
> On 22/Jan/2017 13:23, "Maor Lipchuk"  wrote:
>
>
> That is indeed the behavior.
> ISO and export are only supported through mount options.
>
>
> Ok
>
>
> I think that the goal is to replace those types of storage domains.
>
>
> In which sense/way?
>
> For example the backup that the export storage domain is used for can be
> used with a regular data storage domain since oVirt 3.6.
>
>
> I have not understood ... can you detail this?

In some future version, we can replace the ISO and export domains with a regular
data domain.

For example, an ISO file will be uploaded to a regular volume (on file or block).
When you want to attach an ISO to a VM, we can activate the volume and have the VM
use the path to the volume.

For the export domain, we could copy the disks to regular volumes. The VM
metadata will be kept in the OVF_STORE of the domain.

Or, instead of export, you can download the VM to an OVA file and, instead of
import, upload the VM from an OVA file.

> I have available a test environment with several San LUNs and no easy way to
> connect to nfs or similar external resources... how could I provide export
> and iso in this case? Creating a specialized vm and exporting from there,
> even if it would be a circular dependency then..? Suggestions?

Maybe upload image via the ui/REST?
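Or, if a small NFS export is acceptable, a rough sketch of serving one from any
spare box or VM so it can back ISO/export domains (the package names, paths and
the 36:36 vdsm:kvm ownership below are the usual assumptions for CentOS 7):

# on the box/VM that will serve the ISO and export storage
yum install -y nfs-utils
mkdir -p /exports/iso /exports/export
chown -R 36:36 /exports                     # 36:36 = vdsm:kvm
cat >> /etc/exports <<'EOF'
/exports/iso    *(rw,anonuid=36,anongid=36,all_squash)
/exports/export *(rw,anonuid=36,anongid=36,all_squash)
EOF
systemctl enable nfs-server && systemctl start nfs-server
exportfs -rv
# then add the two paths as ISO / export storage domains from the engine UI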

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] domain types for export and iso: no fcp and no iscsi

2017-01-22 Thread Gianluca Cecchi
On 22/Jan/2017 13:23, "Maor Lipchuk"  wrote:


That is indeed the behavior.
ISO and export are only supported through mount options.


Ok


I think that the goal is to replace those types of storage domains.


In which sense/way?

For example the backup that the export storage domain is used for can be
used with a regular data storage domain since oVirt 3.6.


I have not understood ... can you detail this?

I have available a test environment with several SAN LUNs and no easy way
to connect to NFS or similar external resources... how could I provide
export and ISO domains in this case? Creating a specialized VM and exporting
from there, even if it would be a circular dependency then..? Suggestions?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] IvyBridge CPU type is westmere. oVirt3.6

2017-01-22 Thread Sandro Bonazzola
On 22/Jan/2017 08:26, "久保田"  wrote:

Hello,

I am a user of oVirt Engine 3.6.2.6-1.el7.centos.
oVirt 3.6 still has a bug that classifies an IvyBridge CPU (i7-3770K) as
Westmere when it is added to a cluster.

Is this bug fixed in oVirt 4.x?


Letting the virtualization team answer this.



and is there a prospect of correcting this bug in oVirt 3.6?


No, 3.6 reached end of life when 4.0 was released. Please note that
4.1 will arrive soon, so the upcoming 4.0.7 will be the last 4.0 release. I
would suggest upgrading.
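As a quick sanity check before and after upgrading, the CPU model and flags the
host itself reports can be compared with what the engine picked up (a sketch;
vdsClient and the synthetic model_* flag names are assumptions about the VDSM
output):

# what the hardware actually is
grep -m1 'model name' /proc/cpuinfo
# the CPU model libvirt detects on the host
virsh -r capabilities | grep -m1 '<model>'
# the flags VDSM reports to the engine; the engine derives the cluster CPU
# level from the synthetic model_* entries in cpuFlags
vdsClient -s 0 getVdsCaps | grep -i -e cpuModel -e cpuFlags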




When an IvyBridge host is added to a SandyBridge cluster, a
CPU_TYPE_INCOMPATIBLE_WITH_CLUSTER error appears.

engine.log
2017-01-22 15:25:00,004 INFO  [org.ovirt.engine.core.bll.AutoRecoveryManager]
(DefaultQuartzScheduler_Worker-11) [] Autorecovering 1 hosts
2017-01-22 15:25:00,004 INFO  [org.ovirt.engine.core.bll.AutoRecoveryManager]
(DefaultQuartzScheduler_Worker-11) [] Autorecovering hosts id:
726249f0-fa7d-496b-b86b-432e8909a2f3 , name :XXX
2017-01-22 15:25:00,005 INFO  [org.ovirt.engine.core.bll.ActivateVdsCommand]
(DefaultQuartzScheduler_Worker-11) [33e5f398] Lock Acquired to object
'EngineLock:{exclusiveLocks='[726249f0-fa7d-496b-b86b-432e8909a2f3=]', sharedLocks='null'}'
2017-01-22 15:25:00,006 INFO  [org.ovirt.engine.core.bll.ActivateVdsCommand]
(DefaultQuartzScheduler_Worker-11) [33e5f398] Running command:
ActivateVdsCommand internal: true. Entities affected :  ID:
726249f0-fa7d-496b-b86b-432e8909a2f3
Type: VDSAction group MANIPULATE_HOST with role type ADMIN
2017-01-22 15:25:00,006 INFO  [org.ovirt.engine.core.bll.ActivateVdsCommand]
(DefaultQuartzScheduler_Worker-11) [33e5f398] Before acquiring lock in
order to prevent monitoring for host 'XXX'
from data-center 'Default'
2017-01-22 15:25:00,006 INFO  [org.ovirt.engine.core.bll.ActivateVdsCommand]
(DefaultQuartzScheduler_Worker-11) [33e5f398] Lock acquired, from now a
monitoring of host will be skipped for host 'XXX'
from data-center 'Default'
2017-01-22 15:25:00,014 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(DefaultQuartzScheduler_Worker-11) [33e5f398] START,
SetVdsStatusVDSCommand(HostName
= green.team-boss.com, SetVdsStatusVDSCommandParameters:{runAsync
='true', hostId='726249f0-fa7d-496b-b86b-432e8909a2f3',
status='Unassigned', nonOperationalReason='NONE',
stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 2284a60f
2017-01-22 15:25:00,016 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(DefaultQuartzScheduler_Worker-11) [33e5f398] FINISH,
SetVdsStatusVDSCommand, log id: 2284a60f
2017-01-22 15:25:00,024 INFO  [org.ovirt.engine.core.bll.ActivateVdsCommand]
(DefaultQuartzScheduler_Worker-11) [] Activate finished. Lock released.
Monitoring can run now for host 'XXX' from
data-center 'Default'
2017-01-22 15:25:00,024 INFO  [org.ovirt.engine.core.bll.ActivateVdsCommand]
(DefaultQuartzScheduler_Worker-11) [] Lock freed to object
'EngineLock:{exclusiveLocks='[726249f0-fa7d-496b-b86b-432e8909a2f3=]', sharedLocks='null'}'
2017-01-22 15:25:02,888 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler_Worker-60) [] START, GetHardwareInfoVDSCommand(HostName
= XXX,
VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',hostId=
'726249f0-fa7d-496b-b86b-432e8909a2f3',vds='Host[XXX,
726249f0-fa7d-496b-b86b-432e8909a2f3]'}),log id: 1927e79c
2017-01-22 15:25:02,892 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler_Worker-60) [] FINISH, GetHardwareInfoVDSCommand,
log id: 1927e79c
2017-01-22 15:25:02,894 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager]
(DefaultQuartzScheduler_Worker-60) [] Host 'XXX'is running with SELinux in
'DISABLED' mode
2017-01-22 15:25:02,911 INFO  [org.ovirt.engine.core.bll.
HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler_Worker-60)
[1afd70be] Running command: HandleVdsCpuFlagsOrClusterChangedCommand
internal: true. Entities affected :  ID: 726249f0-fa7d-496b-b86b-432e8909a2f3
Type: VDS
2017-01-22 15:25:02,925 INFO
[org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
(DefaultQuartzScheduler_Worker-60) [65c681a5] Running command:
SetNonOperationalVdsCommand internal: true. Entities affected :  ID:
726249f0-fa7d-496b-b86b-432e8909a2f3 Type: VDS
2017-01-22 15:25:02,926 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(DefaultQuartzScheduler_Worker-60) [65c681a5] START,
SetVdsStatusVDSCommand(HostName
=
XXX, SetVdsStatusVDSCommandParameters:{runAsync='true',hostId='
726249f0-fa7d-496b-b86b-432e8909a2f3', status='NonOperational',
nonOperationalReason='CPU_TYPE_INCOMPATIBLE_WITH_
CLUSTER',stopSpmFailureLogged='false', maintenanceReason='null'}), log id:
4c07f536
2017-01-22 15:25:02,930 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(DefaultQuartzScheduler_Worker-60) [65c681a5] FINISH,
SetVdsStatusVDSCommand, log id: 4c07f536
2017-01-22 15:25:02,936 INFO  [org.ovirt.engine.core.dal.
dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_W

Re: [ovirt-users] Failed to attach a disk

2017-01-22 Thread Maor Lipchuk
Hi Fabrice,

Can you please attach the VDSM and engine logs

Thanks,
Maor

On Wed, Jan 18, 2017 at 5:05 PM, Fabrice Bacchella <
fabrice.bacche...@icloud.com> wrote:

> I upgraded an host to the latest version of vdsm:
> vdsm-4.18.21-1.el7.centos.x86_64, on a CentOS Linux release 7.3.1611
> (Core)
>
> I then created a disk that I wanted to attach to a running VM, but it
> fails, with the following message in /var/log/libvirt/qemu/.log:
>
> Could not open '/rhev/data-center/17434f4e-8d1a-4a88-ae39-d2ddd46b3b9b/
> 7c5291d3-11e2-420f-99ad-47a376013671/images/4d33f997-
> 94b0-42c1-8052-5364993b85e9/8e613dd3-eebc-476a-a830-e2a8236ea8a8':
> Permission denied
>
> I tried to have a look at the disks images, and got a strange result:
>
> -rw-rw 1 vdsm qemu 1.0M May 18  2016 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/ed18c515-09c9-4a71-af0a-7f0934193a65/
> b5e53c81-2279-4f2b-b282-69db430d36d4.lease
> -rw-rw 1 vdsm qemu 1.0M May 18  2016 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/3a00232b-c1f9-4b9b-910e-caf8b0321609/
> 4f6d5c63-6a36-4356-832e-f52427d9512e.lease
> -rw-rw 1 vdsm qemu 1.0M May 23  2016 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/b0f4c517-e492-409f-934f-1561281a242b/
> a3d60d8a-f89b-41dd-b519-fb652301b1f5.lease
> -rw-rw 1 vdsm qemu 1.0M May 23  2016 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/465df4e9-3c62-4501-889f-cbab65ed0e0d/
> 7a9b9033-f5f8-4eaa-ac94-6cc0c4ff6120.lease
> -rw-rw 1 vdsm kvm  1.0M Jan  6 18:00 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/baf01c4e-ede9-4e4e-a265-172695d81a83/
> 4cdd72a7-b347-4479-accd-ab08d61552f9.lease
> -rw-rw 1 vdsm kvm  1.0M Jan 18 15:38 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/4d33f997-94b0-42c1-8052-5364993b85e9/
> 8e613dd3-eebc-476a-a830-e2a8236ea8a8.lease
>
> -rw-r--r-- 1 vdsm qemu 314 May 23  2016 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/b0f4c517-e492-409f-934f-1561281a242b/
> a3d60d8a-f89b-41dd-b519-fb652301b1f5.meta
> -rw-r--r-- 1 vdsm kvm  314 Jan  6 18:00 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/465df4e9-3c62-4501-889f-cbab65ed0e0d/
> 7a9b9033-f5f8-4eaa-ac94-6cc0c4ff6120.meta
> -rw-r--r-- 1 vdsm kvm  307 Jan  6 18:00 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/baf01c4e-ede9-4e4e-a265-172695d81a83/
> 4cdd72a7-b347-4479-accd-ab08d61552f9.meta
> -rw-r--r-- 1 vdsm kvm  437 Jan 18 11:32 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/ed18c515-09c9-4a71-af0a-7f0934193a65/
> b5e53c81-2279-4f2b-b282-69db430d36d4.meta
> -rw-r--r-- 1 vdsm kvm  437 Jan 18 11:32 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/3a00232b-c1f9-4b9b-910e-caf8b0321609/
> 4f6d5c63-6a36-4356-832e-f52427d9512e.meta
> -rw-r--r-- 1 vdsm kvm  310 Jan 18 15:38 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/4d33f997-94b0-42c1-8052-5364993b85e9/
> 8e613dd3-eebc-476a-a830-e2a8236ea8a8.meta
>
> -rw-rw 1 vdsm qemu  16G Jan  9 17:25 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/baf01c4e-ede9-4e4e-a265-172695d81a83/
> 4cdd72a7-b347-4479-accd-ab08d61552f9
> -rw-rw 1 vdsm qemu  30K Jan 18 11:32 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/ed18c515-09c9-4a71-af0a-7f0934193a65/
> b5e53c81-2279-4f2b-b282-69db430d36d4
> -rw-rw 1 vdsm qemu  30K Jan 18 11:32 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/3a00232b-c1f9-4b9b-910e-caf8b0321609/
> 4f6d5c63-6a36-4356-832e-f52427d9512e
> -rw-rw 1 vdsm kvm  300G Jan 18 15:38 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/4d33f997-94b0-42c1-8052-5364993b85e9/
> 8e613dd3-eebc-476a-a830-e2a8236ea8a8
> -rw-rw 1 vdsm qemu  32G Jan 18 15:58 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/b0f4c517-e492-409f-934f-1561281a242b/
> a3d60d8a-f89b-41dd-b519-fb652301b1f5
> -rw-rw 1 vdsm qemu  32G Jan 18 15:58 /rhev/data-center/17434f4e-
> 8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-
> 47a376013671/images/465df4e9-3c62-4501-889f-cbab65ed0e0d/
> 7a9b9033-f5f8-4eaa-ac94-6cc0c4ff6120
>
>
> What a strange mix of group owners. Any explanation for that? Is that a
> known bug?
>
> The new disk is the 300G one, owned by group kvm.
>
>
> _
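If the mixed group ownership turns out to be the culprit, a rough sketch of
what to compare on that host (the qemu.conf keys and the chgrp workaround are
assumptions; changing ownership by hand is at your own risk, since VDSM may
reset it):

# which user/group the qemu processes actually run as on this host
ps -o user,group,comm -C qemu-kvm | head
grep -E '^(user|group|dynamic_ownership)' /etc/libvirt/qemu.conf
# compare with the failing 300G image (UUIDs taken from the listing above)
ls -l /rhev/data-center/17434f4e-8d1a-4a88-ae39-d2ddd46b3b9b/7c5291d3-11e2-420f-99ad-47a376013671/images/4d33f997-94b0-42c1-8052-5364993b85e9/
# if the group is the only difference from the disks that do work (vdsm:qemu
# here), matching it is a possible stop-gap:
# chgrp qemu <that image directory>/*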

Re: [ovirt-users] fast import to ovirt

2017-01-22 Thread Maor Lipchuk
Hi paf1,

Have you tried the import storage domain feature?
You can take a look how it is being done at
http://www.ovirt.org/develop/release-management/features/storage/importstoragedomain/
Work flow for Import File Storage Domain - UI flow
https://www.youtube.com/watch?v=YbU-DIwN-Wc

On Thu, Jan 19, 2017 at 11:44 AM, p...@email.cz  wrote:

> Hello,
> how can I import VMs from a different oVirt environment? There is no common
> oVirt management (oVirt 3.5 -> 4.0).
> GlusterFS is used.
> Will oVirt accept "rsync" file migrations, meaning will it update the oVirt DB
> automatically?
> I'd prefer a quicker method than export - umount oV1 - mount oV2 - import.
>
> regards
> paf1
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] master storage domain stuck in locked state

2017-01-22 Thread Maor Lipchuk
On Sun, Jan 22, 2017 at 2:31 PM, Maor Lipchuk  wrote:

> Hi Bill,
>
> Can you please attach the engine and VDSM logs.
> Does the storage domain still stuck?
>

Also which oVirt version are you using?


>
> Regards,
> Maor
>
> On Sat, Jan 21, 2017 at 3:11 AM, Bill Bill  wrote:
>
>>
>>
>> Also cannot reinitialize the datacenter because the storage domain is
>> locked.
>>
>>
>>
>> *From: *Bill Bill 
>> *Sent: *Friday, January 20, 2017 8:08 PM
>> *To: *users 
>> *Subject: *RE: master storage domain stuck in locked state
>>
>>
>>
>> Spoke too soon. Some hosts came back up but the storage domain is still
>> locked, so no VMs can be started. What is the proper way to force this to
>> be unlocked? Each time we look to move into production after successful
>> testing, something like this always seems to pop up at the last minute,
>> rendering oVirt questionable in terms of reliability for some unknown issue.
>>
>>
>>
>>
>>
>>
>>
>> *From: *Bill Bill 
>> *Sent: *Friday, January 20, 2017 7:54 PM
>> *To: *users 
>> *Subject: *RE: master storage domain stuck in locked state
>>
>>
>>
>>
>>
>> So apparently something didn’t change the metadata to master before the
>> connection was lost. I changed the metadata role to master and it came
>> back up. Seems emailing in helped, because every time I can’t figure
>> something out, I email in and find it shortly after.
>>
>>
>>
>>
>>
>> *From: *Bill Bill 
>> *Sent: *Friday, January 20, 2017 7:43 PM
>> *To: *users 
>> *Subject: *master storage domain stuck in locked state
>>
>>
>>
>> No clue how to get this out. I can mount all storage manually on the
>> hypervisors. It seems like after a reboot oVirt is now having some issue
>> and the storage domain is stuck in locked state. Because of this, can’t
>> activate any other storage either, so the other domains are in maintenance
>> and the master sits in locked state, has been for hours.
>>
>>
>>
>> This sticks out on a hypervisor:
>>
>>
>>
>> StoragePoolWrongMaster: Wrong Master domain or its version:
>> u'SD=d8a0172e-837f-4552-92c7-566dc4e548e4, pool=3fd2ad92-e1eb-49c2-906d-0
>> 0ec233f610a'
>>
>>
>>
>> Not sure, nothing changed other than a reboot of the storage.
>>
>>
>>
>> Engine log shows:
>>
>>
>>
>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
>> (DefaultQuartzScheduler8) [5696732b] START, SetVdsStatusVDSCommand(HostName
>> = U31U32NodeA, SetVdsStatusVDSCommandParameters:{runAsync='true',
>> hostId='70e2b8e4-0752-47a8-884c-837a00013e79', status='NonOperational',
>> nonOperationalReason='STORAGE_DOMAIN_UNREACHABLE',
>> stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 6db9820a
>>
>>
>>
>> No idea why it says unreachable, it certainly is because I can manually
>> mount ALL storage to the hypervisor.
>>
>>
>>
>> Sent from Mail  for
>> Windows 10
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] master storage domain stuck in locked state

2017-01-22 Thread Maor Lipchuk
Hi Bill,

Can you please attach the engine and VDSM logs.
Does the storage domain still stuck?

Regards,
Maor

On Sat, Jan 21, 2017 at 3:11 AM, Bill Bill  wrote:

>
>
> Also cannot reinitialize the datacenter because the storage domain is
> locked.
>
>
>
> *From: *Bill Bill 
> *Sent: *Friday, January 20, 2017 8:08 PM
> *To: *users 
> *Subject: *RE: master storage domain stuck in locked state
>
>
>
> Spoke too soon. Some hosts came back up but the storage domain is still
> locked, so no VMs can be started. What is the proper way to force this to
> be unlocked? Each time we look to move into production after successful
> testing, something like this always seems to pop up at the last minute,
> rendering oVirt questionable in terms of reliability for some unknown issue.
>
>
>
>
>
>
>
> *From: *Bill Bill 
> *Sent: *Friday, January 20, 2017 7:54 PM
> *To: *users 
> *Subject: *RE: master storage domain stuck in locked state
>
>
>
>
>
> So apparently something didn’t change the metadata to master before the
> connection was lost. I changed the metadata role to master and it came
> back up. Seems emailing in helped, because every time I can’t figure
> something out, I email in and find it shortly after.
>
>
>
>
>
> *From: *Bill Bill 
> *Sent: *Friday, January 20, 2017 7:43 PM
> *To: *users 
> *Subject: *master storage domain stuck in locked state
>
>
>
> No clue how to get this out. I can mount all storage manually on the
> hypervisors. It seems like after a reboot oVirt is now having some issue
> and the storage domain is stuck in locked state. Because of this, can’t
> activate any other storage either, so the other domains are in maintenance
> and the master sits in locked state, has been for hours.
>
>
>
> This sticks out on a hypervisor:
>
>
>
> StoragePoolWrongMaster: Wrong Master domain or its version:
> u'SD=d8a0172e-837f-4552-92c7-566dc4e548e4, pool=3fd2ad92-e1eb-49c2-906d-
> 00ec233f610a'
>
>
>
> Not sure, nothing changed other than a reboot of the storage.
>
>
>
> Engine log shows:
>
>
>
> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
> (DefaultQuartzScheduler8) [5696732b] START, SetVdsStatusVDSCommand(HostName
> = U31U32NodeA, SetVdsStatusVDSCommandParameters:{runAsync='true',
> hostId='70e2b8e4-0752-47a8-884c-837a00013e79', status='NonOperational',
> nonOperationalReason='STORAGE_DOMAIN_UNREACHABLE',
> stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 6db9820a
>
>
>
> No idea why it says unreachable, it certainly is because I can manually
> mount ALL storage to the hypervisor.
>
>
>
> Sent from Mail  for
> Windows 10
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] domain types for export and iso: no fcp and no iscsi

2017-01-22 Thread Maor Lipchuk
On Sun, Jan 22, 2017 at 2:28 AM, Gianluca Cecchi 
wrote:

> Hello,
> I didn't notice before, but it seems that in 4.0.6 when I select "export"
> or "ISO" in Domain Function, then for Storage Type I can only select NFS,
> GlusterFS or POSIX compliant FS.
> If this is true, why this limitation? Any chance to change it, allowing
> FCP and iSCSI also for these kinds of domains?
>

Hi Gianluca,

That is indeed the behavior.
ISO and export are only supported through mount options.

I think that the goal is to replace those types of storage domains.
For example the backup that the export storage domain is used for can be
used with a regular data storage domain since oVirt 3.6.

Yaniv, do we have any open RFEs on our future plans?


>
> Thanks,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted-storage import fails on hyperconverged glusterFS

2017-01-22 Thread Liebe , André-Sebastian
Hello List,

I ran into trouble after moving our hosted engine from NFS to hyperconverged
GlusterFS via the backup/restore[1] procedure. The engine logs that it can't
import and activate the hosted-storage domain, although I can see the storage.
Any hints on how to fix this?

- I created the ha-replica-3 gluster volume prior to hosted-engine-setup using
the host's short name.
- Then I ran hosted-engine-setup to install a new hosted engine (by installing
CentOS 7 and ovirt-engine manually).
- Inside the new hosted engine I restored the last successful backup (which was
in a running state).
- Then I connected to the engine database and removed the old hosted engine by
hand (as part of this patch would do: https://gerrit.ovirt.org/#/c/64966/) and
all known hosts (after marking all VMs as down, for which I got ETL error
messages later on).
- Then I finished up the engine installation by running engine-setup inside
the hosted engine.
- And finally completed hosted-engine-setup.


The new hosted engine came up successfully with all previously known storage, and
after enabling GlusterFS for the cluster this HA host is part of, I could see it
in the Volumes and Storage tabs. After adding the remaining two hosts, the
volume was marked as active.

But here's the error message I get repeatedly since then:
> 2017-01-19 08:49:36,652 WARN  
> [org.ovirt.engine.core.bll.storage.domain.ImportHostedEngineStorageDomainCommand]
>  (org.ovirt.thread.pool-6-thread-10) [3b955ecd] Validation of action 
> 'ImportHostedEngineStorageDomain' failed for user SYSTEM. Reasons: 
> VAR__ACTION__ADD,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_ALREADY_EXIST


There are also some repeating messages about this ha-replica-3 volume, because
I used the host's short name on volume creation, which I can't change AFAIK
without a complete cluster shutdown.
> 2017-01-19 08:48:03,134 INFO  
> [org.ovirt.engine.core.bll.AddUnmanagedVmsCommand] (DefaultQuartzScheduler3) 
> [7471d7de] Running command: AddUnmanagedVmsCommand internal: true.
> 2017-01-19 08:48:03,134 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] START, FullListVDSCommand(HostName = , 
> FullListVDSCommandParameters:{runAsync='true', 
> hostId='f62c7d04-9c95-453f-92d5-6dabf9da874a', 
> vds='Host[,f62c7d04-9c95-453f-92d5-6dabf9da874a]', 
> vmIds='[dfea96e8-e94a-407e-af46-3019fd3f2991]'}), log id: 2d0941f9
> 2017-01-19 08:48:03,163 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] FINISH, FullListVDSCommand, return: 
> [{guestFQDN=, emulatedMachine=pc, pid=0, guestDiskMapping={}, 
> devices=[Ljava.lang.Object;@4181d938, cpuType=Haswell-noTSX, smp=2, 
> vmType=kvm, memSize=8192, vmName=HostedEngine, username=, exitMessage=XML 
> error: maximum vcpus count must be an integer, 
> vmId=dfea96e8-e94a-407e-af46-3019fd3f2991, displayIp=0, displayPort=-1, 
> guestIPs=, 
> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
>  exitCode=1, nicModel=rtl8139,pv, exitReason=1, status=Down, maxVCpus=None, 
> clientIp=, statusTime=6675071780, display=vnc, displaySecurePort=-1}], log 
> id: 2d0941f9
> 2017-01-19 08:48:03,163 ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder] 
> (DefaultQuartzScheduler3) [7471d7de] null architecture type, replacing with 
> x86_64, %s
> 2017-01-19 08:48:17,779 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] START, 
> GlusterServersListVDSCommand(HostName = lvh2, 
> VdsIdVDSCommandParametersBase:{runAsync='true', 
> hostId='23297fc2-db12-4778-a5ff-b74d6fc9554b'}), log id: 57d029dc
> 2017-01-19 08:48:18,177 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] FINISH, GlusterServersListVDSCommand, 
> return: [172.31.1.22/24:CONNECTED, lvh3.lab.gematik.de:CONNECTED, 
> lvh4.lab.gematik.de:CONNECTED], log id: 57d029dc
> 2017-01-19 08:48:18,180 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] START, 
> GlusterVolumesListVDSCommand(HostName = lvh2, 
> GlusterVolumesListVDSParameters:{runAsync='true', 
> hostId='23297fc2-db12-4778-a5ff-b74d6fc9554b'}), log id: 5cd11a39
> 2017-01-19 08:48:18,282 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
> (DefaultQuartzScheduler3) [7471d7de] Could not associate brick 
> 'lvh2:/data/gluster/0/brick' of volume '7dc6410d-8f2a-406c-812a-8235fa6f721c' 
> with correct network as no gluster network found in cluster 
> '57ff41c2-0297-039d-039c-0362'
> 2017-01-19 08:48:18,284 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
> (DefaultQuartzScheduler3) [7471d7de] Could not associate brick 
> 'lvh3:/data/gluster/0/brick' of volume '7dc6410d-8f2a-406c-812a-8235fa6f721c' 
>

[ovirt-users] Invitation: DeepDive: VM to host affinity @ Tue Jan 24, 2017 4pm - 4:50pm (IST) (users@ovirt.org)

2017-01-22 Thread yanir quinn
[Calendar invitation (Google Calendar / Hangouts): "DeepDive: VM to host affinity",
Tuesday, January 24, 2017, 4:00pm - 4:50pm IST, organized by yanir quinn
(yani...@gmail.com); attendees include users@ovirt.org.]
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users