Re: [ovirt-users] Performance Issue

2018-04-18 Thread Tony Brian Albers
On 18/04/18 14:54, Gianluca Cecchi wrote:
> On Wed, Apr 18, 2018 at 2:25 PM, Tony Brian Albers wrote:
> 
> You need a lot faster disks to keep up with all that random I/O.
> 
> "Storage IO is average on 75 M/s Read Disk and 25 Write Disk. (SSD Raid)"
> 
> Ehm, are you running VM's on a DD box?  Why on earth would you do that?
> 
> /tony
> 
> 
> On 2018-04-18 14:09, Thomas Fecke wrote:
> > At least we found the bottleneck. Our Data Domain is the problem.
> > 
> 
> 
> I guess and hope he made a mix between Data Store and Storage domain ;-)
> 

Hopefully, but I've seen people running VMware on Isilon. Guess how that 
worked out for them.. ;)

/tony

-- 
Tony Albers
Systems administrator, IT-development
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark.
Tel: +45 2566 2383 / +45 8946 2316
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt-ha-broker not work

2018-04-18 Thread dhy336
Hi, my hosted engine was working, but after more than an hour its reported 
status went to false, even though the engine itself still works and I can visit 
the webadmin UI. In addition, the daemon "ovirt-ha-broker" is not running; when 
I try to restart it, I see some errors.
Should I debug ovirt-ha-broker and ovirt-ha-agent?
thanks...
Logs:
[root@hosted-engine1 ~]# hosted-engine --vm-status

--== Host 1 status ==--

conf_on_shared_storage             : True
Status up-to-date                  : False
Hostname                           : hosted-engine1
Host ID                            : 1
Engine status                      : unknown stale-data
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 2f3d4df9
local_conf_timestamp               : 5238
Host timestamp                     : 5235
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=5235 (Wed Apr 18 23:27:00 2018)
        host-id=1
        score=3400
        vm_conf_refresh_time=5238 (Wed Apr 18 23:27:02 2018)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineUp
        stopped=False

[root@hosted-engine1 ~]# systemctl restart ovirt-ha-broker
[root@hosted-engine1 ~]# journalctl -xe
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mom-vdsm.service has begun starting up.
Apr 19 11:06:20 hosted-engine1 vdsm[4733]: WARN ping was deprecated in favor of ping2 and confirmConnectivity
Apr 19 11:06:30 hosted-engine1 vdsm[4733]: WARN cannot read eth0 speed
Apr 19 11:06:34 hosted-engine1 ovirt-ha-agent[30837]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Unable to refresh vm.conf from the shared storage. Has this HE cluster correctly
Apr 19 11:06:36 hosted-engine1 vdsm[4733]: WARN unhandled write event
Apr 19 11:06:44 hosted-engine1 ovirt-ha-agent[30837]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 191, in _run_agent
    return action(he)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 64, in action_proper
    return he.start_monitoring()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 421, in start_monitoring
    self._config.refresh_vm_conf()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 492, in refresh_vm_conf
    content_from_ovf = self._get_vm_conf_content_from_ovf_store()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 438, in _get_vm_conf_content_from_ovf_store
    conf = ovf2VmParams.confFromOvf(heovf)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py", line 283, in confFromOvf
    vmConf = toDict(ovf)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py", line 210, in toDict
    vmParams['vmId'] = tree.find('Content/Section').attrib[OVF_NS + 'id']
  File "lxml.etree.pyx", line 2272, in lxml.etree._Attrib.__getitem__ (src/lxml/lxml.etree.c:55336)
KeyError: '{http://schemas.dmtf.org/ovf/envelope/1/}id'
Apr 19 11:06:44 hosted-engine1 ovirt-ha-agent[30837]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent
Apr 19 11:06:45 hosted-engine1 vdsm[4733]: WARN cannot read eth0 speed
Apr 19 11:06:46 hosted-engine1 systemd[1]: mom-vdsm.service holdoff time over, scheduling restart.
Apr 19 11:06:46 hosted-engine1 systemd[1]: Cannot add dependency job for unit lvm2-lvmetad.socket, ignoring: Unit is masked.
Apr 19 11:06:46 hosted-engine1 systemd[1]: Started MOM instance configured for VDSM purposes.
-- Subject: Unit mom-vdsm.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mom-vdsm.service has finished
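[Editorial note] The KeyError in the traceback is a plain dict lookup in ovf2VmParams.toDict(): the OVF read from the OVF_STORE has a <Section> element without the ovf:id attribute that should carry the VM id. A minimal sketch of the failure mode follows; the OVF fragment is hypothetical, and the stdlib ElementTree stands in for lxml, whose attrib lookup raises the same KeyError for a missing key:

```python
import xml.etree.ElementTree as ET

# Namespace prefix as used in the KeyError above (note the trailing slash).
OVF_NS = '{http://schemas.dmtf.org/ovf/envelope/1/}'

# Hypothetical minimal OVF whose <Section> lacks the ovf:id attribute --
# the situation the traceback points at. The real OVF_STORE volume holds a
# full OVF envelope.
broken_ovf = '<Envelope><Content><Section kind="vm"/></Content></Envelope>'

section = ET.fromstring(broken_ovf).find('Content/Section')

# attrib behaves like a plain dict, so a missing attribute raises KeyError
# exactly as in toDict():
try:
    vm_id = section.attrib[OVF_NS + 'id']
except KeyError as exc:
    print('KeyError:', exc)

# A defensive lookup would use .get() and treat None as "OVF_STORE content
# is incomplete":
print(section.attrib.get(OVF_NS + 'id'))  # -> None
```

In other words, the agent appears to be crashing on an incomplete or malformed OVF_STORE document and restarting, which would explain the loop; the thing to chase is why the stored OVF lacks the VM id, not the broker itself.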

Re: [ovirt-users] vm's are diskless

2018-04-18 Thread johan . vermeulen7
I would like to add:

these now-diskless and inactive VMs originally ran on a host that has since 
broken down. After that host stopped running, the VMs ran happily on another 
host until the update-and-reboot.

They are now listed under the cluster, but are not listed under storage, nor 
are the disks.

greetings, J.

- Original message -
From: "johan vermeulen7" 
To: "Benny Zlotnik" 
Cc: "users" 
Sent: Thursday, 19 April 2018 04:34:19
Subject: Re: [ovirt-users] vm's are diskless

Hello Benny,

thanks for helping me out.

vdsm.log:

[snip: vdsm.log excerpt identical to the one in the original message below]

Re: [ovirt-users] vm's are diskless

2018-04-18 Thread johan . vermeulen7
Hello Benny,

thanks for helping me out.

vdsm.log:

2018-04-19 04:21:22,374+0200 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=0640a9e1-fa51-493f-ab4c-6d441031598c (api:46)
2018-04-19 04:21:22,374+0200 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=0640a9e1-fa51-493f-ab4c-6d441031598c (api:52)
2018-04-19 04:21:22,374+0200 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:569)
2018-04-19 04:21:27,380+0200 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=af974d41-d8c2-443b-ad32-d554a019f61a (api:46)
2018-04-19 04:21:27,380+0200 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=af974d41-d8c2-443b-ad32-d554a019f61a (api:52)
2018-04-19 04:21:27,380+0200 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:569)
2018-04-19 04:21:32,385+0200 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=e94bd6f1-b62b-4544-a138-759b9cad4224 (api:46)
2018-04-19 04:21:32,386+0200 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=e94bd6f1-b62b-4544-a138-759b9cad4224 (api:52)
2018-04-19 04:21:32,386+0200 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:569)
2018-04-19 04:21:32,559+0200 INFO  (periodic/1) [vdsm.api] START 
repoStats(options=None) from=internal, 
task_id=f80072ef-9b78-402f-8ae7-aabe2ec82d6c (api:46)
2018-04-19 04:21:32,559+0200 INFO  (periodic/1) [vdsm.api] FINISH repoStats 
return={} from=internal, task_id=f80072ef-9b78-402f-8ae7-aabe2ec82d6c (api:52)
2018-04-19 04:21:36,305+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmStats succeeded in 0.01 seconds (__init__:539)
2018-04-19 04:21:36,312+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmIoTunePolicies succeeded in 0.00 seconds (__init__:539)
2018-04-19 04:21:37,391+0200 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=89a487e6-2b67-48d5-8424-ac0017339e73 (api:46)
2018-04-19 04:21:37,392+0200 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=89a487e6-2b67-48d5-8424-ac0017339e73 (api:52)
2018-04-19 04:21:37,392+0200 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:569)
2018-04-19 04:21:42,397+0200 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=ee3dab0d-8036-41ea-b5ab-13249118e168 (api:46)
2018-04-19 04:21:42,398+0200 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=ee3dab0d-8036-41ea-b5ab-13249118e168 (api:52)
2018-04-19 04:21:42,398+0200 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:569)
2018-04-19 04:21:47,399+0200 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=077fdc38-0cd2-480d-9ada-cd5884c6e388 (api:46)
2018-04-19 04:21:47,399+0200 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=077fdc38-0cd2-480d-9ada-cd5884c6e388 (api:52)
2018-04-19 04:21:47,399+0200 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:569)
2018-04-19 04:21:47,583+0200 INFO  (periodic/1) [vdsm.api] START 
repoStats(options=None) from=internal, 
task_id=15a59bf1-6514-4b68-8172-22bde5187701 (api:46)
2018-04-19 04:21:47,584+0200 INFO  (periodic/1) [vdsm.api] FINISH repoStats 
return={} from=internal, task_id=15a59bf1-6514-4b68-8172-22bde5187701 (api:52)
2018-04-19 04:21:51,315+0200 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmStats succeeded in 0.01 seconds (__init__:539)
2018-04-19 04:21:51,322+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmIoTunePolicies succeeded in 0.00 seconds (__init__:539)
2018-04-19 04:21:52,405+0200 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=aed02ac4-137e-4add-bb20-c9391b45e569 (api:46)
2018-04-19 04:21:52,405+0200 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
task_id=aed02ac4-137e-4add-bb20-c9391b45e569 (api:52)
2018-04-19 04:21:52,405+0200 INFO  (vmrecovery) [vds] recovery: waiting for 
storage pool to go up (clientIF:569)
2018-04-19 04:21:57,411+0200 INFO  (vmrecovery) [vdsm.api] START 
getConnectedStoragePoolsList(options=None) from=internal, 
task_id=caa68f2e-ebc4-46eb-824e-12d3ab1709bc (api:46)
2018-04-19 04:21:57,411+0200 INFO  (vmrecovery) [vdsm.api] FINISH 
getConnectedStoragePoolsList return={'poollist': []} from=internal, 
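[Editorial note] The repeating vmrecovery pattern in this vdsm.log can be checked mechanically: every FINISH that returns an empty poollist means the host is attached to no storage pool, so VM recovery spins in the "waiting for storage pool to go up" loop and the VMs' disks never appear. A small sketch, with log lines abbreviated from the excerpt above:

```python
import re

# Representative vdsm.log entries (trimmed from the excerpt above).
sample = """\
2018-04-19 04:21:22,374+0200 INFO  (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=0640a9e1-fa51-493f-ab4c-6d441031598c (api:52)
2018-04-19 04:21:22,374+0200 INFO  (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:569)
2018-04-19 04:21:27,380+0200 INFO  (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=af974d41-d8c2-443b-ad32-d554a019f61a (api:52)
2018-04-19 04:21:27,380+0200 INFO  (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:569)
"""

# Count the two symptoms: empty pool lists and recovery-wait messages.
empty = len(re.findall(r"return=\{'poollist': \[\]\}", sample))
waiting = sample.count('waiting for storage pool to go up')
print(empty, waiting)  # -> 2 2
```

If a scan of the full log shows only empty poollist results, the host never (re)connected to the storage pool after the reboot, which would also explain why the disks are invisible in the UI.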

Re: [ovirt-users] vm's are diskless

2018-04-18 Thread Benny Zlotnik
Can you attach engine and vdsm logs?
Also, which version are you using?


On Wed, 18 Apr 2018, 19:23 ,  wrote:

> Hello All,
>
> after an update and a reboot, 3 vm's are indicated as diskless.
> When I try to add disks I indeed see 3 available disks, but I also see that
> all 3 are indicated to be smaller than 1 GB.
> Also I do not know which disk goes with which VM.
>
> The version I'm running is now users@ovirt.org;
> I apologize if this question was raised ( many ) times before.
>
> Greetings, J.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to add host to cluster after network

2018-04-18 Thread ~Stack~
On 04/18/2018 09:55 AM, ~Stack~ wrote:
> On 04/18/2018 08:41 AM, Eitan Raviv wrote:
>> Hi Stack,
>>
>> I read through your ordeal and I would like to post a few comments:
> 
> Thanks I appreciate it!
> 
>>   * When I try to reproduce your scenario with the second network set to
>> 'not required' before on-boarding the second host, it  is processed
>> and set to 'up' by the engine without any hiccups or any errors in
>> the log.
> 
> Hrm. Yeah, I think I can reproduce the failure. I've only done it once,
> but I have the chance to test, so just to make sure I've got the right
> information I'm going to run another test specifically for it.
> 

I agree with you, Eitan. I did a complete rebuild and made sure my
alternate network was set to 'not required' before adding the second
host. I successfully added a second host. It is possible I did something
else wrong in that first test.

Since this is an acceptable work-around for now, I am going to finish
building my hosts out so I can move forward with this project.

I would still like feedback on my other questions in the original post
if anyone is willing.

Thanks!
~Stack~





Re: [ovirt-users] vm's are diskless

2018-04-18 Thread johan . vermeulen7


- Original message -
From: "johan vermeulen7" 
To: "users" 
Sent: Wednesday, 18 April 2018 18:16:56
Subject: [ovirt-users] vm's are diskless

Hello All, 

after an update and a reboot, 3 vm's are indicated as diskless. 
When I try to add disks I indeed see 3 available disks, but I also see that 
all 3 are indicated to be smaller than 1 GB. 
Also I do not know which disk goes with which VM. 

The version I'm running is now users@ovirt.org; 
I apologize if this question was raised ( many ) times before. 

Greetings, J. 

Update on the previous message: in the VM properties, the VMs are allocated to 
a host that is now unresponsive.
I have now corrected that, but it does not seem to help the disk issue.

greetings, J.



[ovirt-users] vm's are diskless

2018-04-18 Thread johan . vermeulen7
Hello All, 

after an update and a reboot, 3 vm's are indicated as diskless. 
When I try to add disks I indeed see 3 available disks, but I also see that 
all 3 are indicated to be smaller than 1 GB. 
Also I do not know which disk goes with which VM. 

The version I'm running is now users@ovirt.org; 
I apologize if this question was raised ( many ) times before. 

Greetings, J. 


Re: [ovirt-users] Unable to add host to cluster after network

2018-04-18 Thread ~Stack~
On 04/18/2018 09:55 AM, ~Stack~ wrote:
> On 04/18/2018 08:41 AM, Eitan Raviv wrote:
[snip]

>> but on my setup it can be resolved: initially the second
>> network is proclaimed missing and the host becomes non-operational,
>> with its interfaces disappearing from the engine as you reported.
>> But if the second network is rendered 'not-required' or even deleted
>> for that matter from the engine, engine succeeds in reconnecting to
>> the second host within a couple of minutes, and the host gains 'up'
>> status.
> 
> Setting the second network to 'not-required' does not seem to break my
> hosts out of their infinite loop.

Confirmed. Setting the second network to 'not required' did not break
the loop. I hard powered off the box, let ovirt set it as down (thus
breaking the loop), then powered it back on. The loop continued (at
least twice anyway - takes roughly 5 minutes for a loop).

> 
> I haven't tried deleting the second network yet. Let me try that before
> I rebuild to test the first point.

Confirmed. Same thing as above only this time I deleted every network
but ovirtmgmt. Again, went through 2 full loops without resolving.

I am going to do a fresh rebuild and test by having the second network
set to 'not required' before adding a second host.

~Stack~





Re: [ovirt-users] Unable to add host to cluster after network

2018-04-18 Thread ~Stack~
On 04/18/2018 08:41 AM, Eitan Raviv wrote:
> Hi Stack,
> 
> I read through your ordeal and I would like to post a few comments:

Thanks I appreciate it!

>   * When I try to reproduce your scenario with the second network set to
> 'not required' before on-boarding the second host, it  is processed
> and set to 'up' by the engine without any hiccups or any errors in
> the log.

Hrm. Yeah, I think I can reproduce the failure. I've only done it once,
but I have the chance to test, so just to make sure I've got the right
information I'm going to run another test specifically for it.


>   * On the other hand, if the network is 'required' the scenario
> reproduces,

Whoo! I'm not completely crazy! I'm just lucky to discover a new bug I
suppose. :-)

> but on my setup it can be resolved: initially the second
> network is proclaimed missing and the host becomes non-operational,
> with its interfaces disappearing from the engine as you reported.
> But if the second network is rendered 'not-required' or even deleted
> for that matter from the engine, engine succeeds in reconnecting to
> the second host within a couple of minutes, and the host gains 'up'
> status.

Setting the second network to 'not-required' does not seem to break my
hosts out of their infinite loop.

I haven't tried deleting the second network yet. Let me try that before
I rebuild to test the first point.

Thank you for your feedback. It is much appreciated.

~Stack~





[ovirt-users] [ANN] oVirt 4.2.3 Second Release Candidate is now available

2018-04-18 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.2.3 Second Release Candidate, as of April 18th, 2018

This update is a release candidate of the third in a series of
stabilization updates to the 4.2
series.
This is pre-release software and should not be used in production.

This release is available now for:
* Red Hat Enterprise Linux 7.5 or later
* CentOS Linux (or similar) 7.4 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.5 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.2

See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes: due to compose and build issues:
- oVirt Appliance will be available tomorrow morning
- oVirt Node will be available tomorrow morning [2]

Additional Resources:
* Read more about the oVirt 4.2.3 release highlights:
http://www.ovirt.org/release/4.2.3/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.2.3/
[2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/

-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA 

sbona...@redhat.com




Re: [ovirt-users] oVirt - AD authentication Issues.

2018-04-18 Thread Michael Mortensen (MCMR)
Hi,

Re. FQDN:
The full login name is a combination of the username from your AD, e.g. 
ban-m...@banone..net, and the profile name that was configured during LDAP 
setup, e.g. "@internal" or in this case "@". During the setup you were asked to 
enter a name, and as far as I can tell it has no real effect what you pick. It 
could have been "@banone" for all oVirt cared, I believe.

Re. user login:
oVirt distinguishes between being authorized to log in at all and being allowed 
into a particular portal. If you make sure your user account has admin 
privileges, you should be able to log into the Administration Portal, too. 
Check the permissions.


// Mike
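[Editorial note] The "@BANONE.nsn-rdnet.net" half of the double FQDN comes from the authn profile name chosen during ovirt-engine-extension-aaa-ldap-setup, which is stored in the generated authn properties file. A rough, illustrative sketch — the file name and values below are assumptions based on the domain in this thread, not the poster's actual config:

```
# /etc/ovirt-engine/extensions.d/BANONE.nsn-rdnet.net-authn.properties (illustrative)
ovirt.engine.extension.name = BANONE.nsn-rdnet.net-authn
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
# This is the "@..." suffix shown at the login screen:
ovirt.engine.aaa.authn.profile.name = BANONE.nsn-rdnet.net
```

If the profile was named after the AD domain, the effective login becomes user@AD-domain@profile, which is why the FQDN appears twice; that is cosmetic as long as the right profile is selected.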



From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of G, 
Maghesh Kumar (Nokia - IN/Bangalore)
Sent: 18. april 2018 11:27
To: users@ovirt.org
Subject: [ovirt-users] oVirt - AD authentication Issues.

Hi,


Description of problem:

Not able to perform operations like Administration portal or VM Portal.

Also not sure why FQDN appears twice!...

ERROR: The user 
ban-m...@banone.nsn-rdnet.net@BANONE.nsn-rdnet.net
 is not authorized to perform login



oVirt Engine Version: Ovirt-4.2.2

Host is installed with RHEL 7.4





Actual results:

2018-04-18 14:35:51,388+05 INFO  
[org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-55) 
[27275ec2] Running command: CreateUserSessionCommand internal: false.
2018-04-18 14:35:51,412+05 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-55) [27275ec2] EVENT_ID: USER_VDC_LOGIN_FAILED(114), User 
ban-m...@banone.nsn-rdnet.net@BANONE.nsn-rdnet.net
 connecting from '10.136.189.117' failed to log in.
2018-04-18 14:35:51,413+05 ERROR 
[org.ovirt.engine.core.aaa.servlet.SsoPostLoginServlet] (default task-55) [] 
The user 
ban-m...@banone.nsn-rdnet.net@BANONE.nsn-rdnet.net
 is not authorized to perform login



Please guide us on how to proceed.

Thank you!.

Regards,
Maghesh



[ovirt-users] oVirt - AD authentication Issues.

2018-04-18 Thread G, Maghesh Kumar (Nokia - IN/Bangalore)
Hi,


Description of problem:

Not able to perform operations like Administration portal or VM Portal.

Also not sure why FQDN appears twice!...

ERROR: The user ban-m...@banone.nsn-rdnet.net@BANONE.nsn-rdnet.net is not 
authorized to perform login



oVirt Engine Version: Ovirt-4.2.2

Host is installed with RHEL 7.4





Actual results:

2018-04-18 14:35:51,388+05 INFO  
[org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-55) 
[27275ec2] Running command: CreateUserSessionCommand internal: false.
2018-04-18 14:35:51,412+05 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-55) [27275ec2] EVENT_ID: USER_VDC_LOGIN_FAILED(114), User 
ban-m...@banone.nsn-rdnet.net@BANONE.nsn-rdnet.net connecting from 
'10.136.189.117' failed to log in.
2018-04-18 14:35:51,413+05 ERROR 
[org.ovirt.engine.core.aaa.servlet.SsoPostLoginServlet] (default task-55) [] 
The user ban-m...@banone.nsn-rdnet.net@BANONE.nsn-rdnet.net is not authorized 
to perform login



Please guide us on how to proceed.

Thank you!.

Regards,
Maghesh



Re: [ovirt-users] Unable to add host to cluster after network

2018-04-18 Thread Eitan Raviv
Hi Stack,

I read through your ordeal and I would like to post a few comments:

   - When I try to reproduce your scenario with the second network set to
   'not required' before on-boarding the second host, it  is processed and set
   to 'up' by the engine without any hiccups or any errors in the log.
   - On the other hand, if the network is 'required' the scenario
   reproduces, but on my setup it can be resolved: initially the second
   network is proclaimed missing and the host becomes non-operational, with
   its interfaces disappearing from the engine as you reported. But if the
   second network is rendered 'not-required' or even deleted for that matter
   from the engine, engine succeeds in reconnecting to the second host within
   a couple of minutes, and the host gains 'up' status.

HTH

On Tue, Apr 17, 2018 at 11:35 PM, ~Stack~  wrote:

> Greetings,
>
> After a few days of trial, error, and madness - I *think* I found the
> source of my problem. Or at least I can now replicate it reliably. These
> are the basics of my speed-run-to-test-failures setup.
>
> Fresh minimal install of Scientific Linux 7.4 on a physical host for my
> engine. Add the 4.2 repo and run engine-setup - just blast through the
> defaults. Configure it with default DC and cluster.
>
> Fresh minimal install of Scientific Linux 7.4 on node1 - configure only
> the primary network card. Add the ovirt repo.
>
> Add the host into cluster. Provisions just fine. Life is good.
>
> Now here is where things split.
>
> Scenario 1: build node2 same as node 1 configuring only the primary
> network card and add it as a host. Provisions just fine. Life is good.
>
> Scenario 2: Configure a second network. In my case a BMC/IPMI network.
> Doesn't matter if it is required or not - both will cause failures
> however the errors are slightly more evident with required. Make sure
> the network is assigned to your node1 and is properly assigned an IP and
> configured in the up state. Now build node2 same as before with only the
> primary network configured and add it as a host.
>
> Failure followed by infinite loop of setting it into Non-Operational!
>
>
> The pop-up gives you some crap about "Host has no default route." but
> that is 100% a red-herring.
>
> Dig a little deeper and you get a message like this:
> "node2 does not comply with the cluster Default networks, the following
> networks are missing on host: 'ovirtmgmt'"
>
> Ah. That's a bit more relevant, but why can't it configure it? Or at
> least get to the point where it asks me "Hey, networking is a bit off -
> do you want to configure that now?" That would be nice...
>
> Fortunately the troubleshooting guide has something about that!
> https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting/
>
> Unfortunately, it doesn't do anything to help. Even after doing these
> steps, the loop just keeps going...nothing changes.
> https://www.ovirt.org/develop/developer-guide/vdsm/installing-vdsm-from-rpm/
>
> Scratch it all and completely rebuild AGAIN for...
> Scenario 3: Configure a second network (BMC) and assign it to node1 just
> like before. Build out node2 same as node1 but this time add in the
> EXACT SAME NETWORK CONFIGURATION THAT IS WORKING ON NODE1 - ALL of the
> ifcfg-* files (but update the IP address to correct host, obviously).
> Now add it as a host.
>
> Doh! Same error. :-/
>
> OK fine. Let's really get into it. First off, the networking page for
> the host is blank. It never pulls back the network cards so you can't
> actually make changes via the web page. Nor can you assign networks. So
> the web interface doesn't help at all.
>
> Let's look at the engine log instead.
>
>
> 2018-04-17 14:33:00,336-05 INFO
> [org.ovirt.engine.core.bll.VdsEventListener]
> (EE-ManagedThreadFactory-engine-Thread-1091) []
> ResourceManager::vdsNotResponding entered for Host
> 'f0a3d515-8ba2-490e-8d65-54edbb52cefc', '192.168.1.4'
> 2018-04-17 14:33:00,360-05 INFO
> [org.ovirt.engine.core.bll.pm.VdsNotRespondingTreatmentCommand]
> (EE-ManagedThreadFactory-engine-Thread-1091) [5291eee5] Lock Acquired to
> object
> 'EngineLock:{exclusiveLocks='[f0a3d515-8ba2-490e-8d65-
> 54edbb52cefc=VDS_FENCE]',
> sharedLocks=''}'
> 2018-04-17 14:33:00,388-05 ERROR
> [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-44) [2b853e43] Host
> 'node2' is set to Non-Operational, it is missing the following networks:
> 'ovirtmgmt'
> 2018-04-17 14:33:00,403-05 WARN
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-44) [2b853e43] EVENT_ID:
> VDS_SET_NONOPERATIONAL_NETWORK(519), Host node2 does not comply with the
> cluster Default networks, the following networks are missing on host:
> 'ovirtmgmt'
> 2018-04-17 14:33:00,407-05 INFO
> [org.ovirt.engine.core.bll.pm.VdsNotRespondingTreatmentCommand]
> (EE-ManagedThreadFactory-engine-Thread-1091) [5291eee5] 

Re: [ovirt-users] Performance Issue

2018-04-18 Thread Gianluca Cecchi
On Wed, Apr 18, 2018 at 2:25 PM, Tony Brian Albers  wrote:

> You need a lot faster disks to keep up with all that random I/O.
>
> "Storage IO is average on 75 M/s Read Disk and 25 Write Disk. ( SSD
>  > Raid )"
>
> Ehm, are you running VM's on a DD box?  Why on earth would you do that?
>
> /tony
>
>
> On 2018-04-18 14:09, Thomas Fecke wrote:
> > At least we found the bottleneck. Our Data Domain is the problem
> >


I guess and hope he made a mix between Data Store and Storage domain ;-)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Performance Issue

2018-04-18 Thread Tony Brian Albers
You need a lot faster disks to keep up with all that random I/O.

"Storage IO is average on 75 M/s Read Disk and 25 Write Disk. ( SSD
 > Raid )"

Ehm, are you running VM's on a DD box?  Why on earth would you do that?

/tony


On 2018-04-18 14:09, Thomas Fecke wrote:
> At least we found the bottleneck. Our Data Domain is the problem
> 
> IOStat:
> 
> Device:  rrqm/s   wrqm/s     r/s      w/s    rkB/s     wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
> sda        0,00   615,67   78,00   962,33  2125,33  37065,33    75,34     5,68   5,42    1,10    5,77   0,96  99,40
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    1,44   22,63    0,00   75,93
> 
> Device:  rrqm/s   wrqm/s     r/s      w/s    rkB/s     wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
> sda        0,67   655,33   92,00  1033,67  2446,67  38390,67    72,56     5,66   5,06    1,83    5,35   0,88  99,50
> 
> Any idea why we get so many write requests? It's getting up to 2000.
> 
> I guess the IO is from our Windows VMs
> 
> *From:* users-boun...@ovirt.org *On Behalf Of* Thomas Fecke
> *Sent:* Dienstag, 17. April 2018 16:18
> *To:* Roy Golan 
> *Cc:* users@ovirt.org
> *Subject:* Re: [ovirt-users] Performance Issue
> 
> Guest: Win 10 and 2016
> 
> Disk: VirtIO-SCSI – thin provisioned
> 
> Domain: Single Data Domain – NFS 4
> 
> *From:*Roy Golan 
> *Sent:* Dienstag, 17. April 2018 12:58
> *To:* Thomas Fecke 
> *Cc:* users@ovirt.org
> *Subject:* Re: [ovirt-users] Performance Issue
> 
> On Tue, 17 Apr 2018 at 12:58 Thomas Fecke wrote:
> 
> Hey Thank you,
> 
> I've been monitoring for about an hour now. The template VMs are
> really slow – copying a new template is slow. Non-template VMs are fast.
> 
> Storage IO averages 75 MB/s read and 25 MB/s write ( SSD
> Raid )
> 
> Network is on 500 Mbit/s internal ( 10 Gbit storage connection )
> 
> And about 25 Mbit/s external ( 400.000 k Internet )
> 
> I really can't find that bottleneck
> 
> 
> what type is the disks, the domain, how many domains ?
> 
> *From:* Roy Golan
> *Sent:* Dienstag, 17. April 2018 11:52
> *To:* Thomas Fecke
> *Cc:* users@ovirt.org 
> *Subject:* Re: [ovirt-users] Performance Issue
> 
> On Tue, 17 Apr 2018 at 12:45 Thomas Fecke wrote:
> 
> Okay,
> 
> It seems that the storage IO is the problem – every write
> and read process takes a lot of time.
> 
> Sometimes the copy job stalls at 0 bytes and the traffic goes up
> and down like a mountain.
> 
> But the storage read and write MB/s look fine… I don't get it.
> 
> Any thoughts?
> 
> Keep monitoring your storage backend interface and see what's going
> on there if you don't see anything special on the host. It could be
> the network that leads to slow IO as well, who knows.
> 
> If the initial VM creation is taking long you might want to create a
> pool from your template with Pre-Started VMs, and that would at
> least save you from the wait when you actually need the VM.
> 
> *From:* users-boun...@ovirt.org *On Behalf Of* Thomas Fecke
> *Sent:* Dienstag, 17. April 2018 10:58
> *To:* users@ovirt.org 
> *Subject:* [ovirt-users] Performance Issue
> 
> Hey Guys,
> 
> We deploy a lot of templates. We have our training environment
> built in oVirt.
> 
> Our problem – when we deploy the same template, the VMs get
> slower every time we deploy a new VM from that template.
> 
> And I really don't know why. RAM looking good – CPU and network
> looking good.
> 
> But it takes about 15 minutes to create a new template-based VM –
> normally it takes about 30 seconds.
> 
> I checked nload and the interface traffic – but it's not really high.
> 
> Can someone explain why its getting so slow and how to troubleshoot?
> 


-- 
Tony Albers
Systems administrator, IT-development
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark.
Tel: +45 2566 2383 / +45 8946 2316

[ovirt-users] Rebooted host shows running vms

2018-04-18 Thread Bruckner, Simone
Hi all,

  we had an unexpected shutdown of one of our hypervisor nodes caused by a 
hardware problem. We ran "Confirm that the host has been rebooted" and as long 
as the host is in maintenance mode, we see 0 VMs running. But when we activate 
the host, it shows 14 VMs running. How can we get this cleaned up?

We run oVirt 4.2.1.7-1.el7.centos.

Thank you and all the best,
Simone Bruckner



Re: [ovirt-users] Performance Issue

2018-04-18 Thread Thomas Fecke
At least we found the bottleneck. Our Data Domain is the problem

IOStat:

Device:  rrqm/s   wrqm/s     r/s      w/s    rkB/s     wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda        0,00   615,67   78,00   962,33  2125,33  37065,33    75,34     5,68   5,42    1,10    5,77   0,96  99,40

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0,00    0,00    1,44   22,63    0,00   75,93

Device:  rrqm/s   wrqm/s     r/s      w/s    rkB/s     wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda        0,67   655,33   92,00  1033,67  2446,67  38390,67    72,56     5,66   5,06    1,83    5,35   0,88  99,50


Any idea why we get so many write requests? It's getting up to 2000.

I guess the IO is from our Windows VMs
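The w/s and wkB/s columns in the iostat sample above can be turned into per-request figures; a quick sketch using the quoted numbers:

```python
# Sanity-check the iostat sample quoted above: roughly a thousand writes
# per second at ~99% utilization points to random I/O rather than raw
# bandwidth as the limit.
w_per_s = 1033.67       # writes per second (w/s)
wkB_per_s = 38390.67    # write throughput (wkB/s)
r_per_s = 92.00         # reads per second (r/s)

avg_write_kb = wkB_per_s / w_per_s
total_iops = r_per_s + w_per_s

print(f"average write size: {avg_write_kb:.1f} KiB")  # ~37 KiB per write
print(f"total IOPS: {total_iops:.0f}")                # ~1126 IOPS sustained
```

At ~37 KiB per write, the disk array is handling many small requests, which matches the high await/%util rather than a throughput problem.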







From: users-boun...@ovirt.org  On Behalf Of Thomas 
Fecke
Sent: Dienstag, 17. April 2018 16:18
To: Roy Golan 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Performance Issue

Guest: Win 10 and 2016
Disk: VirtIO-SCSI – thin provisioned
Domain: Single Data Domain – NFS 4


From: Roy Golan 
Sent: Dienstag, 17. April 2018 12:58
To: Thomas Fecke 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Performance Issue


On Tue, 17 Apr 2018 at 12:58 Thomas Fecke wrote:
Hey Thank you,

I've been monitoring for about an hour now. The template VMs are really slow –
copying a new template is slow. Non-template VMs are fast.

Storage IO averages 75 MB/s read and 25 MB/s write ( SSD Raid )
Network is on 500 Mbit/s internal ( 10 Gbit storage connection )
And about 25 Mbit/s external ( 400.000 k Internet )

I really can't find that bottleneck
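For what it's worth, the throughput quoted above is nowhere near the 10 Gbit storage link, which is one way to see that the limit is latency/IOPS rather than bandwidth; a rough back-of-envelope sketch using the numbers from this message:

```python
# Back-of-envelope: compare the reported storage throughput with the
# 10 Gbit/s storage link mentioned in the thread.
read_MBps, write_MBps = 75.0, 25.0        # "75 M/s read, 25 write"
total_bits_per_s = (read_MBps + write_MBps) * 1e6 * 8
link_bits_per_s = 10e9                    # 10 Gbit/s storage connection

utilization = total_bits_per_s / link_bits_per_s
print(f"link utilization: {utilization:.1%}")  # 8.0% – bandwidth is not the bottleneck
```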


what type is the disks, the domain, how many domains ?

From: Roy Golan
Sent: Dienstag, 17. April 2018 11:52
To: Thomas Fecke
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Performance Issue


On Tue, 17 Apr 2018 at 12:45 Thomas Fecke wrote:
Okay,

It seems that the storage IO is the problem – every write and read
process takes a lot of time.

Sometimes the copy job stalls at 0 bytes and the traffic goes up and down like
a mountain.

But the storage read and write MB/s look fine… I don't get it.

Any thoughts?


Keep monitoring your storage backend interface and see what's going on there if 
you don't see anything special on the host. It could be the network that leads 
to slow IO as well, who knows.
If the initial VM creation is taking long you might want to create a pool from 
your template with Pre-Started VMs, and that would at least save you from the 
wait when you actually need the VM.

From: users-boun...@ovirt.org On Behalf Of Thomas Fecke
Sent: Dienstag, 17. April 2018 10:58
To: users@ovirt.org
Subject: [ovirt-users] Performance Issue

Hey Guys,

We deploy a lot of templates. We have our training environment built in oVirt.

Our problem – when we deploy the same template, the VMs get slower every
time we deploy a new VM from that template.

And I really don't know why. RAM looking good – CPU and network looking good.

But it takes about 15 minutes to create a new template-based VM – normally it
takes about 30 seconds.

I checked nload and the interface traffic – but it's not really high.

Can someone explain why its getting so slow and how to troubleshoot?



Re: [ovirt-users] Add nodes to single-node hyperconverged cluster

2018-04-18 Thread Denis Chaplygin
Hello!

On Tue, Apr 17, 2018 at 5:49 PM, Joe DiTommasso  wrote:

> Thanks! I realized yesterday that I've got a few hosts I was in the
> process of decommissioning that I can temporarily use for this. So my new
> plan is to build a 3-node cluster with junk hosts and cycle in the good
> ones.
>
>
It is definitely the best way to achieve your goal! :)


Re: [ovirt-users] custom hosted-engine issues

2018-04-18 Thread Martin Sivak
Hi,

That part is related to the hosted engine storage. You need an
additional storage domain for regular VMs as specified in the note I
sent you. Add the storage using the webadmin UI.

Best regards

--
Martin Sivak
SLA / oVirt

On Wed, Apr 18, 2018 at 11:55 AM,   wrote:
> Select the type of storage to use.
>
>  Please specify the storage you would like to use (glusterfs, iscsi, fc,
> nfs3, nfs4)[nfs3]:
>
> For NFS storage types, specify the full address, using either the FQDN or IP
> address, and path name of the shared storage domain.
>
>   Please specify the full shared storage connection path to use (example:
> host:/path): storage.example.com:/hosted_engine/nfs
>
> I followed this guide to configure my NFS shared storage, but this storage was
> not added to the oVirt engine automatically; I do not know why it was not
> added automatically.
>
> - Original Message -
> From: Martin Sivak
> To: dhy336
> Cc: users
> Subject: Re: [ovirt-users] custom hosted-engine issues
> Date: 2018-04-18 17:40
>
>
> Hi,
> you need to add a storage domain for VMs first. The hosted engine
> domain and VM will then be auto imported.
> See the following in the Hosted engine deployment guide:
> https://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/
> "Important: Log in as the admin@internal user to continue configuring
> the Engine and add further resources. You must create another data
> domain for the data center to be initialized to host regular virtual
> machine data, and for the Engine virtual machine to be visible."
> You seem to be using oVirt 4.1 so please note that the oVirt 4.2.2
> release now supports much better and safer deployment method.
> Best regards
> --
> Martin Sivak
> SLA / oVirt
> On Wed, Apr 18, 2018 at 11:08 AM,  wrote:
>> Hi,
>> I set up the hosted engine and it succeeded, but it has not added my shared
>> storage (NFS) as a storage domain.
>> I can't find the engine VM in the webadmin UI under Compute -> Virtual Machines.
>> There is no Hosted Engine sub-tab in the webadmin UI when I add the host to
>> the oVirt engine.
>>
>> would you give me some advise? thanks...
>>
>>


[ovirt-users] Re: Re: custom hosted-engine_issues

2018-04-18 Thread dhy336
Thanks, and sorry – I misunderstood what you meant. I added a data domain and it
works.

- Original Message -
From:
To: "Martin Sivak"
Cc: users
Subject: [ovirt-users] Re: custom hosted-engine_issues
Date: 2018-04-18 17:56

Select the type of storage to use.

  Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:

For NFS storage types, specify the full address, using either the FQDN or IP
address, and path name of the shared storage domain.

  Please specify the full shared storage connection path to use (example:
host:/path): storage.example.com:/hosted_engine/nfs

I followed this guide to configure my NFS shared storage, but this storage was
not added to the oVirt engine automatically; I do not know why it was not
added automatically.
- Original Message -
From: Martin Sivak
To: dhy336
Cc: users
Subject: Re: [ovirt-users] custom hosted-engine issues
Date: 2018-04-18 17:40


Hi,
you need to add a storage domain for VMs first. The hosted engine
domain and VM will then be auto imported.
See the following in the Hosted engine deployment guide:
https://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/
"Important: Log in as the admin@internal user to continue configuring
the Engine and add further resources. You must create another data
domain for the data center to be initialized to host regular virtual
machine data, and for the Engine virtual machine to be visible."
You seem to be using oVirt 4.1 so please note that the oVirt 4.2.2
release now supports much better and safer deployment method.
Best regards
--
Martin Sivak
SLA / oVirt
On Wed, Apr 18, 2018 at 11:08 AM,   wrote:
> Hi,
> I set up the hosted engine and it succeeded, but it has not added my shared
> storage (NFS) as a storage domain.
> I can't find the engine VM in the webadmin UI under Compute -> Virtual Machines.
> There is no Hosted Engine sub-tab in the webadmin UI when I add the host to the
> oVirt engine.
>
> would you give me some advise? thanks...
>
>


[ovirt-users] Re: custom hosted-engine issues

2018-04-18 Thread dhy336
Select the type of storage to use.

  Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:

For NFS storage types, specify the full address, using either the FQDN or IP
address, and path name of the shared storage domain.

  Please specify the full shared storage connection path to use (example:
host:/path): storage.example.com:/hosted_engine/nfs

I followed this guide to configure my NFS shared storage, but this storage was
not added to the oVirt engine automatically; I do not know why it was not
added automatically.
- Original Message -
From: Martin Sivak
To: dhy336
Cc: users
Subject: Re: [ovirt-users] custom hosted-engine issues
Date: 2018-04-18 17:40


Hi,
you need to add a storage domain for VMs first. The hosted engine
domain and VM will then be auto imported.
See the following in the Hosted engine deployment guide:
https://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/
"Important: Log in as the admin@internal user to continue configuring
the Engine and add further resources. You must create another data
domain for the data center to be initialized to host regular virtual
machine data, and for the Engine virtual machine to be visible."
You seem to be using oVirt 4.1 so please note that the oVirt 4.2.2
release now supports much better and safer deployment method.
Best regards
--
Martin Sivak
SLA / oVirt
On Wed, Apr 18, 2018 at 11:08 AM,   wrote:
> Hi,
> I set up the hosted engine and it succeeded, but it has not added my shared
> storage (NFS) as a storage domain.
> I can't find the engine VM in the webadmin UI under Compute -> Virtual Machines.
> There is no Hosted Engine sub-tab in the webadmin UI when I add the host to the
> oVirt engine.
>
> would you give me some advise? thanks...
>
>


Re: [ovirt-users] custom hosted-engine issues

2018-04-18 Thread Martin Sivak
Hi,

you need to add a storage domain for VMs first. The hosted engine
domain and VM will then be auto imported.

See the following in the Hosted engine deployment guide:
https://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/

"Important: Log in as the admin@internal user to continue configuring
the Engine and add further resources. You must create another data
domain for the data center to be initialized to host regular virtual
machine data, and for the Engine virtual machine to be visible."

You seem to be using oVirt 4.1 so please note that the oVirt 4.2.2
release now supports much better and safer deployment method.

Best regards

--
Martin Sivak
SLA / oVirt

On Wed, Apr 18, 2018 at 11:08 AM,   wrote:
> Hi,
> I set up the hosted engine and it succeeded, but it has not added my shared
> storage (NFS) as a storage domain.
> I can't find the engine VM in the webadmin UI under Compute -> Virtual Machines.
> There is no Hosted Engine sub-tab in the webadmin UI when I add the host to the
> oVirt engine.
>
> would you give me some advise? thanks...
>
>


Re: [ovirt-users] ovirt-guest-agent for EL6

2018-04-18 Thread Tomáš Golembiovský
Hi,

On Wed, 11 Apr 2018 15:37:10 -0400
John Nguyen  wrote:

> Hi,
> 
> Is there an OVirt 4.2 compatible version on of the ovirt-guest-agent for
> EL6?
> 
> I found the 1.0.13 Package but it doesn't report information to the Web UI.

That's probably because it's not using the new channel name. What
exactly is the version you are installing? If it is from EPEL you need
ovirt-guest-agent-1.0.13-2.el6.

Tomas

> 
> Thanks,
> John


-- 
Tomáš Golembiovský 


Re: [ovirt-users] Failed to Communicate with External Provider QEMU - related to RedHat Bug 1426573

2018-04-18 Thread Tomáš Golembiovský
On Tue, 17 Apr 2018 13:34:54 +
"Vrgotic, Marko"  wrote:

> If I try to use the same proxy host from the command line, I am able to connect, no
> problems whatsoever:
> 
> [root@aws-ovhv-07 vdsm]# virsh -c qemu+ssh://r...@aws-ovhv-08.avinity.tv/system

You need to be running this as 'vdsm' user. See the "Additional info" in
the description of the bug you referenced. Or see the preparatory steps
described in [1]. The documentation is for Xen, but the steps are
analogous for KVM.

Tomas

[1] https://www.ovirt.org/develop/release-management/features/virt/XenToOvirt/
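To reproduce what the engine side actually does, open the libvirt connection as the 'vdsm' user rather than root. A minimal sketch, assuming the host name from the thread and a root login on the remote side; the command is printed rather than executed here so it can be copied onto the proxy host:

```shell
# Print the command to run on the proxy host. Running virsh via
# 'sudo -u vdsm' exercises the same user, SSH credentials and
# known_hosts entries that VDSM itself would use.
URI='qemu+ssh://root@aws-ovhv-08.avinity.tv/system'
CMD="sudo -u vdsm virsh -c ${URI} list --all"
echo "${CMD}"
```

If this fails where the root-run virsh from the quoted transcript succeeds, the problem is almost certainly the vdsm user's SSH setup, as the referenced bug describes.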

> setlocale: No such file or directory
> The authenticity of host 'aws-ovhv-08.avinity.tv (172.16.81.57)' can't be 
> established.
> ECDSA key fingerprint is SHA256:Ysbp/LvuOCIIvMbT931rwNN9HfBv1dtJpu0uQMi4lrk.
> ECDSA key fingerprint is MD5:14:c0:9c:bf:2b:65:ee:89:10:b8:21:54:57:72:9c:58.
> Are you sure you want to continue connecting (yes/no)? yes
> r...@aws-ovhv-08.avinity.tv's password:
> Welcome to virsh, the virtualization interactive terminal.
> 
> Type:  'help' for help with commands
>        'quit' to quit
> 
> virsh # list
>  Id    Name            State
> ----------------------------------
>  2     tm_vpn          running
>  3     tmduck          running
>  22    scsk29funnela   running
>  25    scsk29procA     running
>  51    scsk211udc1     running
>  52    scsk211csm1     running
>  53    scsk211psm1     running
>  54    scsk211st1      running
>  55    tm_admin        running
> 
> virsh #
> 
> Found the related bug (closed) on Red Hat Bugzilla – Bug 1426573
> 
> On the same bug page, workaround suggested is also applicable.


-- 
Tomáš Golembiovský 