Re: [ovirt-users] Export VM from oVirt/RHEV to VMWare

2017-01-16 Thread Colin Coe
Hi all

The VM in question is a Windows 2008 R2 server.  Does anyone have any thoughts
on this?

Thanks

On Mon, Jan 16, 2017 at 8:57 PM, Colin Coe  wrote:

> Hi all
>
> We run RHEV exclusively and I need to export a guest to one of our vendors
> for analysis.
>
> The vendor uses VMWare.  How can I export a VM in OVF format out of RHEV
> v3.5.8?
>
> Thanks
>
> CC
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration Centos 7.3 -> Centos 7.2

2017-01-16 Thread Markus Stockhausen
Hi Yaniv,

for better tracking I opened BZ 1413847.

Best regards.

Markus



This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] New oVirt 4.0.x documentation now live!

2017-01-16 Thread Brian Proffitt
The mixed fonts may be a rendering issue.

I can say that all of the links and images are fixed now and the changes
are live.

Sorry about the errors, something went wrong between the test and
production environments!

BKP

On Mon, Jan 16, 2017 at 2:03 PM, Jakub Niedermertl 
wrote:

> Hi,
>
> as Cam mentioned, a bunch of links are not working:
>
> * all links in the 'Developer Documentation' section
> * 'Quick Start Guide' in the 'Primary Documentation' section
> * all links in the 'Community Documentation' section
>
> Also, some links differ in font size and serif/sans-serif. Maybe we
> shouldn't mix the fonts. I personally like the original smaller
> sans-serif font more.
>
> Jakub
>
> On Mon, Jan 16, 2017 at 6:58 PM, cmc  wrote:
> > When I try to go to
> > http://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/, and
> > click on any of the links on that page, e.g.,:
> >
> > http://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/chap-Updating_the_oVirt_Environment
> > http://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/chap-Updates_between_Minor_Releases
> >
> > I get:
> >
> > Not found :(
> >
> > Sorry, but the page you were trying to view does not exist.
> >
> > It looks like this was the result of either:
> >
> > a mistyped address
> > an out-of-date link
> >
> > In fact, I get that on just about every link I've tried from
> > https://www.ovirt.org/documentation/
> >
> > Am I missing something here perhaps?
> >
> > -C
> >
> > On Mon, Jan 16, 2017 at 5:39 PM, Brian Proffitt 
> wrote:
> >> You wanted it, we delivered!
> >>
> >> The oVirt Project is pleased to announce the availability of all-new
> >> principal documentation[1] for the oVirt 4.0 branch!
> >>
> >> For more information, check out the blog released today[2]!
> >>
> >> Peace,
> >> Brian
> >>
> >>
> >> [1] https://www.ovirt.org/documentation/
> >> [2] https://www.ovirt.org/blog/2017/01/happy-new-documentation/
> >>
> >> --
> >> Brian Proffitt
> >> Principal Community Analyst
> >> Open Source and Standards
> >> @TheTechScribe
> >> 574.383.9BKP
> >>
> >> ___
> >> Devel mailing list
> >> de...@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/devel
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Brian Proffitt
Principal Community Analyst
Open Source and Standards
@TheTechScribe
574.383.9BKP
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration Centos 7.3 -> Centos 7.2

2017-01-16 Thread Yaniv Kaul
On Jan 16, 2017 9:33 PM, "Markus Stockhausen" 
wrote:

Hi there,

Maybe I missed the discussion on the mailing list. Today we installed
a new CentOS host. Of course it has 7.3 and qemu 2.6 after a yum update.
It can be attached to our cluster without problems. We are running oVirt
4.0.6 but the cluster compatibility level is still 3.6.

We can migrate a VM from qemu 2.3 to 2.6
We cannot migrate a VM from qemu 2.6 to 2.3

What happens:

- qemu is started on the target host (centos 7.2)
- source qemu says: "initiating migration"
- dominfo on target gives:
Id: 21
Name:   testvm
UUID:   d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b
OS Type:        hvm
State:          paused
CPU(s):         2
CPU time:       48.5s
Max memory:     8388608 KiB
Used memory:    8388608 KiB
Persistent:     no
Autostart:      disable
Managed save:   no
Security model: selinux
Security DOI:   0
Security label: system_u:system_r:svirt_t:s0:c344,c836 (enforcing)

Anyone experienced this behaviour? Maybe desired?


It's not desired.
VDSM logs from both sides may help.
Y.


Current software versions:

centos 7.2 host:
- libvirt 1.2.17-13.el7_2.6
- qemu 2.3.0-31.el7.21.1

centos 7.3 host:
- libvirt 2.0.0-10.el7_3.2
- qemu 2.6.0-27.1.el7

Ovirt engine
- ovirt 4.0.6

Thanks in advance.

Markus
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt network

2017-01-16 Thread Sverker Abrahamsson

I've followed the instructions to the best of my ability, so hopefully it's right.


On 2017-01-13 at 10:31, Marcin Mirecki wrote:

Please push the patch into: https://gerrit.ovirt.org/ovirt-provider-ovn
(let me know if you need some directions)
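
In case directions help, the usual Gerrit flow is roughly the following sketch (the remote name and target branch are assumptions):

git clone https://gerrit.ovirt.org/ovirt-provider-ovn
cd ovirt-provider-ovn
# apply the change, then commit; Gerrit expects a Change-Id footer (added by its commit-msg hook)
git commit -a -s
git push origin HEAD:refs/for/master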



- Original Message -

From: "Sverker Abrahamsson" 
To: "Marcin Mirecki" 
Cc: "Ovirt Users" 
Sent: Monday, January 9, 2017 1:45:37 PM
Subject: Re: [ovirt-users] Issue with OVN/OVS and mandatory ovirtmgmt network

Ok, found it. The issue is right here:

[libvirt <interface> XML stripped by the list archive]

There are two virtualport elements: the first without an id and the
second with one. On h2 I had fixed this, which was the patch I posted earlier,
although I switched back to using br-int after understanding that was the
correct way. When that hook was copied to h1, the port gets attached fine.

Patch with updated testcase attached.

/Sverker


On 2017-01-09 at 10:41, Sverker Abrahamsson wrote:

This is the content of vdsm.log on h1 at this time:

2017-01-06 20:54:12,636 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
call VM.create succeeded in 0.01 seconds (__init__:515)
2017-01-06 20:54:12,636 INFO  (vm/6dd5291e) [virt.vm]
(vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') VM wrapper has started
(vm:1901)
2017-01-06 20:54:12,636 INFO  (vm/6dd5291e) [vds] prepared volume
path:
/rhev/data-center/mnt/h2-int.limetransit.com:_var_lib_exports_iso/1d49c4bc-0fec-4503-a583-d476fa3a370d/images/----/CentOS-7-x86_64-NetInstall-1611.iso
(clientIF:374)
2017-01-06 20:54:12,743 INFO  (vm/6dd5291e) [root]  (hooks:108)
2017-01-06 20:54:12,847 INFO  (vm/6dd5291e) [root]  (hooks:108)
2017-01-06 20:54:12,863 INFO  (vm/6dd5291e) [virt.vm]
(vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') 
[libvirt domain XML stripped by the list archive; surviving fragments: CentOS7_3, 6dd5291e-6556-4d29-8b4e-ea896e627645, 1048576, 1048576, 4294967296, 16, hvm, oVirt, oVirt Node, 7-3.1611.el7.centos, 62f1adff-b29e-4a7c-abba-c2c4c73248c6, SandyBridge]  (vm:1988)
2017-01-06 20:54:13,046 INFO  (libvirt/events) [virt.vm]
(vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') CPU running: onResume
(vm:4863)
2017-01-06 20:54:13,058 INFO  (vm/6dd5291e) [virt.vm]
(vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') Starting connection
(guestagent:245)
2017-01-06 20:54:13,060 INFO  (vm/6dd5291e) [virt.vm]
(vmId='6dd5291e-6556-4d29-8b4e-ea896e627645') CPU running: domain
initialization (vm:4863)
2017-01-06 20:54:15,154 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
call Host.getVMFullList succeeded in 0.01 seconds (__init__:515)
2017-01-06 20:54:17,571 INFO  (periodic/2) [dispatcher] Run and
protect: getVolumeSize(sdUUID=u'2ee54fb8-48f2-4576-8cff-f2346504b08b',
spUUID=u'584ebd64-0268-0193-025b-038e',
imgUUID=u'5a3aae57-ffe0-4a3b-aa87-8461669db7f9',
volUUID=u'b6a88789-fcb1-4d3e-911b-2a4d3b6c69c7', options=None)
(logUtils:49)
2017-01-06 20:54:17,573 INFO  (periodic/2) [dispatcher] Run and
protect: getVolumeSize, Return response: {'truesize': '1859723264',
'apparentsize': '21474836480'} (logUtils:52)
2017-01-06 20:54:21,211 INFO  (periodic/2) [dispatcher] Run and
protect: repoStats(options=None) (logUtils:49)
2017-01-06 20:54:21,212 INFO  (periodic/2) [dispatcher] Run and
protect: repoStats, Return response:
{u'2ee54fb8-48f2-4576-8cff-f2346504b08b': {'code': 0, 'actual': True,
'version': 3, 'acquired': True, 'delay': '0.000936552', 'lastCheck':
'1.4', 'valid': True}, u'1d49c4bc-0fec-4503-a583-d476fa3a370d':
{'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
'0.000960248', 'lastCheck': '1.4', 'valid': True}} (logUtils:52)
2017-01-06 20:54:23,543 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)
2017-01-06 20:54:23,641 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call Host.getAllVmIoTunePolicies succeeded in 0.00 seconds (__init__:515)

Re: [ovirt-users] glusterfs heal issues [Solved]

2017-01-16 Thread Gary Pedretty
I figured it out. I forgot that these gluster volumes, as recommended by the
oVirt Glusterized setup, were created as Replica 3 Arbiter 1 volumes, so now I
understand what that truly means: one brick only contains the directory and
metadata entries and so will always show smaller actual disk use. Sorry for the
confusion.
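
For anyone hitting the same thing, a quick way to confirm that a volume really is replica 3 arbiter 1 (the volume name below is a placeholder):

gluster volume info myvol | grep -i -E 'Number of Bricks|arbiter'
# an arbiter volume reports something like: Number of Bricks: 1 x (2 + 1) = 3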


Gary



Gary Pedrettyg...@ravnalaska.net 

Systems Manager  www.flyravn.com 

Ravn Alaska   /\907-450-7251
5245 Airport Industrial Road /  \/\ 907-450-7238 fax
Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
Serving All of Alaska  /  \/  /\  \ \/\   “Love your neighbor as
Really loving the record green up date! Summmer!!   yourself” Matt 22:39

> On Jan 16, 2017, at 10:31 AM, Gary Pedretty  wrote:
> 
> This is a self-hosted Glusterized setup with 3 hosts.  I have had some 
> glusterfs data storage domains with disk issues where healing was 
> required.  The self heal seemed to start up and the oVirt Management portal 
> showed healing taking place in the Volumes/Brick tab.  Later it showed 
> everything ok.  This is a replica 3 volume.  I noticed however that the brick 
> tab was not showing even use of the 3 bricks and looking on the actual hosts 
> a df command also shows uneven use of the bricks.  However gluster volume 
> heal (vol) info shows zero entries for all bricks.  There are no errors 
> reported in the Data Center or Cluster, yet I see this uneven use of the 
> bricks across the 3 hosts.  
> 
> Doing a gluster volume status (vol) detail indeed shows different free disk 
> space across the different bricks.  However the Inode Count and Free Inodes 
> are identical across all bricks.  
> 
> I thought replica 3 was supposed to be mirrored across all nodes.  Any idea 
> why I am seeing the uneven use, or is this just something about glusterfs 
> that is different when it comes to free space vs Inode Count?
> 
> Gary
> 
> 
> 
> Gary Pedrettyg...@ravnalaska.net 
> 
> Systems Manager  www.flyravn.com 
> 
> Ravn Alaska   /\907-450-7251
> 5245 Airport Industrial Road /  \/\ 907-450-7238 fax
> Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
> Serving All of Alaska  /  \/  /\  \ \/\   “Love your neighbor as
> Really loving the record green up date! Summmer!!   yourself” Matt 22:39
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Live migration Centos 7.3 -> Centos 7.2

2017-01-16 Thread Markus Stockhausen
Hi there,

Maybe I missed the discussion on the mailing list. Today we installed
a new CentOS host. Of course it has 7.3 and qemu 2.6 after a yum update.
It can be attached to our cluster without problems. We are running oVirt 
4.0.6 but the cluster compatibility level is still 3.6.

We can migrate a VM from qemu 2.3 to 2.6 
We cannot migrate a VM from qemu 2.6 to 2.3

What happens:

- qemu is started on the target host (centos 7.2)
- source qemu says: "initiating migration"
- dominfo on target gives:
Id: 21
Name:   testvm
UUID:   d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b
OS Type:        hvm
State:          paused
CPU(s):         2
CPU time:       48.5s
Max memory:     8388608 KiB
Used memory:    8388608 KiB
Persistent:     no
Autostart:      disable
Managed save:   no
Security model: selinux
Security DOI:   0
Security label: system_u:system_r:svirt_t:s0:c344,c836 (enforcing)

Anyone experienced this behaviour? Maybe desired?

Current software versions:

centos 7.2 host:
- libvirt 1.2.17-13.el7_2.6
- qemu 2.3.0-31.el7.21.1

centos 7.3 host:
- libvirt 2.0.0-10.el7_3.2
- qemu 2.6.0-27.1.el7

Ovirt engine
- ovirt 4.0.6

Thanks in advance.

Markus

This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] glusterfs heal issues

2017-01-16 Thread Gary Pedretty
This is a self-hosted Glusterized setup with 3 hosts.  I have had some 
glusterfs data storage domains with disk issues where healing was 
required.  The self heal seemed to start up and the oVirt Management portal 
showed healing taking place in the Volumes/Brick tab.  Later it showed 
everything ok.  This is a replica 3 volume.  I noticed however that the brick 
tab was not showing even use of the 3 bricks and looking on the actual hosts a 
df command also shows uneven use of the bricks.  However gluster volume heal 
(vol) info shows zero entries for all bricks.  There are no errors reported in 
the Data Center or Cluster, yet I see this uneven use of the bricks across the 
3 hosts.  

Doing a gluster volume status (vol) detail indeed shows different free disk 
space across the different bricks.  However the Inode Count and Free Inodes are 
identical across all bricks.  

I thought replica 3 was supposed to be mirrored across all nodes.  Any idea why 
I am seeing the uneven use, or is this just something about glusterfs that is 
different when it comes to free space vs Inode Count?

Gary



Gary Pedrettyg...@ravnalaska.net 

Systems Manager  www.flyravn.com 

Ravn Alaska   /\907-450-7251
5245 Airport Industrial Road /  \/\ 907-450-7238 fax
Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
Serving All of Alaska  /  \/  /\  \ \/\   “Love your neighbor as
Really loving the record green up date! Summmer!!   yourself” Matt 22:39

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] New oVirt 4.0.x documentation now live!

2017-01-16 Thread Jakub Niedermertl
Hi,

as Cam mentioned, a bunch of links are not working:

* all links in the 'Developer Documentation' section
* 'Quick Start Guide' in the 'Primary Documentation' section
* all links in the 'Community Documentation' section

Also, some links differ in font size and serif/sans-serif. Maybe we
shouldn't mix the fonts. I personally like the original smaller
sans-serif font more.

Jakub

On Mon, Jan 16, 2017 at 6:58 PM, cmc  wrote:
> When I try to go to
> http://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/, and
> click on any of the links on that page, e.g.,:
>
> http://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/chap-Updating_the_oVirt_Environment
> http://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/chap-Updates_between_Minor_Releases
>
> I get:
>
> Not found :(
>
> Sorry, but the page you were trying to view does not exist.
>
> It looks like this was the result of either:
>
> a mistyped address
> an out-of-date link
>
> In fact, I get that on just about every link I've tried from
> https://www.ovirt.org/documentation/
>
> Am I missing something here perhaps?
>
> -C
>
> On Mon, Jan 16, 2017 at 5:39 PM, Brian Proffitt  wrote:
>> You wanted it, we delivered!
>>
>> The oVirt Project is pleased to announce the availability of all-new
>> principal documentation[1] for the oVirt 4.0 branch!
>>
>> For more information, check out the blog released today[2]!
>>
>> Peace,
>> Brian
>>
>>
>> [1] https://www.ovirt.org/documentation/
>> [2] https://www.ovirt.org/blog/2017/01/happy-new-documentation/
>>
>> --
>> Brian Proffitt
>> Principal Community Analyst
>> Open Source and Standards
>> @TheTechScribe
>> 574.383.9BKP
>>
>> ___
>> Devel mailing list
>> de...@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] PM proxy

2017-01-16 Thread Slava Bendersky
Hello Everyone, 
All I see in the debug log is: 

2017-01-16 18:15:16,316 DEBUG [org.ovirt.engine.core.bll.pm.FenceProxyLocator] 
(default task-64) [] Evaluating host 'ovirt00.domain.com' 
2017-01-16 18:15:16,362 DEBUG [org.ovirt.engine.core.bll.pm.FenceProxyLocator] 
(default task-64) [] Evaluating host 'ovirt00.domain.com' 
2017-01-16 18:15:16,362 ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator] 
(default task-64) [] Can not run fence action on host 'hosted_engine_1', no 
suitable proxy host was found. 


Slava. 



From: "volga629"  
To: "Martin Perina"  
Cc: "users"  
Sent: Friday, January 13, 2017 11:52:17 AM 
Subject: Re: [ovirt-users] PM proxy 

Hello Martin, 
Thank you for reply, I will post more detail soon. 

Slava. 


From: "Martin Perina"  
To: "Slava Bendersky"  
Cc: "users"  
Sent: Friday, January 13, 2017 2:17:28 AM 
Subject: Re: [ovirt-users] PM proxy 

Hi Slava, 

do you have at least one another host in the same cluster or DC which doesn't 
have connection issues (in status Up or Maintenance)? 
If so, please turn on debug logging for power management part using following 
command: 

/usr/share/ovirt-engine-wildfly/bin/jboss-cli.sh --controller=127.0.0.1:8706 --connect --user=admin@internal 

and enter following inside jboss-cli command prompt: 

/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:add 
/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:write-attribute(name=level,value=DEBUG)
 
quit 

Afterwards you will see more details in engine.log why other hosts were 
rejected during the fence proxy selection process. 
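
For example, something like the following (the path is the default engine log location):

grep FenceProxyLocator /var/log/ovirt-engine/engine.log | tail -n 50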

Btw above debug log changes are not permanent, they will be reverted on 
ovirt-engine restart or using following command: 

/usr/share/ovirt-engine-wildfly/bin/jboss-cli.sh --controller=127.0.0.1:8706 --connect --user=admin@internal 
'/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:remove' 


Regards 

Martin Perina 


On Thu, Jan 12, 2017 at 4:42 PM, Slava Bendersky <volga...@networklab.ca> wrote: 



Hello Everyone, 
I need help with this error. What could possibly be missing or misconfigured? 

2017-01-12 05:17:31,444 ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator] (default task-38) [] Can not 
run fence action on host 'hosted_engine_1', no suitable proxy host was found 

I tried it from the shell on the host and it works fine. 
Right now the PM proxy settings are the defaults (dc, cluster). 
Slava. 

___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 






___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] New oVirt 4.0.x documentation now live!

2017-01-16 Thread cmc
When I try to go to
http://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/, and
click on any of the links on that page, e.g.,:

http://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/chap-Updating_the_oVirt_Environment
http://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/chap-Updates_between_Minor_Releases

I get:

Not found :(

Sorry, but the page you were trying to view does not exist.

It looks like this was the result of either:

a mistyped address
an out-of-date link

In fact, I get that on just about every link I've tried from
https://www.ovirt.org/documentation/

Am I missing something here perhaps?

-C

On Mon, Jan 16, 2017 at 5:39 PM, Brian Proffitt  wrote:
> You wanted it, we delivered!
>
> The oVirt Project is pleased to announce the availability of all-new
> principal documentation[1] for the oVirt 4.0 branch!
>
> For more information, check out the blog released today[2]!
>
> Peace,
> Brian
>
>
> [1] https://www.ovirt.org/documentation/
> [2] https://www.ovirt.org/blog/2017/01/happy-new-documentation/
>
> --
> Brian Proffitt
> Principal Community Analyst
> Open Source and Standards
> @TheTechScribe
> 574.383.9BKP
>
> ___
> Devel mailing list
> de...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-16 Thread Mark Greenall
Hi,

To try and get a baseline here I've reverted most of the changes we've made and 
am running the host with just the following iSCSI-related configuration 
settings. The tweaks had been made over time to try to alleviate several 
storage-related problems, but it's possible that fixes in oVirt (we've 
gradually gone from early 3.x to 4.0.6) make them redundant now and they simply 
compound the problem. I'll start with these configuration settings and then 
move on to trying the vdsm patch.

/etc/multipath.conf (note: polling_interval and max_fds are not accepted 
in the devices section; I think they are valid in the defaults section only):

# VDSM REVISION 1.3
# VDSM PRIVATE

blacklist {
   devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
   devnode "^hd[a-z]"
   devnode "^sda$"
}

defaults {
    deferred_remove         yes
    dev_loss_tmo            30
    fast_io_fail_tmo        5
    flush_on_last_del       yes
    max_fds                 4096
    no_path_retry           fail
    polling_interval        5
    user_friendly_names     no
}

devices {
    device {
        vendor                  "EQLOGIC"
        product                 "100E-00"

        # Ovirt defaults
        deferred_remove         yes
        dev_loss_tmo            30
        fast_io_fail_tmo        5
        flush_on_last_del       yes
        #polling_interval       5
        user_friendly_names     no

        # Local settings
        #max_fds                8192
        path_checker            tur
        path_grouping_policy    multibus
        path_selector           "round-robin 0"

        # Use 4 retries will provide additional 20 seconds gracetime when no
        # path is available before the device is disabled. (assuming 5 seconds
        # polling interval). This may prevent vms from pausing when there is
        # short outage on the storage server or network.
        no_path_retry           4
    }

    device {
        # These settings overrides built-in devices settings. It does not apply
        # to devices without built-in settings (these use the settings in the
        # "defaults" section), or to devices defined in the "devices" section.
        all_devs                yes
        no_path_retry           fail
    }
}
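
For reference, the generic way to apply and sanity-check multipath.conf changes after editing (nothing oVirt-specific here):

systemctl reload multipathd
multipath -ll
multipathd show config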


/etc/iscsi/iscsid.conf default apart from:

node.session.initial_login_retry_max = 12
node.session.cmds_max = 1024
node.session.queue_depth = 128
node.startup = manual
node.session.iscsi.FastAbort = No




The following settings have been commented out / removed:

/etc/sysctl.conf:

# For more information, see sysctl.conf(5) and sysctl.d(5).
# Prevent ARP Flux for multiple NICs on the same subnet:
#net.ipv4.conf.all.arp_ignore = 1
#net.ipv4.conf.all.arp_announce = 2
# Loosen RP Filter to allow multiple iSCSI connections
#net.ipv4.conf.all.rp_filter = 2


/lib/udev/rules.d:

# Various Settings for Dell Equallogic disks based on Dell Optimizing SAN Environment for Linux Guide
#
# Modify disk scheduler mode to noop
#ACTION=="add|change", SUBSYSTEM=="block", ATTRS{vendor}=="EQLOGIC", RUN+="/bin/sh -c 'echo noop > /sys/${DEVPATH}/queue/scheduler'"
# Modify disk timeout value to 60 seconds
#ACTION!="remove", SUBSYSTEM=="block", ATTRS{vendor}=="EQLOGIC", RUN+="/bin/sh -c 'echo 60 > /sys/%p/device/timeout'"
# Modify read ahead value to 1024
#ACTION!="remove", SUBSYSTEM=="block", ATTRS{vendor}=="EQLOGIC", RUN+="/bin/sh -c 'echo 1024 > /sys/${DEVPATH}/bdi/read_ahead_kb'"

I've also removed our defined iSCSI interfaces and have simply left the Ovirt 
'default'

Rebooted and 'Activated' host:

16:09 - Host Activated
16:10 - Non Operational saying it can't access storage domain 'Unknown'
16:12 - Host Activated again
16:12 - Host not responding goes 'Connecting'
16:15 - Can't access ALL the storage Domains. Host goes Non Operational again
16:17 - Host Activated again
16:18 - Can't access ALL the storage Domains. Host goes Non Operational again
16:20 - Host Autorecovers and goes Activating again
That cycle repeated until I started getting VDSM timeout messages and the 
constant LVM processes and high CPU load. At 16:30 I rebooted the host and set 
its status to maintenance.

Second host Activation attempt just resulted in the same cycle as above. Host 
now doesn't come online at all.

Next step will be to try the vdsm patch.

Mark
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] New oVirt 4.0.x documentation now live!

2017-01-16 Thread Brian Proffitt
You wanted it, we delivered!

The oVirt Project is pleased to announce the availability of all-new
principal documentation[1] for the oVirt 4.0 branch!

For more information, check out the blog released today[2]!

Peace,
Brian


[1] https://www.ovirt.org/documentation/
[2] https://www.ovirt.org/blog/2017/01/happy-new-documentation/

-- 
Brian Proffitt
Principal Community Analyst
Open Source and Standards
@TheTechScribe
574.383.9BKP
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Upgrade cluster + adding nodes

2017-01-16 Thread Davide Ferrari
Hello everyone

currently I've a 4-nodes oVirt 4.0.4 cluster running on CentOS 7.2.
Given that we have now oVirt 4.0.6 and CentOS 7.3, what's the best update
path? I was reading threads about qemu-kvm-ev 2.6 when CentOS 7.3 was
released but I was wondering if with 4.0.6 release something has changed.
Moreover, I'd like to add 4 more nodes in the near future. I guess I should
have all the same version and thus I must fully upgrade the original 4
nodes (both OS + oVirt) before adding new, fully up-to-date nodes, right?

Thanks!

-- 
Davide Ferrari
Senior Systems Engineer
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error creating a storage pool

2017-01-16 Thread Fred Rolland
Hi Sergei,

In the log, I can see that there is an issue in the VDSM code when parsing
your PV due to the ":" in its name.

Can you please open a bug with the logs?

Also, to understand why LVM is not seeing your PV, can you run the
following command and supply the output?


lvm pvs - --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
filter = [
'\''a|/dev/mapper/SioFABRICVicinity_iqn.2015-03.com.iofabric:ovirt-master-00|'\'',
'\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1
wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size
/dev/mapper/SioFABRICVicinity_iqn.2015-03.com.iofabric:ovirt-master-00

Thanks,

Fred

On Tue, Jan 10, 2017 at 9:22 AM, Sergei Hanus  wrote:

> Fred, sorry, my fault.
> Now I have verified the contents.
>
>  Sergei.
>
> On Mon, Jan 9, 2017 at 6:27 PM, Fred Rolland  wrote:
>
>> Sergei,
>>
>> The files are empty, can you please resend ?
>>
>> Thanks
>>
>> On Mon, Jan 9, 2017 at 1:00 PM, Sergei Hanus  wrote:
>>
>>> Hi, Fred.
>>> I'm using release 4.0.5.
>>> I made a fresh redeploy today and reproduced the issue.
>>>
>>> I'm attaching engine and vdsm log file, the time to look is around
>>> 12:15:13
>>>
>>> Also, when I try to delete storage domain from engine - it also returns
>>> error, so, this domain seems to be stuck in configuration (it is also
>>> displayed at the end of engine log)
>>>
>>> Appreciate any comments.
>>>
>>> Sergei
>>>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Ovirt 4.0 - Storage Domain] Fail to upload ISO using engine-iso-uploader

2017-01-16 Thread Sandro Bonazzola
Adding Rafael who is maintaining the tool.

On Tue, Jan 10, 2017 at 8:19 AM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Hi,
>
> I am trying to upload ISO image using engine-iso-uploader command as shown
> below.
> I have received error as shown below.
>
> # engine-iso-uploader -i vms_isos_myHost -u admin@internal -r
> ovirt.lab.com:443 upload /tmp/ubuntu.iso
> Uploading, please wait...
> ERROR: mount.nfs: access denied by server while mounting 190.68.5.100:/nfs_share/iso
>

Can you please run with --verbose and attach logs?
At first sight, your server configuration has some issue. Does a manual mount
of the NFS share work for you?
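
For example, something like this from the engine machine (the mount point is arbitrary and forcing NFSv3 is just a reasonable first test):

mkdir -p /mnt/nfstest
mount -t nfs -o vers=3 190.68.5.100:/nfs_share/iso /mnt/nfstest
umount /mnt/nfstest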


>
> My NFS configuration as shown below:
> # cat /etc/exports
> /nfs_share/iso  190.68.5.100(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
> /nfs_share/data 190.68.5.100(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>
> And please note that I have provided proper file and folder permission as
> mention in below link.
> http://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/
>
> Please help !!!
>
> Thanks,
> ~Rohit
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Master storage domain in locked state

2017-01-16 Thread Yaniv Dary
Can you please send logs?


Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Thu, Jan 12, 2017 at 12:23 PM, knarra  wrote:

> Hi,
>
> I have three glusterfs storage domains present on my system. data
> (master), vmstore and engine. I tried moving the master storage domain to
> maintenance state , it was stuck in preparing for maintenance for a long
> time and then i rebooted my hosts. Now i see that the master domain moves
> to maintenance state but vmstore which is master now is stuck in locked
> state. Any idea how to come out of this situation.
>
> Any help is much appreciated.
>
> Thanks
>
> kasturi
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVF disk errors

2017-01-16 Thread Yaniv Dary
I would start with the manager.
It is not tested that way.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Mon, Jan 9, 2017 at 4:09 PM, Michael Watters 
wrote:

> Thanks, that's kind of what I'm discovering.  Is it possible for an
> ovirt 4 host to join a 3.6 cluster?  Do I need to upgrade my engine
> first?  We are running ovirt-engine 3.6 on a dedicated server which
> manages two ovirt nodes right now.
>
>
> On Sat, 2017-01-07 at 09:30 +, Pavel Gashev wrote:
> > Michael,
> >
> > oVirt 3.6 doesn't work well on CentOS 7.3. Upgrade vdsm to 4.17.35.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Export VM from oVirt/RHEV to VMWare

2017-01-16 Thread Colin Coe
Hi all

We run RHEV exclusively and I need to export a guest to one of our vendors
for analysis.

The vendor uses VMWare.  How can I export a VM in OVF format out of RHEV
v3.5.8?
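
One commonly used approach is to export the VM to an export storage domain and then convert its disk image for VMware with qemu-img; a sketch only, the paths and names below are placeholders:

qemu-img convert -p -O vmdk \
  /path/to/export-domain/images/<image-uuid>/<volume-uuid> \
  /tmp/exported-disk.vmdk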

Thanks

CC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] Lowering the bar for wiki contribution?

2017-01-16 Thread Roy Golan
On 11 January 2017 at 17:06, Marc Dequènes (Duck)  wrote:

> Quack,
>
> On 01/08/2017 06:39 PM, Barak Korren wrote:
> > On 8 January 2017 at 10:17, Roy Golan  wrote:
> >> Adding infra which I forgot to add from the beginning
>
> Thanks.
>
> > I don't think this is an infra issue, more of a community/working
> > procedures one.
>
> I do think it is. We are involved in the tooling: for its maintenance,
> for documenting where things are, for suggesting better solutions, for
> ensuring security…
>
> > On the one hand, the developers need a place where they create and
> > discuss design documents and road maps. That place needs to be as
> > friction-free as possible to allow developers to work on the code
> > instead of on the documentation tools.
>
> As for code, I think there is need for review, even more for design
> documents, so I don't see why people are bothered by PRs, which is a
> tool they already know fairly well.
>
> For people with few git knowledge, the GitHub web interface allows to
> edit files.
>
> > On the other hand, the user community needs a good, up to date source
> > of information about oVirt and how to use it.
>
> Yes, this official entry point and it needs to be clean.
>
> > Having said the above, I don't think the site project's wiki is the
> > best place for this. The individual project mirrors on GitHub may be
> > better for this
>
> We could indeed split the technical documentation. If people want to
> experiment with the GH wiki pages, I won't interfere.
>
> I read several people in this thread really miss the old wiki, so I
> think it is time to remember why we did not stay in paradise. I was not
> there at the time, but I know the wiki was not well maintained. People
> are comparing our situation to the MediaWiki site, but the workforce is
> not comparable. There is no community manager, and no one
> is in charge of any part really, whereas MediaWiki has people in charge
> of every corner of the wiki. Also they developed tools over years to
> monitor, correct, revert… and we don't have any of this. So without any
> process then it was a total mess. More than one year later there was
> still much cleanup to do, and having contributed to it a little bit, I
> fear a sentimental rush to go back to a solution that was abandoned.
>
> Having a header telling if this is a draft or published is far from
> being sufficient. If noone cares you just pile up content that gets
> obsolete, then useless, then misleading for newcomers. You may prefer
> review a posteriori, but in this case you need to have the proper tool
> to be able to search for things to be reviewed, and a in-content
> pseudo-header is really not an easy way to get a todolist.
>
> As for the current builder, it checks every minute for new content to
> build. The current tool (Middleman) is a bit slow, and the machine is
> not ultra speedy, but even in the worst case it should not take more
> than half an hour to see the published result. So I don't know why
> someone suggested to build "at least once a day". There is also an
> experimentation to improve this part.
>
> So to sum up:
>   - the most needed thing here is not a tool but people in charge to
> review the content (additions, cleanup old things, ask devs to update
> some missing part…), this should also allow for faster publishing
>   - I'm not against changing tools, just do not forget what you lose in
> the process, and the migration pain
>   - I think free editing without workflow in our specific case is not
> gonna work because we do not have the needed workforce for things to
> auto-correct
>
> \_o<
>
>
What do you suggest then? How can infra help with this now? FWIW I don't
care only about 'developers'; I do want the process to be better.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can't shut down VM

2017-01-16 Thread Sandro Bonazzola
On Sun, Dec 25, 2016 at 12:52 PM, Peter Calum  wrote:

> Hi,
>
> I get this on console on the host when i try to reinstall
>
> Message from syslogd@khk9dsk36 at Dec 25 12:48:36 ...
>  dracut:installkernel failed in module fips
>
> I'll reinstall it from scratch with new ovirt-node image
>

Any update?



>
> thanks
> Peter
>
>
> 2016-12-25 12:31 GMT+01:00 Peter Calum :
>
>> Hi Arik
>>
>> Strange. - When I went back now to check, the host was down, and the VM I
>> had trouble with was also down.
>> The other VMs were migrated to other hosts. But the faulty VM still has a
>> locked snapshot, and I can't start
>> it because of that. I tried to put the host into maintenance mode and
>> then activate it, but during activation it reboots.
>> Should I reinstall that host? - Or do you want more logs?
>>
>> thanks,
>> Peter
>>
>> 2016-12-25 11:49 GMT+01:00 Arik Hadas :
>>
>>>
>>> On Sun, Dec 25, 2016 at 12:14 PM, Peter Calum 
>>> wrote:
>>>
 Hi Arik

 Thank you for answering.

 The host for the VM with problems is running normally with 17 other
 VMs.

>>>
>>> So currently the host is up, 17 other VMs are running and this VM is
>>> stuck in Powering Down?
>>> Could you create a VM and try to run it on that host (at the Host tab in
>>> the VM dialog select this host) and see if it get stuck in WaitForLaunch
>>> state?
>>>
>>>

 thanks,
 Peter


 2016-12-25 10:56 GMT+01:00 Arik Hadas :

> Hi,
> There was some connectivity problem with the host that prevented you
> from shutting down this VM.
> What is the status of the host 80bb6fbe-6479-4642-835d-83f729e97fbb now
> (in the webadmin it is shown as the host that the VM runs on)?
>
> On Sun, Dec 25, 2016 at 11:21 AM, Peter Calum 
> wrote:
>
>> Hi,
>>
>> I have a VM hanging in powering down state, the same VM has a
>> snapshot
>> hanging in state locked. - How do I solve this?
>>
>> vmId='6d820a57-efef-431d-b98f-99e8fe13b6ac',
>>
>> oVirt Engine Version: 4.0.5.5-1.el7.centos
>>
>> engine log attached.
>>
>> --
>> Venlig hilsen / Kind regards
>>
>> Peter Calum
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>


 --
 Venlig hilsen / Kind regards

 Peter Calum


>>>
>>>
>>
>>
>> --
>> Venlig hilsen / Kind regards
>>
>> Peter Calum
>>
>>
>
>
>
> --
> Venlig hilsen / Kind regards
>
> Peter Calum
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Black Screen Issue when installing Ovirt Hypervisor bare metal

2017-01-16 Thread Sandro Bonazzola
On Wed, Jan 11, 2017 at 4:15 PM, Jeramy Johnson 
wrote:

> Hey Support, I'm new to oVirt and wanted to know if you can help me out.
> For some strange reason, when I try to install oVirt Node Hypervisor on a
> machine (bare metal) using the ISO,


Hi, welcome aboard!
Which ISO are you using for the installation?


I get a black screen after I select Install oVirt Hypervisor and nothing
> happens. Can someone assist? The machine I'm using for deployment is
> an HP 280 Business PC, i5 processor, 8 GB memory, 1 TB hard drive.
>

Please note that oVirt is not designed for a single host use case. If you
need to run VMs on a single host there are other solutions designed for it
like Kimchi : https://github.com/kimchi-project/kimchi



> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM service won't start

2017-01-16 Thread Edward Haas
On Fri, Jan 13, 2017 at 11:46 PM, paul.greene.va  wrote:

> Oh, I stumbled onto something relevant.
>
> I noticed on the host that was working correctly that the ifcfg-enp6s0
> file included a line for "BRIDGE=ovirtmgmt", and the other two didn't have
> that line. When I added that line to the other two hosts, and restarted
> networking, I was able to get those hosts in a status of "UP".
>
> That file is autogenerated by VDSM, so I wondered if it would survive a
> reboot. When I rebooted, the line had been removed again by VDSM.
>
> So, I guess the final question then is how to get persistence in keeping
> this BRIDGE line from getting removed across reboots?


VDSM on reboot will compare the current config to the persisted one and try
to sync it.
Perhaps you have a corrupted persistent configuration.

Could you please send us the following items:
- vdsm and supervdsm logs (from /var/log/vdsm)
- All your ifcfg files.
- The persisted VDSM network configuration from:
/var/lib/vdsm/persistence/netconf
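
For example, the persisted definition of ovirtmgmt should be a small JSON file that can be inspected with something like (the per-network file name is an assumption):

cat /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt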

Thanks,
Edy.


>
>
>
> On 1/13/2017 2:54 PM, Nir Soffer wrote:
>
>> On Fri, Jan 13, 2017 at 9:24 PM, paul.greene.va
>>  wrote:
>>
>>> Output below ...
>>>
>>>
>>>
>>> On 1/13/2017 1:47 PM, Nir Soffer wrote:
>>>
 On Fri, Jan 13, 2017 at 5:45 PM, paul.greene.va
  wrote:

> All,
>
> I'm having an issue with the vdsmd service refusing to start on a fresh
> install of RHEL 7.2, RHEV version 4.0.
>
> It initially came up correctly, and the command "ip a" showed a
> "vdsmdummy"
> interface and a "ovirtmgmt" interface. However after a couple of
> reboots,
> those interfaces disappeared, and running "systemctl status vdsmd"
> generated
> the message "Dependency failed for Virtual Desktop Service Manager/Job
> vdsmd.service/start failed with result 'dependency'". Didn't say what
> dependency though
>
> I have 3 hosts where this happening on 2 out of 3 hosts. For some odd
> reason, the one host isn't having any problems.
>
> In a Google search I found an instance where system clock timing was
> out
> of
> sync, and that messed it up. I checked all three hosts, as well as the
> RHEV
> manager and they all had chronyd running and the clocks appeared to be
> in
> sync.
>
> After a reboot the virtual interfaces usually initially come up, but go
> down
> again within a few minutes.
>
> Running journalctl -xe gives these three messages:
>
> "failed to start Virtual Desktop Server Manager network restoration"
>
> "Dependency failed for Virtual Desktop Server Manager"  (but it doesn't
> say
> which dependency failed"
>
> "Dependency failed for MOM instance configured for VDSM purposes"
> (again,
> doesn't way which dependency)
>
> Any suggestions?
>
 Can you share the output of:

 systemctl status vdsmd
 systemctl status mom
 systemctl status libvirtd
 journalctl -xe

 Nir

 Sure, here you go 
>>>
>>>
>>>
>>> [root@rhevh03 vdsm]# systemctl status vdsmd
>>> ● vdsmd.service - Virtual Desktop Server Manager
>>> Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled;
>>> vendor
>>> preset: enabled)
>>> Active: inactive (dead)
>>>
>>> Jan 13 12:01:53 rhevh03 systemd[1]: Dependency failed for Virtual Desktop
>>> Server Manager.
>>> Jan 13 12:01:53 rhevh03 systemd[1]: Job vdsmd.service/start failed with
>>> result 'dependency'.
>>> Jan 13 13:51:50 rhevh03 systemd[1]: Dependency failed for Virtual Desktop
>>> Server Manager.
>>> Jan 13 13:51:50 rhevh03 systemd[1]: Job vdsmd.service/start failed with
>>> result 'dependency'.
>>> Jan 13 13:55:15 rhevh03 systemd[1]: Dependency failed for Virtual Desktop
>>> Server Manager.
>>> Jan 13 13:55:15 rhevh03 systemd[1]: Job vdsmd.service/start failed with
>>> result 'dependency'.
>>>
>>>
>>>
>>> [root@rhevh03 vdsm]# systemctl status momd
>>> ● momd.service - Memory Overcommitment Manager Daemon
>>> Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
>>> preset: disabled)
>>> Active: inactive (dead) since Fri 2017-01-13 13:53:09 EST; 2min 26s
>>> ago
>>>Process: 28294 ExecStart=/usr/sbin/momd -c /etc/momd.conf -d
>>> --pid-file
>>> /var/run/momd.pid (code=exited, status=0/SUCCESS)
>>>   Main PID: 28298 (code=exited, status=0/SUCCESS)
>>>
>>> Jan 13 13:53:09 rhevh03 systemd[1]: Starting Memory Overcommitment
>>> Manager
>>> Daemon...
>>> Jan 13 13:53:09 rhevh03 systemd[1]: momd.service: Supervising process
>>> 28298
>>> which is not our child. We'll most likely not notice when it exits.
>>> Jan 13 13:53:09 rhevh03 systemd[1]: Started Memory Overcommitment Manager
>>> Daemon.
>>> Jan 13 13:53:09 rhevh03 python[28298]: No worthy mechs found
>>>
>>>
>>>
>>> [root@rhevh03 vdsm]# systemctl status libvirtd
>>> ● libvirtd.service - Virtualization 

[ovirt-users] Reminder: we have Wildfly's web mgmt console (with auth)

2017-01-16 Thread Roy Golan
For quite some time now we can access Wildfly's web console on
https://127.0.0.1:8706

It's the UI equivalent of jboss-cli but much more convenient. Examples of
tasks you can perform there:
- Change logging settings, live
- Tweak the managed thread pool (will send a different thread about it)
- Shutdown/reload the service
- Tweak db connection details
- Get info/stats on the running setup EE components and more

One of the main advantages over the old jmx method is that it uses a plugin
to authenticate the engine user so your credentials should be admin@internal
or any superuser for that matter.

The default is to expose it to localhost only, and that can be changed in
services/ovirt-engine/ovirt-engine.xml.in. For firewalled setups, you can
SSH tunnel to the machine to overcome that, as always.
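
For example, a minimal tunnel (user and engine host name are placeholders):

ssh -L 8706:127.0.0.1:8706 root@engine.example.com

and then browse to https://127.0.0.1:8706 from the local machine.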

R
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users