Re: [ovirt-users] oVirt Reports

2017-01-15 Thread Shirly Radco
oVirt Reports has not been available since v4.0.
The oVirt DWH is still available and can be queried by external reporting
solutions that support SQL queries.
We are currently working on a new metrics store solution for oVirt.
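
For anyone looking for a starting point, a minimal sketch of querying the DWH
directly (assuming the default database name ovirt_engine_history and local
PostgreSQL access on the engine machine; the view name below is only a
placeholder, list the views your DWH version actually exposes with \dv):

# Run on the engine/DWH machine as a user allowed to connect to PostgreSQL.
su - postgres -c "psql ovirt_engine_history -c '\dv'"    # list the history views

# Example query; the view name is hypothetical, pick a real one from the \dv output.
su - postgres -c "psql ovirt_engine_history -c 'SELECT * FROM v4_0_latest_configuration_hosts LIMIT 10;'"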

Best regards,

Shirly Radco

BI Software Engineer
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109


On Thu, Jan 12, 2017 at 12:44 PM, Marcin Michta 
wrote:

> Hi,
>
> Can someone tell me what kind of information I can get from oVirt
> Reports? The oVirt web page says little about it.
> Screenshots would be helpful.
>
> Thank you,
> Marcin
>


Re: [ovirt-users] DWH URL in 4.0.6 ??

2017-01-15 Thread Shirly Radco
DWH is still available and can be queried by external SQL-based reporting
solutions.
We are currently working on a new metrics store solution.

Best regards,

Shirly Radco

BI Software Engineer
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109


On Fri, Jan 13, 2017 at 6:36 PM, Alexander Wels  wrote:

> On Friday, January 13, 2017 9:30:09 AM EST Devin Acosta wrote:
> > I upgraded to the latest 4.0.6 and see that the Data Warehouse process is
> > running; did they change how you access the GUI for it?
> >
> > Going to https://{fqdn}/ovirt-engine-reports/
> > no longer works on any of my deployments.
>
> The DWH reports are no longer available since oVirt 4.0. The process is
> still running because the dashboard uses its data, but the reports
> themselves are gone.
>


Re: [ovirt-users] [ANN] oVirt 4.0.6 Release is now available

2017-01-15 Thread Gianluca Cecchi
On Sun, Jan 15, 2017 at 4:54 PM, Derek Atkins  wrote:

>
> > - update the self hosted engine environment
> > (with commands:
> > yum update "ovirt-*-setup*"
> > engine-setup
> > )
>
> I did "yum update" and not "yum update "ovirt-*-setup*".. and...
>
> > - verify that the connection to the engine web admin GUI is still OK and
> > the engine reports 4.0.6; the engine OS at this point is still 7.2
>
>  I updated the OS to 7.3 in the engine VM.
>
> I think that's the root of this bug: having PostgreSQL restarted out from
> under dwhd. The fact that your engine is still at 7.2 implies you didn't
> also perform the OS update on the engine. I wanted to do that. (Not sure
> why you didn't.)
>
> -derek
>

See below: I did it at the end, after updating the host.

>
> > - shutdown engine VM
> > - put hypervisor host in local maintenance
> > - stop some services (ovirt-ha-agent, ovirt-ha-broker, vdsmd)
> > - run yum update, which brings the hypervisor to 7.3, vdsm and related
> > packages to the 4.0.6 level, and qemu-kvm-ev to 2.6.0-27.1.el7
>
Here, for the host, I took the double-update approach: OS packages and
oVirt packages together.


> > - adjust/merge some rpmnew files (both general OS and oVirt-related)
> > - stop vdsmd again (agent and broker remained down)
> > - stop sanlock (sometimes it times out, so I "kill -9" the remaining
> > process; otherwise the system is unable to shut down because it cannot
> > umount the NFS filesystems. In my environment the host itself provides
> > the NFS mounts for the data and ISO storage domains; the umount problem
> > only affects the data one)
> > - shutdown host and reboot it
> > - exit maintenance
> > - engine vm starts after a while
> > - enter global maintenance again
> > - yum update on engine vm and adjust rpmnew files
>

Here is the step where I update the engine VM's general OS packages from
7.2 to 7.3...


> > - shutdown engine vm
> > - exit global maintenance
> > - after a while engine vm starts
> > - power on the required VMs.
> >
> > Gianluca
> >
>
>
> --
>Derek Atkins 617-623-3745
>de...@ihtfp.com www.ihtfp.com
>Computer and Internet Security Consultant
>
>


Re: [ovirt-users] [ANN] oVirt 4.0.6 Release is now available

2017-01-15 Thread Derek Atkins
Hi,

There's one BIG difference between what you did and what I did...

On Sun, January 15, 2017 10:08 am, Gianluca Cecchi wrote:
> On Sun, Jan 15, 2017 at 3:39 PM, Derek Atkins  wrote:
>
>>
>>
>> FWIW, I'm running on a single-host system with hosted-engine.
>>
>
> I made the same update on two single-host environments with self-hosted
> engine without any problem.
> My approach was:
>
> - shutdown all VMs except self hosted engine
> - put environment in global maintenance
> - update the self hosted engine environment
> (with commands:
> yum update "ovirt-*-setup*"
> engine-setup
> )

I did "yum update" and not "yum update "ovirt-*-setup*".. and...

> - verify that the connection to the engine web admin GUI is still OK and
> the engine reports 4.0.6; the engine OS at this point is still 7.2

 I updated the OS to 7.3 in the engine VM.

I think that's the root of this bug: having PostgreSQL restarted out from
under dwhd. The fact that your engine is still at 7.2 implies you didn't
also perform the OS update on the engine. I wanted to do that. (Not sure
why you didn't.)

-derek

> - shutdown engine VM
> - put hypervisor host in local maintenance
> - stop some services (ovirt-ha-agent, ovirt-ha-broker, vdsmd)
> - run yum update, which brings the hypervisor to 7.3, vdsm and related
> packages to the 4.0.6 level, and qemu-kvm-ev to 2.6.0-27.1.el7
> - adjust/merge some rpmnew files (both general OS and oVirt-related)
> - stop vdsmd again (agent and broker remained down)
> - stop sanlock (sometimes it times out, so I "kill -9" the remaining
> process; otherwise the system is unable to shut down because it cannot
> umount the NFS filesystems. In my environment the host itself provides
> the NFS mounts for the data and ISO storage domains; the umount problem
> only affects the data one)
> - shutdown host and reboot it
> - exit maintenance
> - engine vm starts after a while
> - enter global maintenance again
> - yum update on engine vm and adjust rpmnew files
> - shutdown engine vm
> - exit global maintenance
> - after a while engine vm starts
> - power on the required VMs.
>
> Gianluca
>


-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant



Re: [ovirt-users] Black Screen Issue when installing Ovirt Hypervisor bare metal

2017-01-15 Thread Ilya Fedotov
Hello, Jeramy


 You should read these instructions for your single machine:

https://www.ovirt.org/develop/release-management/features/integration/allinone/

 It's very simple!

 Hope it goes well. Good luck!


with br, Ilya






2017-01-11 18:15 GMT+03:00 Jeramy Johnson :

> Hey Support, I'm new to oVirt and wanted to know if you can help me out.
> For some strange reason, when I try to install oVirt Node Hypervisor on a
> machine (bare metal) using the ISO, I get a black screen after I select
> "Install Ovirt Hypervisor" and nothing happens. Can someone help? The
> machine I'm using for deployment is an HP 280 Business PC: i5 processor,
> 8 GB memory, 1 TB hard drive.


Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-15 Thread Nir Soffer
On Thu, Jan 12, 2017 at 12:02 PM, Mark Greenall
 wrote:
> Firstly, thanks @Yaniv and thanks @Nir for your responses.
>
> @Yaniv, in answer to this:
>
>>> Why do you have 1 SD per VM?
>
> It's a combination of performance and ease of management. We ran some IO
> tests with various configurations and settled on this one for a balance of
> reduced IO contention and ease of management. If there is a better
> recommended way of handling these then I'm all ears. If you believe having
> a large number of storage domains adds to the problem then we can also
> review the setup.

Yes, having one storage domain per VM is an extremely fragile way to use
storage domains: any problem in monitoring one of the 45 storage domains can
make the entire host non-operational.

You should use storage domains for grouping volumes that need to be separated
from other volumes, for example production, staging, different users,
different types of storage, etc.

If some VMs need high IO and you want one or more dedicated devices per VM,
you should use direct LUNs.

If you need snapshots, live storage migration, etc., use volumes on a storage
domain.

I looked at the logs, and I can explain why your system becomes non-operational.

Grepping the domain monitor logs, we see that many storage domains have very
slow reads (up to a 749-second read delay):

(I filtered the log with awk; I don't have the exact command at hand now.)
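
Something along these lines would reproduce that filtering; this is only a
sketch against the vdsm.log format shown below, not necessarily the command
I used:

# Print the domain monitor check lines, sorted by the elapsed read time,
# so the slowest storage domains show up last.
grep '_check_completed' /var/log/vdsm/vdsm.log \
  | awk -F'elapsed=' '{print $2, $0}' \
  | sort -n \
  | tail -20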
Thread-12::DEBUG::2017-01-11
15:09:18,785::check::327::storage.check::(_check_completed)
'/dev/7dfeac70-eaa1-4ba6-ad2a-e3c11564ee3b/metadata' elapsed=0.05
Thread-12::DEBUG::2017-01-11
15:09:28,780::check::327::storage.check::(_check_completed)
'/dev/7dfeac70-eaa1-4ba6-ad2a-e3c11564ee3b/metadata' elapsed=0.03
Thread-12::DEBUG::2017-01-11
15:09:38,778::check::327::storage.check::(_check_completed)
'/dev/7dfeac70-eaa1-4ba6-ad2a-e3c11564ee3b/metadata' elapsed=0.03
Thread-12::DEBUG::2017-01-11
15:09:48,777::check::327::storage.check::(_check_completed)
'/dev/7dfeac70-eaa1-4ba6-ad2a-e3c11564ee3b/metadata' elapsed=0.03
Thread-12::DEBUG::2017-01-11
15:09:55,863::check::327::storage.check::(_check_completed)
'/dev/e70839af-77dd-40c0-a541-d364d30e859a/metadata' elapsed=0.02
Thread-12::DEBUG::2017-01-11
15:09:55,957::check::327::storage.check::(_check_completed)
'/dev/1198e513-bdc8-4d5f-8ee5-8e8dc30d309d/metadata' elapsed=0.03
Thread-12::DEBUG::2017-01-11
15:09:56,070::check::327::storage.check::(_check_completed)
'/dev/640ac4d3-1e14-465a-9a72-cc2f2c4cfe26/metadata' elapsed=0.03
Thread-12::DEBUG::2017-01-11
15:09:56,741::check::327::storage.check::(_check_completed)
'/dev/6e98b678-a955-49b8-aad7-e1e52e26db1f/metadata' elapsed=0.03
Thread-12::DEBUG::2017-01-11
15:09:56,798::check::327::storage.check::(_check_completed)
'/dev/02d31cfc-f095-42e6-8396-d4dbebbb4fed/metadata' elapsed=0.03
Thread-12::DEBUG::2017-01-11
15:09:57,080::check::327::storage.check::(_check_completed)
'/dev/4b23a421-5c1f-4541-a007-c93b7af4986b/metadata' elapsed=0.03
Thread-12::DEBUG::2017-01-11
15:09:57,248::check::327::storage.check::(_check_completed)
'/dev/5d8d49e2-ce0e-402e-9348-94f9576e2e28/metadata' elapsed=0.03
Thread-12::DEBUG::2017-01-11
15:09:57,425::check::327::storage.check::(_check_completed)
'/dev/9fcbb7b1-a13a-499a-a534-119360d57f00/metadata' elapsed=0.08
Thread-12::DEBUG::2017-01-11
15:09:57,715::check::327::storage.check::(_check_completed)
'/dev/a25ded63-2c31-4f1d-a65a-5390e47fda99/metadata' elapsed=0.04
Thread-12::DEBUG::2017-01-11
15:09:57,750::check::327::storage.check::(_check_completed)
'/dev/f6a91d2f-ccae-4440-b1a7-f62ee750a58c/metadata' elapsed=0.05
Thread-12::DEBUG::2017-01-11
15:09:58,007::check::327::storage.check::(_check_completed)
'/dev/bfb1d6b2-b610-4565-b818-ab6ee856e023/metadata' elapsed=0.07
Thread-12::DEBUG::2017-01-11
15:09:58,170::check::327::storage.check::(_check_completed)
'/dev/84cfcb68-190f-4836-8294-d5752c07b762/metadata' elapsed=0.03
Thread-12::DEBUG::2017-01-11
15:09:58,556::check::327::storage.check::(_check_completed)
'/dev/2204a85e-c8c7-4e1e-b8e6-e392645077c6/metadata' elapsed=0.07
Thread-12::DEBUG::2017-01-11
15:09:58,805::check::327::storage.check::(_check_completed)
'/dev/7dfeac70-eaa1-4ba6-ad2a-e3c11564ee3b/metadata' elapsed=0.07
Thread-12::DEBUG::2017-01-11
15:09:59,093::check::327::storage.check::(_check_completed)
'/dev/78e59ee0-13ac-4176-8950-837498ba6038/metadata' elapsed=0.06
Thread-12::DEBUG::2017-01-11
15:09:59,159::check::327::storage.check::(_check_completed)
'/dev/b66a2944-a056-4a48-a3f9-83f509df5d1b/metadata' elapsed=0.06
Thread-12::DEBUG::2017-01-11
15:09:59,218::check::327::storage.check::(_check_completed)
'/dev/da05d769-27c2-4270-9bba-5277bf3636e6/metadata' elapsed=0.06
Thread-12::DEBUG::2017-01-11
15:09:59,247::check::327::storage.check::(_check_completed)
'/dev/819b51c0-96d7-43c2-b120-7adade60a2e2/metadata' elapsed=0.03
Thread-12::DEBUG::2017-01-11
15:09:59,363::check::327::storage.check::(_check_completed)
'/dev/24499abc-0e16-48a2-8512-6c34e99dfa5f/metadata' elapsed=0.02
Threa

Re: [ovirt-users] [ANN] oVirt 4.0.6 Release is now available

2017-01-15 Thread Gianluca Cecchi
On Sun, Jan 15, 2017 at 3:39 PM, Derek Atkins  wrote:

>
>
> FWIW, I'm running on a single-host system with hosted-engine.
>

I made the same update on two single-host environments with self-hosted
engine without any problem.
My approach was as follows (a condensed command sketch for the engine-side
steps appears after the list):

- shutdown all VMs except self hosted engine
- put environment in global maintenance
- update the self hosted engine environment
(with commands:
yum update "ovirt-*-setup*"
engine-setup
)
- verify that the connection to the engine web admin GUI is still OK and the
engine reports 4.0.6; the engine OS at this point is still 7.2
- shutdown engine VM
- put hypervisor host in local maintenance
- stop some services (ovirt-ha-agent, ovirt-ha-broker, vdsmd)
- run yum update, which brings the hypervisor to 7.3, vdsm and related
packages to the 4.0.6 level, and qemu-kvm-ev to 2.6.0-27.1.el7
- adjust/merge some rpmnew files (both general OS and oVirt-related)
- stop vdsmd again (agent and broker remained down)
- stop sanlock (sometimes it times out, so I "kill -9" the remaining process;
otherwise the system is unable to shut down because it cannot umount the NFS
filesystems. In my environment the host itself provides the NFS mounts for
the data and ISO storage domains; the umount problem only affects the data
one)
- shutdown host and reboot it
- exit maintenance
- engine vm starts after a while
- enter global maintenance again
- yum update on engine vm and adjust rpmnew files
- shutdown engine vm
- exit global maintenance
- after a while engine vm starts
- power on the required VMs.
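
Condensing the engine-side steps above into commands, roughly (a sketch only,
assuming a standard self-hosted-engine deployment; adjust to your
environment):

# On the host, before touching the engine VM:
hosted-engine --set-maintenance --mode=global

# Inside the engine VM: update only the oVirt setup packages, then rerun setup.
yum update "ovirt-*-setup*"
engine-setup

# Later, for the general OS update of the engine VM (7.2 -> 7.3):
yum update

# Once the engine VM is shut down and the host work is done:
hosted-engine --set-maintenance --mode=none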

Gianluca


Re: [ovirt-users] [ANN] oVirt 4.0.6 Release is now available

2017-01-15 Thread Derek Atkins
Hi,

Just FYI I upgraded from EL7.2/ovirt 4.0.5 to EL7.3/ovirt 4.0.6 and when
I ran engine-setup I ran into:

  https://bugzilla.redhat.com/show_bug.cgi?id=1293844

Specifically, engine-setup complained that dwhd was still running, but
systemctl status showed it was not running.

I was finally able to get around this by manually stopping/starting
ovirt-engine-dwhd until it actually came up again, and then engine-setup
ran fine. But what would cause it to get into this state in the first
place?
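
For reference, the workaround amounted to something like this (a sketch; the
number of stop/start attempts it took varied):

systemctl stop ovirt-engine-dwhd
systemctl start ovirt-engine-dwhd
systemctl status ovirt-engine-dwhd   # repeat until it shows "active (running)"
engine-setup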

I SUSPECT part of the issue was that I upgraded EL7.2 to EL7.3 at the same
time, which probably updated (and restarted) PG, and I didn't wait the
requisite hour until cron restarted it on its own?

FWIW, I'm running on a single-host system with hosted-engine.

Thanks,

-derek

Sandro Bonazzola  writes:

> The oVirt Project is pleased to announce the general availability of oVirt
> 4.0.6, as of January 10th, 2017.
>  
> This release is available now for:
> * Red Hat Enterprise Linux 7.3 or later
> * CentOS Linux (or similar) 7.3 or later
> * Fedora 23 (tech preview)
>  
> This release supports Hypervisor Hosts running:
> * Red Hat Enterprise Linux 7.3 or later
> * CentOS Linux (or similar) 7.3 or later
> * Fedora 23 (tech preview)
> * oVirt Next Generation Node 4.0
>  

-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant


Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-15 Thread Nir Soffer
On Fri, Jan 13, 2017 at 11:29 AM, Mark Greenall
 wrote:
> Hi Nir,
>
> Thanks very much for your feedback. It's really useful information. I'll
> keep my fingers crossed that it leads to a solution for us.
>
> All the settings we currently have were attempts to optimise the EqualLogic
> for Linux and oVirt.
>
> The multipath config settings came from this Dell Forum thread re: getting 
> EqualLogic to work with Ovirt 
> http://en.community.dell.com/support-forums/storage/f/3775/t/19529606

I don't think it is a good idea to copy undocumented changes into
multipath.conf like this.

You must understand every change you have in your multipath.conf. If you
cannot explain a change, you should use the defaults.

> The udev settings were from the Dell Optimizing SAN Environment for Linux 
> Guide here: 
> https://www.google.co.uk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0ahUKEwiXvJes4L7RAhXLAsAKHVWLDyQQFggiMAA&url=http%3A%2F%2Fen.community.dell.com%2Fdell-groups%2Fdtcmedia%2Fm%2Fmediagallery%2F20371245%2Fdownload&usg=AFQjCNG0J8uWEb90m-BwCH_nZJ8lEB3lFA&bvm=bv.144224172,d.d24&cad=rja

I'm not sure these changes were tested by anyone with oVirt.

The general approach is to first make the system work using the defaults,
applying only the required changes.

Tuning should be done after the system works, and after you can show that
you have performance issues that need tuning.
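
As a starting point for auditing this, something like the following shows
what the running daemon actually uses compared with your edited file (a
sketch only; the vdsm-tool line assumes a vdsm 4.x host, check your version
before running it):

# Dump the configuration the running multipath daemon actually uses.
multipathd show config > /tmp/multipath-running.conf

# Compare it with your edited file to see which sections you overrode.
diff -u /etc/multipath.conf /tmp/multipath-running.conf | less

# On an oVirt host, vdsm can regenerate its default multipath.conf:
vdsm-tool configure --module multipath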

> Perhaps some of the settings are now conflicting with Ovirt best practice as 
> you optimise the releases.
>
> As requested, here is the output of multipath -ll
>
> [root@uk1-ion-ovm-08 rules.d]# multipath -ll
> 364842a3403798409cf7d555c6b8bb82e dm-237 EQLOGIC ,100E-00
> size=1.5T features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 48:0:0:0  sdan 66:112 active ready running
>   `- 49:0:0:0  sdao 66:128 active ready running
> 364842a34037924a7bf7d25416b8be891 dm-212 EQLOGIC ,100E-00
> size=345G features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 42:0:0:0  sdah 66:16  active ready running
>   `- 43:0:0:0  sdai 66:32  active ready running
> 364842a340379c497f47ee5fe6c8b9846 dm-459 EQLOGIC ,100E-00
> size=175G features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 86:0:0:0  sdbz 68:208 active ready running
>   `- 87:0:0:0  sdca 68:224 active ready running
> 364842a34037944f2807fe5d76d8b1842 dm-526 EQLOGIC ,100E-00
> size=200G features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 96:0:0:0  sdcj 69:112 active ready running
>   `- 97:0:0:0  sdcl 69:144 active ready running
> 364842a3403798426d37e05bc6c8b6843 dm-420 EQLOGIC ,100E-00
> size=250G features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 82:0:0:0  sdbu 68:128 active ready running
>   `- 83:0:0:0  sdbw 68:160 active ready running
> 364842a340379449fbf7dc5406b8b2818 dm-199 EQLOGIC ,100E-00
> size=200G features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 38:0:0:0  sdad 65:208 active ready running
>   `- 39:0:0:0  sdae 65:224 active ready running
> 364842a34037984543c7d35a86a8bc8ee dm-172 EQLOGIC ,100E-00
> size=670G features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 36:0:0:0  sdaa 65:160 active ready running
>   `- 37:0:0:0  sdac 65:192 active ready running
> 364842a340379e4303c7dd5a76a8bd8b4 dm-140 EQLOGIC ,100E-00
> size=1.5T features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 33:0:0:0  sdx  65:112 active ready running
>   `- 32:0:0:0  sdy  65:128 active ready running
> 364842a340379b44c7c7ed53b6c8ba8c0 dm-359 EQLOGIC ,100E-00
> size=300G features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 69:0:0:0  sdbi 67:192 active ready running
>   `- 68:0:0:0  sdbh 67:176 active ready running
> 364842a3403790415d37ed5bb6c8b68db dm-409 EQLOGIC ,100E-00
> size=200G features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 80:0:0:0  sdbt 68:112 active ready running
>   `- 81:0:0:0  sdbv 68:144 active ready running
> 364842a34037964f7807f15d86d8b8860 dm-527 EQLOGIC ,100E-00
> size=200G features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 98:0:0:0  sdck 69:128 active ready running
>   `- 99:0:0:0  sdcm 69:160 active ready running
> 364842a34037944aebf7d85416b8ba895 dm-226 EQLOGIC ,100E-00
> size=200G features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 46:0:0:0  sdal 66:80  active ready running
>   `- 47:0:0:0  sdam 66:96  active ready running
> 364842a340379f44f7c7e053c6c8b98d2 dm-360 EQLOGIC ,100E-00
> size=450G features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=1 status=active
>   |- 70:0:0:0  sdbj 67:208 active ready running
>   `- 71:0:0:0  sdbk 67:224 active ready running
> 364842a34037924276e7e051e6c8b084f dm-308 EQLOGIC ,100E-00
> 

Re: [ovirt-users] PM proxy

2017-01-15 Thread Slava Bendersky
Hello Martin,
Thank you for the reply; I will post more details soon.

Slava. 


From: "Martin Perina"  
To: "Slava Bendersky"  
Cc: "users"  
Sent: Friday, January 13, 2017 2:17:28 AM 
Subject: Re: [ovirt-users] PM proxy 

Hi Slava, 

do you have at least one other host in the same cluster or DC which doesn't
have connection issues (in status Up or Maintenance)?
If so, please turn on debug logging for the power management part using the
following command:

/usr/share/ovirt-engine-wildfly/bin/jboss-cli.sh --controller=127.0.0.1:8706 --connect --user=admin@internal

and enter the following at the jboss-cli command prompt:

/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:add
/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:write-attribute(name=level,value=DEBUG)
quit

Afterwards you will see more details in engine.log about why other hosts were
rejected during the fence proxy selection process.
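
For example, something like this pulls out the relevant entries (assuming the
default engine log location; adjust the path if yours differs):

grep -E 'FenceProxyLocator|org\.ovirt\.engine\.core\.bll\.pm' /var/log/ovirt-engine/engine.log | tail -50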

By the way, the above debug log changes are not permanent; they will be
reverted on ovirt-engine restart or by using the following command:

/usr/share/ovirt-engine-wildfly/bin/jboss-cli.sh --controller=127.0.0.1:8706 --connect --user=admin@internal '/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:remove'


Regards 

Martin Perina 


On Thu, Jan 12, 2017 at 4:42 PM, Slava Bendersky <volga...@networklab.ca> wrote:



Hello Everyone,
I need help with this error. What could be missing or misconfigured?

2017-01-12 05:17:31,444 ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator]
(default task-38) [] Can not run fence action on host 'hosted_engine_1', no
suitable proxy host was found

I tried it from the shell on the host and it works fine.
Right now the PM proxy definition uses the default settings (dc, cluster).
Slava.



[ovirt-users] Black Screen Issue when installing Ovirt Hypervisor bare metal

2017-01-15 Thread Jeramy Johnson
Hey Support, I'm new to oVirt and wanted to know if you can help me out.
For some strange reason, when I try to install oVirt Node Hypervisor on a
machine (bare metal) using the ISO, I get a black screen after I select
"Install Ovirt Hypervisor" and nothing happens. Can someone help? The machine
I'm using for deployment is an HP 280 Business PC: i5 processor, 8 GB memory,
1 TB hard drive.