Re: [ovirt-users] "remove" option greyed out on Permissions tab

2017-07-18 Thread Ian Neilsen
Hey All

I've dug around trying to find a flag that enables the "Remove" option on
permissions, but can't find one. On every panel the "Remove" option is
greyed out. I need to prevent users other than admin from modifying the disk
of the engine manager, and unfortunately I can't do this.

Any ideas?

Thanks in advance
Ian
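
One avenue worth trying while the UI button stays greyed out (a sketch, not
verified against 4.1): the REST API exposes permissions as a collection, and a
single permission can in principle be removed with an HTTP DELETE. The engine
URL, credentials and permission id below are all placeholders.

```shell
#!/bin/sh
# Sketch only: build the REST call that would delete one permission.
# ENGINE, the credentials and PERM_ID are placeholders (assumptions);
# list the real ids first with a GET on $ENGINE/permissions.
ENGINE=https://engine.example.com/ovirt-engine/api
PERM_ID=00000000-0000-0000-0000-000000000000
CMD="curl -k -u admin@internal:PASSWORD -X DELETE $ENGINE/permissions/$PERM_ID"
# Printed rather than executed here, so nothing is deleted by accident:
echo "$CMD" | tee /tmp/perm-delete-cmd.txt
```

If the DELETE is rejected as well, the restriction is server-side policy
rather than a UI limitation.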


On 7 July 2017 at 13:53, Ian Neilsen  wrote:

> Hey guys
>
> I've just noticed that I am unable to choose the "Remove" option on any
> "Permissions" tab in oVirt self-hosted 4.1.
>
> Does anyone have a suggestion on how to fix this? I'm logged in as admin,
> the original admin created during installation.
>
> Thanks in Advance
>
> --
> Ian Neilsen
>
> Mobile: 0424 379 762
> Linkedin: http://au.linkedin.com/in/ianneilsen
> Twitter : ineilsen
>



-- 
Ian Neilsen

Mobile: 0424 379 762
Linkedin: http://au.linkedin.com/in/ianneilsen
Twitter : ineilsen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt Engine Reports oVirt >= 4

2017-07-18 Thread Victor José Acosta Domínguez
Hello everyone, quick question: is ovirt-engine-reports deprecated on oVirt
>= 4?

I ask because I can't find the ovirt-engine-reports package in the oVirt 4 repo.


Regards

Victor Acosta
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] iSCSI Multipath issues

2017-07-18 Thread Vinícius Ferrão
Hello,

I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m trying to 
enable the feature without success too.

Here’s what I’ve done, step-by-step.

1. Installed oVirt Node 4.1.3 with the following network settings:

eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
eno3 with 9216 MTU.
eno4 with 9216 MTU.
vlan11 on eno3 with 9216 MTU and fixed IP addresses.
vlan12 on eno4 with 9216 MTU and fixed IP addresses.

eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on 
different switches.

2. Started the self-hosted engine installation, after three hours of waiting 
caused by: https://bugzilla.redhat.com/show_bug.cgi?id=1454536

3. Selected iSCSI as default interface for Hosted Engine. Everything was fine.

4. On the Hosted Engine I’ve done the following:

a. System > Data Centers > Default > Networks
. Created iSCSI1 with VLAN 11 and MTU 9216, removed VM Network option.
. Created iSCSI2 with VLAN 12 and MTU 9216, removed VM Network option.

b. System > Data Centers > Default > Clusters > Default > Hosts > 
ovirt3.cc.if.ufrj.br (my machine)

Selected Setup Host Networks and moved iSCSI1 to eno3 and iSCSI2 to eno4. Both 
icons went green, indicating an “up” state.

c. System > Data Centers > Default > Clusters

Selected Logical Networks and then Manage Networks. Unchecked the Required 
checkbox for both iSCSI networks.

d. System > Data Centers > Default > Storage

Added an iSCSI share with two initiators. Both show up correctly.

e. System > Data Centers

Now the iSCSI Multipath tab is visible. Selected it and added an iSCSI Bond:
. iSCSI1 and iSCSI2 selected on Logical Networks.
. Two iqn’s selected on Storage Targets.

5. oVirt just goes down. VDSM goes haywire and everything effectively 
“crashes”. iSCSI itself is still alive, since we can still talk to the 
self-hosted engine, but **NOTHING** works. If the iSCSI bond is removed, 
everything returns to a usable state.

I’ve added the following files on my public page to help on debugging:
/var/log/vdsm/vdsm.log
/var/log/sanlock.log
/var/log/vdsm/mom.log
/var/log/messages

There are some random images of my configuration too: 
http://www.if.ufrj.br/~ferrao/ovirt/

What should be done now? Should I file a bug report?

Thanks,
V.

PS: My machine is reachable over the internet. So if anyone would like to 
connect to it, just let me know.
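
When comparing the working state with the broken one, it may help to snapshot
the host-side iSCSI/multipath view before and after creating the bond. A
guarded sketch (nothing here is specific to this setup; each command is
skipped if its tool is not installed):

```shell
#!/bin/sh
# Snapshot iSCSI session and multipath state into one file for later diffing.
# The commands are standard iscsi-initiator-utils / device-mapper-multipath
# tools; anything not installed is recorded and skipped.
OUT=/tmp/iscsi-state.txt
: > "$OUT"
for cmd in "iscsiadm -m session" "multipath -ll" "multipathd show paths"; do
    tool=${cmd%% *}
    if command -v "$tool" >/dev/null 2>&1; then
        { echo "== $cmd =="; $cmd; } >> "$OUT" 2>&1
    else
        echo "== $cmd (not installed, skipped) ==" >> "$OUT"
    fi
done
cat "$OUT"
```

Running it once with the bond and once without should show whether the paths
themselves drop, or only the oVirt layer misbehaves.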


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Backup oVirt Node configuration

2017-07-18 Thread Fernando Frediani
Folks, I had to reinstall an oVirt Node a few times these days. That meant
reconfiguring everything in order to add it back to the oVirt Engine.

What is the best way to back up an oVirt Node's configuration, so that when
you reinstall it (or it fails completely) you can just reinstall and restore
the backed-up files (network configuration, UUID, VDSM state, etc.)?
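
Until someone points out an official mechanism, a plain tarball of the
host-side state is a workable stopgap. A sketch (the path list is an
assumption about what a typical node carries; extend it to match your hosts,
e.g. bonding, iSCSI or multipath config):

```shell
#!/bin/sh
# Archive the config files typically needed to rebuild an oVirt node.
# Only paths that actually exist on this machine are included.
BACKUP=/tmp/node-config-$(uname -n)-$(date +%Y%m%d).tar.gz
LIST=$(mktemp)
for d in /etc/sysconfig/network-scripts /etc/vdsm /etc/multipath.conf \
         /etc/iscsi /var/lib/vdsm/persistence; do
    [ -e "$d" ] && echo "$d" >> "$LIST"
done
# -T reads the path list; an empty list still yields a valid (empty) archive
tar czf "$BACKUP" -T "$LIST" 2>/dev/null
echo "wrote $BACKUP"
```

Restoring is the reverse: reinstall the node, unpack the archive, restart
vdsmd, then re-add the host in the engine.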

Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [EXTERNAL] Re: Host stuck unresponsive after Network Outage

2017-07-18 Thread Anthony . Fillmore
[boxname ~]# systemctl status -l vdsm-network
● vdsm-network.service - Virtual Desktop Server Manager network restoration
   Loaded: loaded (/usr/lib/systemd/system/vdsm-network.service; enabled; 
vendor preset: enabled)
   Active: activating (start) since Tue 2017-07-18 10:42:57 CDT; 1h 29min ago
  Process: 8216 ExecStartPre=/usr/bin/vdsm-tool --vvverbose --append 
--logfile=/var/log/vdsm/upgrade.log upgrade-unified-persistence (code=exited, 
status=0/SUCCESS)
Main PID: 8231 (vdsm-tool)
   CGroup: /system.slice/vdsm-network.service
   ├─8231 /usr/bin/python /usr/bin/vdsm-tool restore-nets
   └─8240 /usr/bin/python /usr/share/vdsm/vdsm-restore-net-config

Jul 18 10:42:57 t0894bmh1001.stores.target.com systemd[1]: Starting Virtual 
Desktop Server Manager network restoration...
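
The unit has been stuck in "activating" for 1.5 hours inside restore-nets.
One thing worth checking (an assumption, not something the output above
confirms): restore-nets replays the persisted network configuration, and a
stale entry there can hang the restore. A guarded look at what it replays:

```shell
#!/bin/sh
# List the persisted network configs that vdsm-tool restore-nets replays at
# startup. The path is the usual unified-persistence location (an assumption;
# it can differ between vdsm versions).
PERSIST=/var/lib/vdsm/persistence/netconf
if [ -d "$PERSIST" ]; then
    find "$PERSIST" -maxdepth 2 -type f
else
    echo "no persisted vdsm network config at $PERSIST"
fi | tee /tmp/vdsm-netconf.txt
```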

Thanks,
Tony
From: Pavel Gashev [mailto:p...@acronis.com]
Sent: Tuesday, July 18, 2017 11:17 AM
To: Anthony.Fillmore ; users@ovirt.org
Cc: Brandon.Markgraf ; Sandeep.Mendiratta 

Subject: [EXTERNAL] Re: [ovirt-users] Host stuck unresponsive after Network 
Outage

Anthony,

Output of “systemctl status -l vdsm-network” would help.


From:  on behalf of "Anthony.Fillmore"
Date: Tuesday, 18 July 2017 at 18:13
To: "users@ovirt.org"
Cc: "Brandon.Markgraf", "Sandeep.Mendiratta"
Subject: [ovirt-users] Host stuck unresponsive after Network Outage

Hey Ovirt Users and Team,

I have a host that I am unable to recover after a network outage.  The host is 
stuck in unresponsive mode, even though it is on the network, reachable over 
SSH, and seems to be healthy.  I’ve tried several things to recover the host in 
oVirt, but have had no success so far.  I’d like to reach out to the community 
before blowing the host away and rebuilding it.

Environment: I have an oVirt Engine server with about 26 datacenters, and 2 to 
3 hosts per datacenter.  The engine is hosted centrally, with my hosts being 
bare-metal and distributed throughout my environment.  The engine is version 
4.0.6.

What I’ve tried: put the host into maintenance mode and rebooted it.  Confirmed 
the host was rebooted and tried to activate it; it goes back to unresponsive.   
Attempted a reinstall, which fails.
Checking from the host perspective, I can see the following problems:

[boxname~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
   Active: inactive (dead)

Jul 14 12:34:28 boxname systemd[1]: Dependency failed for Virtual Desktop 
Server Manager.
Jul 14 12:34:28 boxname systemd[1]: Job vdsmd.service/start failed with result 
'dependency'.

Going a bit deeper, the results of journalctl –xe:

[root@boxname ~]# journalctl -xe
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has begun shutting down.
Jul 18 09:07:31 boxname systemd[1]: Stopped Virtualization daemon.
-- Subject: Unit libvirtd.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has finished shutting down.
Jul 18 09:07:31 boxname systemd[1]: Reloading.
Jul 18 09:07:31 boxname systemd[1]: Binding to IPv6 address not available since 
kernel does not support IPv6.
Jul 18 09:07:31 boxname systemd[1]: [/usr/lib/systemd/system/rpcbind.socket:6] 
Failed to parse address value, ignoring: [::
Jul 18 09:07:31 boxname systemd[1]: Started Auxiliary vdsm service for running 
helper functions as root.
-- Subject: Unit supervdsmd.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit supervdsmd.service has finished starting up.
--
-- The start-up result is done.
Jul 18 09:07:31 boxname systemd[1]: Starting Auxiliary vdsm service for running 
helper functions as root...
-- Subject: Unit supervdsmd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit supervdsmd.service has begun starting up.
Jul 18 09:07:31 boxname systemd[1]: Starting Virtualization daemon...
-- Subject: Unit libvirtd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has begun starting up.
Jul 18 09:07:32 boxname systemd[1]: Started Virtualization daemon.
-- Subject: Unit libvirtd.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- 

Re: [ovirt-users] ovirt-hosted-engine state transition messages

2017-07-18 Thread Darrell Budic
I had some of this going on recently under 4.1.2; it started with one or two 
warning messages, then a flood of them. I upgraded to 4.1.3 and haven’t seen it 
since, but it’s only been a few days so far. A java process was consuming a lot 
of CPU, and the Data Warehouse appeared not to be collecting data (evidenced by 
a blank dashboard). My DWH has since recovered as well.

I forgot to check, but I suspect I was low on (or out of) memory on my engine 
VM; it’s an old one with only 6 GB currently allocated. I’m watching for this 
to happen again, and will confirm RAM utilization and bump it up appropriately 
if it looks like it’s starved for RAM.


> On Jul 18, 2017, at 5:45 AM, Christophe TREFOIS  
> wrote:
> 
> I have the same as you on 4.1.0
> 
> EngineBadHealth-EngineUp 1 minute later. Sometimes 20 times per day, mostly 
> on weekends.
> 
> Cheers,
> -- 
> 
> Dr Christophe Trefois, Dipl.-Ing.  
> Technical Specialist / Post-Doc
> 
> UNIVERSITÉ DU LUXEMBOURG
> 
> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
> Campus Belval | House of Biomedicine  
> 6, avenue du Swing 
> L-4367 Belvaux  
> T: +352 46 66 44 6124 
> F: +352 46 66 44 6949  
> http://www.uni.lu/lcsb 
> 
> 
> This message is confidential and may contain privileged information. 
> It is intended for the named recipient only. 
> If you receive it in error please notify me and permanently delete the 
> original message and any copies. 
> 
>   
> 
>> On 17 Jul 2017, at 17:35, Jim Kusznir wrote:
>> 
>> Ok, I've been ignoring this for a long time, as the logs were so verbose and 
>> didn't show anything I could identify as usable debug info.  Recently one of 
>> my ovirt hosts (currently NOT running the main engine, but a candidate) was 
>> cycling as many as 40 times a day between "EngineUpBadHealth" and "EngineUp".  
>> Here's the log snippet.  I included some time before and after in case that's 
>> helpful.  In this case, I got an email about bad health at 8:15 and a 
>> restore (engine up) at 8:16.  I see where the messages are sent, but I don't 
>> see any explanation as to why, or what the problem is.
>> 
>> BTW: 192.168.8.11 is this computer's physical IP; 192.168.8.12 is the 
>> computer currently running the engine.  Both are also hosting the gluster 
>> store (eg, I have 3 hosts, all are participating in the gluster replica 
>> 2+arbitrator).
>> 
>> I'd appreciate it if someone could shed some light on why this keeps 
>> happening!
>> 
>> --Jim
>> 
>> 
>> MainThread::INFO::2017-07-17 
>> 08:12:06,230::config::485::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
>>  Reloading vm.conf from the shared storage domain
>> MainThread::INFO::2017-07-17 
>> 08:12:06,230::config::412::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
>>  Trying to get a fresher copy of vm configuration from the OVF_STORE
>> MainThread::INFO::2017-07-17 
>> 08:12:08,877::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
>>  Found OVF_STORE: imgUUID:e10c90a5-4d9c-4e18-b6f7-ae8f0cdf4f57, 
>> volUUID:a9754d40-eda1-44d7-ac92-76a228f9f1ac
>> MainThread::INFO::2017-07-17 
>> 08:12:09,432::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
>>  Found OVF_STORE: imgUUID:f22829ab-9fd5-415a-9a8f-809d3f7887d4, 
>> volUUID:9f4760ee-119c-412a-a1e8-49e73e6ba929
>> MainThread::INFO::2017-07-17 
>> 08:12:09,925::ovf_store::112::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>>  Extracting Engine VM OVF from the OVF_STORE
>> MainThread::INFO::2017-07-17 
>> 08:12:10,324::ovf_store::119::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>>  OVF_STORE volume path: 
>> /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
>>  
>> MainThread::INFO::2017-07-17 
>> 08:12:10,696::config::431::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
>>  Found an OVF for HE VM, trying to convert
>> MainThread::INFO::2017-07-17 
>> 08:12:10,704::config::436::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
>>  Got vm.conf from OVF_STORE
>> MainThread::INFO::2017-07-17 
>> 08:12:10,705::states::426::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
>>  Engine vm running on localhost
>> MainThread::INFO::2017-07-17 
>> 08:12:10,714::hosted_engine::604::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
>>  Initializing VDSM
>> MainThread::INFO::2017-07-17 
>> 

Re: [ovirt-users] Host stuck unresponsive after Network Outage

2017-07-18 Thread Pavel Gashev
Anthony,

Output of “systemctl status -l vdsm-network” would help.


From:  on behalf of "Anthony.Fillmore" 

Date: Tuesday, 18 July 2017 at 18:13
To: "users@ovirt.org" 
Cc: "Brandon.Markgraf" , "Sandeep.Mendiratta" 

Subject: [ovirt-users] Host stuck unresponsive after Network Outage

Hey Ovirt Users and Team,

I have a host that I am unable to recover after a network outage.  The host is 
stuck in unresponsive mode, even though it is on the network, reachable over 
SSH, and seems to be healthy.  I’ve tried several things to recover the host in 
oVirt, but have had no success so far.  I’d like to reach out to the community 
before blowing the host away and rebuilding it.

Environment: I have an oVirt Engine server with about 26 datacenters, and 2 to 
3 hosts per datacenter.  The engine is hosted centrally, with my hosts being 
bare-metal and distributed throughout my environment.  The engine is version 
4.0.6.

What I’ve tried: put the host into maintenance mode and rebooted it.  Confirmed 
the host was rebooted and tried to activate it; it goes back to unresponsive.   
Attempted a reinstall, which fails.

Checking from the host perspective, I can see the following problems:

[boxname~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
   Active: inactive (dead)

Jul 14 12:34:28 boxname systemd[1]: Dependency failed for Virtual Desktop 
Server Manager.
Jul 14 12:34:28 boxname systemd[1]: Job vdsmd.service/start failed with result 
'dependency'.

Going a bit deeper, the results of journalctl –xe:

[root@boxname ~]# journalctl -xe
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has begun shutting down.
Jul 18 09:07:31 boxname systemd[1]: Stopped Virtualization daemon.
-- Subject: Unit libvirtd.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has finished shutting down.
Jul 18 09:07:31 boxname systemd[1]: Reloading.
Jul 18 09:07:31 boxname systemd[1]: Binding to IPv6 address not available since 
kernel does not support IPv6.
Jul 18 09:07:31 boxname systemd[1]: [/usr/lib/systemd/system/rpcbind.socket:6] 
Failed to parse address value, ignoring: [::
Jul 18 09:07:31 boxname systemd[1]: Started Auxiliary vdsm service for running 
helper functions as root.
-- Subject: Unit supervdsmd.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit supervdsmd.service has finished starting up.
--
-- The start-up result is done.
Jul 18 09:07:31 boxname systemd[1]: Starting Auxiliary vdsm service for running 
helper functions as root...
-- Subject: Unit supervdsmd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit supervdsmd.service has begun starting up.
Jul 18 09:07:31 boxname systemd[1]: Starting Virtualization daemon...
-- Subject: Unit libvirtd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has begun starting up.
Jul 18 09:07:32 boxname systemd[1]: Started Virtualization daemon.
-- Subject: Unit libvirtd.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has finished starting up.
--
-- The start-up result is done.
Jul 18 09:07:32 boxname systemd[1]: Starting Virtual Desktop Server Manager 
network restoration...
-- Subject: Unit vdsm-network.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit vdsm-network.service has begun starting up.

Does the community have suggestions on what can be done next to recover this 
host within oVirt?  I can provide additional log dumps as needed; please let me 
know what you need to assist further.

Thank you,
Tony

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host stuck unresponsive after Network Outage

2017-07-18 Thread Anthony . Fillmore
Hey Ovirt Users and Team,

I have a host that I am unable to recover after a network outage.  The host is 
stuck in unresponsive mode, even though it is on the network, reachable over 
SSH, and seems to be healthy.  I’ve tried several things to recover the host in 
oVirt, but have had no success so far.  I’d like to reach out to the community 
before blowing the host away and rebuilding it.

Environment: I have an oVirt Engine server with about 26 datacenters, and 2 to 
3 hosts per datacenter.  The engine is hosted centrally, with my hosts being 
bare-metal and distributed throughout my environment.  The engine is version 
4.0.6.

What I’ve tried: put the host into maintenance mode and rebooted it.  Confirmed 
the host was rebooted and tried to activate it; it goes back to unresponsive.   
Attempted a reinstall, which fails.

Checking from the host perspective, I can see the following problems:

[boxname~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
   Active: inactive (dead)

Jul 14 12:34:28 boxname systemd[1]: Dependency failed for Virtual Desktop 
Server Manager.
Jul 14 12:34:28 boxname systemd[1]: Job vdsmd.service/start failed with result 
'dependency'.

Going a bit deeper, the results of journalctl -xe:

[root@boxname ~]# journalctl -xe
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has begun shutting down.
Jul 18 09:07:31 boxname systemd[1]: Stopped Virtualization daemon.
-- Subject: Unit libvirtd.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has finished shutting down.
Jul 18 09:07:31 boxname systemd[1]: Reloading.
Jul 18 09:07:31 boxname systemd[1]: Binding to IPv6 address not available since 
kernel does not support IPv6.
Jul 18 09:07:31 boxname systemd[1]: [/usr/lib/systemd/system/rpcbind.socket:6] 
Failed to parse address value, ignoring: [::
Jul 18 09:07:31 boxname systemd[1]: Started Auxiliary vdsm service for running 
helper functions as root.
-- Subject: Unit supervdsmd.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit supervdsmd.service has finished starting up.
--
-- The start-up result is done.
Jul 18 09:07:31 boxname systemd[1]: Starting Auxiliary vdsm service for running 
helper functions as root...
-- Subject: Unit supervdsmd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit supervdsmd.service has begun starting up.
Jul 18 09:07:31 boxname systemd[1]: Starting Virtualization daemon...
-- Subject: Unit libvirtd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has begun starting up.
Jul 18 09:07:32 boxname systemd[1]: Started Virtualization daemon.
-- Subject: Unit libvirtd.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit libvirtd.service has finished starting up.
--
-- The start-up result is done.
Jul 18 09:07:32 boxname systemd[1]: Starting Virtual Desktop Server Manager 
network restoration...
-- Subject: Unit vdsm-network.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit vdsm-network.service has begun starting up.

Does the community have suggestions on what can be done next to recover this 
host within oVirt?  I can provide additional log dumps as needed; please let me 
know what you need to assist further.

Thank you,
Tony

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-18 Thread Uwe Laverenz

Hi,

just to avoid misunderstandings: the workaround I suggested means that I 
don't use oVirt's iSCSI bonding at all (because it makes my environment 
misbehave in the same way you described).


cu,
Uwe
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-18 Thread Uwe Laverenz

Hi,


On 17.07.2017 at 14:11, Devin Acosta wrote:

I am still troubleshooting the issue; I haven’t found any resolution 
at this point yet. I need to figure it out by this Friday, 
otherwise I need to look at Xen or another solution. iSCSI and oVirt 
seem problematic.


The configuration of iSCSI multipathing via oVirt didn't work for me 
either. IIRC the underlying problem in my case was that I use totally 
isolated networks for each path.


Workaround: to make round robin work you have to enable it by editing 
"/etc/multipath.conf". Just add the 3 lines for the round robin setting 
(see comment in the file) and additionally add the "# VDSM PRIVATE" 
comment to keep vdsmd from overwriting your settings.


My multipath.conf:



# VDSM REVISION 1.3
# VDSM PRIVATE

defaults {
    polling_interval        5
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
    # 3 lines added manually for multipathing:
    path_selector           "round-robin 0"
    path_grouping_policy    multibus
    failback                immediate
}

# Remove devices entries when overrides section is available.
devices {
    device {
        # These settings override built-in device settings. They do not apply
        # to devices without built-in settings (those use the settings in the
        # "defaults" section), or to devices defined in the "devices" section.
        # Note: This is not available yet on Fedora 21. For more info see
        # https://bugzilla.redhat.com/1253799
        all_devs                yes
        no_path_retry           fail
    }
}




To enable the settings:

  systemctl restart multipathd

See if it works:

  multipath -ll


HTH,
Uwe
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] mirror

2017-07-18 Thread Barak Korren
On 18 July 2017 at 15:49, Fabrice Bacchella  wrote:
>
> I can't host a public mirror, but I was thinking more about hosting a private 
> mirror using rsync instead of ftp/http to get content.
>

We don't currently support anonymous rsync access to
resources.ovirt.org, but you can rsync from one of the other mirrors
that do support it.

Another option to consider is to use the 'reposync' tool.
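
For a private mirror, reposync pulls an enabled yum repo into a local
directory that can then be served or rsynced internally. A sketch (the repo
id is an assumption; check /etc/yum.repos.d on a machine with the
ovirt-release package installed for the real id):

```shell
#!/bin/sh
# Mirror one yum repo locally with reposync (ships in yum-utils), then build
# repo metadata so clients can point straight at the directory.
REPOID=ovirt-4.1            # assumed repo id; verify in /etc/yum.repos.d
DEST=/srv/mirror
echo "mirror target: repo $REPOID -> $DEST" > /tmp/reposync-note.txt
if command -v reposync >/dev/null 2>&1; then
    reposync --repoid="$REPOID" --download_path="$DEST" \
             --downloadcomps --download-metadata \
        && createrepo "$DEST/$REPOID" \
        || echo "sync failed (check that repo id $REPOID exists here)"
else
    echo "reposync not installed; dry run only"
fi
```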

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] mirror

2017-07-18 Thread Fabrice Bacchella

> On 18 Jul 2017, at 12:10, Barak Korren wrote:
> 
> On 18 July 2017 at 12:57, Fabrice Bacchella  
> wrote:
>> I'm reading https://www.ovirt.org/develop/infra/repository-mirrors/
>> 
>> It says:
>> 
>> You'll find in resources.ovirt.org a user named mirror
>> 
>> I'm looking at http://resources.ovirt.org and don't see anything about that 
>> user. Where should I look ?
> 
> These are instructions for members of the oVirt infra team, with ssh
> access to resources.ovirt.org.
> 
> If you want to host a public oVirt mirror, please send a request to
> infra-support.ovirt.org with an SSH public key of the mirror server
> attached and someone from infra will contact you with instructions on
> how to proceed.
> 

I can't host a public mirror, but I was thinking more about hosting a private 
mirror using rsync instead of ftp/http to get content.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine/NFS Troubles

2017-07-18 Thread Pavel Gashev
Phillip,

The relevant lines from the vdsm logs are the following:

jsonrpc.Executor/6::INFO::2017-07-17 
14:24:41,005::logUtils::49::dispatcher::(wrapper) Run and protect: 
connectStorageServer(domType=1, spUUID=u'----', 
conList=[{u'protocol
_version': 3, u'connection': u'192.168.1.21:/srv/ovirt', u'user': u'kvm', 
u'id': u'dbeb8ab4-849f-4728-8ee9-f891bb84ce2f'}], options=None)
jsonrpc.Executor/6::DEBUG::2017-07-17 
14:24:41,006::fileUtils::209::Storage.fileUtils::(createdir) Creating 
directory: /rhev/data-center/mnt/192.168.1.21:_srv_ovirt mode: None
jsonrpc.Executor/6::DEBUG::2017-07-17 
14:24:41,007::fileUtils::218::Storage.fileUtils::(createdir) Using existing 
directory: /rhev/data-center/mnt/192.168.1.21:_srv_ovirt
jsonrpc.Executor/6::INFO::2017-07-17 
14:24:41,007::mount::226::storage.Mount::(mount) mounting 
192.168.1.21:/srv/ovirt at /rhev/data-center/mnt/192.168.1.21:_srv_ovirt
jsonrpc.Executor/6::ERROR::2017-07-17 
14:26:46,098::hsm::2403::Storage.HSM::(connectStorageServer) Could not connect 
to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 456, in connect
return self._mountCon.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 238, in connect
six.reraise(t, v, tb)
  File "/usr/share/vdsm/storage/storageServer.py", line 230, in connect
self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 229, in 
mount
timeout=timeout, cgroup=cgroup)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in 
__call__
return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 51, in 

**kwargs)
  File "", line 2, in mount
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in 
_callmethod
raise convert_to_error(kind, result)
MountError: (32, ';mount.nfs: Connection timed out\n')
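
The traceback ends in mount.nfs timing out, so the first check is whether the
failing host can perform that exact mount by hand. A sketch (server and
export are taken from the log above; the options are a plain NFSv3 mount set
to fail fast, not vdsm's exact option string):

```shell
#!/bin/sh
# Try the mount the engine attempted, failing fast instead of hanging.
# Needs root and mount.nfs; otherwise it only reports that it skipped.
SERVER=192.168.1.21
EXPORT=/srv/ovirt
MNT=$(mktemp -d)
if [ "$(id -u)" -eq 0 ] && command -v mount.nfs >/dev/null 2>&1; then
    mount -t nfs -o vers=3,proto=tcp,timeo=30,retrans=2,retry=0 \
        "$SERVER:$EXPORT" "$MNT" \
        && { echo "mount ok"; umount "$MNT"; } \
        || echo "mount failed: check exports, firewall and rpcbind on $SERVER"
else
    echo "skipped: need root and mount.nfs to run this check"
fi | tee /tmp/nfs-mount-check.txt
```

A timeout here too would point at the NFS server or the network path rather
than at oVirt.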


From:  on behalf of Phillip Bailey 

Date: Tuesday, 18 July 2017 at 13:48
To: Luca 'remix_tj' Lorenzetto 
Cc: users 
Subject: Re: [ovirt-users] Hosted Engine/NFS Troubles

On Mon, Jul 17, 2017 at 3:34 PM, Luca 'remix_tj' Lorenzetto wrote:
On Mon, Jul 17, 2017 at 9:05 PM, Phillip Bailey wrote:
> Hi,
>
> I'm having trouble with my hosted engine setup (v4.0) and could use some
> help. The problem I'm having is that whenever I try to add additional hosts
> to the setup via webadmin, the operation fails due to storage-related
> issues.
>
> webadmin shows the following error messages:
>
> "Host  cannot access the Storage Domain(s) hosted_storage
> attached to the Data Center Default. Setting Host state to Non-Operational.
> Failed to connect Host ovirt-node-1 to Storage Pool Default"
>

Hi Phillip,

your hosted engine storage is on NFS, right? Did you test whether you can
mount it manually on each host?

Hi Luca,

Yes, both storage domains are on NFS (v3) and I am able to successfully mount 
them manually on the hosts.

Luca



--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
>

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-hosted-engine state transition messages

2017-07-18 Thread Christophe TREFOIS
I have the same as you on 4.1.0

EngineBadHealth-EngineUp 1 minute later. Sometimes 20 times per day, mostly on 
weekends.

Cheers,

--

Dr Christophe Trefois, Dipl.-Ing.
Technical Specialist / Post-Doc

UNIVERSITÉ DU LUXEMBOURG

LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine
6, avenue du Swing
L-4367 Belvaux
T: +352 46 66 44 6124
F: +352 46 66 44 6949
http://www.uni.lu/lcsb



This message is confidential and may contain privileged information.
It is intended for the named recipient only.
If you receive it in error please notify me and permanently delete the original 
message and any copies.




On 17 Jul 2017, at 17:35, Jim Kusznir 
> wrote:

Ok, I've been ignoring this for a long time, as the logs were so verbose and 
didn't show anything I could identify as usable debug info.  Recently one of my 
ovirt hosts (currently NOT running the main engine, but a candidate) was 
cycling as many as 40 times a day between "EngineUpBadHealth" and "EngineUp".  
Here's the log snippet.  I included some time before and after in case that's 
helpful.  In this case, I got an email about bad health at 8:15 and a restore 
(engine up) at 8:16.  I see where the messages are sent, but I don't see any 
explanation as to why, or what the problem is.

BTW: 192.168.8.11 is this computer's physical IP; 192.168.8.12 is the computer 
currently running the engine.  Both are also hosting the gluster store (eg, I 
have 3 hosts, all are participating in the gluster replica 2+arbitrator).

I'd appreciate it if someone could shed some light on why this keeps happening!

--Jim


MainThread::INFO::2017-07-17 
08:12:06,230::config::485::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
 Reloading vm.conf from the shared storage domain
MainThread::INFO::2017-07-17 
08:12:06,230::config::412::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
 Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2017-07-17 
08:12:08,877::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
 Found OVF_STORE: imgUUID:e10c90a5-4d9c-4e18-b6f7-ae8f0cdf4f57, 
volUUID:a9754d40-eda1-44d7-ac92-76a228f9f1ac
MainThread::INFO::2017-07-17 
08:12:09,432::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
 Found OVF_STORE: imgUUID:f22829ab-9fd5-415a-9a8f-809d3f7887d4, 
volUUID:9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2017-07-17 
08:12:09,925::ovf_store::112::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
 Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2017-07-17 
08:12:10,324::ovf_store::119::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
 OVF_STORE volume path: 
/rhev/data-center/mnt/glusterSD/192.168.8.11:_engine/c0acdefb-7d16-48ec-9d76-659b8fe33e2a/images/f22829ab-9fd5-415a-9a8f-809d3f7887d4/9f4760ee-119c-412a-a1e8-49e73e6ba929
MainThread::INFO::2017-07-17 
08:12:10,696::config::431::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
 Found an OVF for HE VM, trying to convert
MainThread::INFO::2017-07-17 
08:12:10,704::config::436::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
 Got vm.conf from OVF_STORE
MainThread::INFO::2017-07-17 
08:12:10,705::states::426::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
 Engine vm running on localhost
MainThread::INFO::2017-07-17 
08:12:10,714::hosted_engine::604::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
 Initializing VDSM
MainThread::INFO::2017-07-17 
08:12:14,426::hosted_engine::630::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
 Connecting the storage
MainThread::INFO::2017-07-17 
08:12:14,470::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
 Connecting storage server
MainThread::INFO::2017-07-17 
08:12:19,648::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
 Connecting storage server
MainThread::INFO::2017-07-17 
08:12:19,900::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
 Refreshing the storage domain
MainThread::INFO::2017-07-17 
08:12:20,298::hosted_engine::657::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
 Preparing images
MainThread::INFO::2017-07-17 
08:12:20,298::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
 Preparing images
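
(For anyone chasing a similar EngineUp/EngineUpBadHealth flap, here is a rough 
triage sketch, not an official oVirt procedure. Log paths are the 4.1 package 
defaults; the bad-health verdict actually comes from the HA broker's monitors, 
so broker.log is often more telling than the agent.log excerpt above.)

```shell
#!/bin/sh
# Sketch: collect the health-related lines from both hosted-engine HA logs.
# Adjust the paths if your installation puts them elsewhere.
AGENT_LOG=/var/log/ovirt-hosted-engine-ha/agent.log
BROKER_LOG=/var/log/ovirt-hosted-engine-ha/broker.log

for log in "$AGENT_LOG" "$BROKER_LOG"; do
    if [ -r "$log" ]; then
        echo "== health-related lines in $log =="
        # EngineUp also matches EngineUpBadHealth; fine for a first pass
        grep -E 'EngineUpBadHealth|EngineUp|engine_health' "$log" | tail -n 20
    else
        echo "== $log not readable on this machine =="
    fi
done

# On a live host, also compare the HA view from every host:
#   hosted-engine --vm-status
RESULT="triage-complete"
echo "$RESULT"
```

If the broker reports the engine liveliness check failing while the VM itself 
stays up, the usual suspects are the engine's health page timing out under 
load, or the underlying storage (gluster here) stalling briefly.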

Re: [ovirt-users] Hosted Engine/NFS Troubles

2017-07-18 Thread Phillip Bailey
On Mon, Jul 17, 2017 at 3:34 PM, Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> On Mon, Jul 17, 2017 at 9:05 PM, Phillip Bailey 
> wrote:
> > Hi,
> >
> > I'm having trouble with my hosted engine setup (v4.0) and could use some
> > help. The problem I'm having is that whenever I try to add additional
> hosts
> > to the setup via webadmin, the operation fails due to storage-related
> > issues.
> >
> > webadmin shows the following error messages:
> >
> > "Host  cannot access the Storage Domain(s) hosted_storage
> > attached to the Data Center Default. Setting Host state to
> Non-Operational.
> > Failed to connect Host ovirt-node-1 to Storage Pool Default"
> >
>
> Hi Phillip,
>
> your hosted engine storage is on nfs, right? Did you test if you can
> mount manually on each host?
>
Hi Luca,

Yes, both storage domains are on NFS (v3) and I am able to successfully
mount them manually on the hosts.
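
(A plain manual mount can succeed where VDSM still fails, so a sketch of a 
closer test follows. The server/export values are placeholders, and the option 
string approximates VDSM's NFSv3 defaults; verify against your own setup. A 
frequent cause of the "Non-Operational" error is the export not being writable 
by the vdsm user rather than reachability.)

```shell
#!/bin/sh
# Sketch: mount an NFS export roughly the way VDSM mounts an NFSv3 storage
# domain, then check that the vdsm user can write to it.
SERVER=192.0.2.10                  # placeholder (TEST-NET address)
EXPORT=/exports/hosted_storage     # placeholder export path
MNT=$(mktemp -d)

if mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,vers=3 \
        "$SERVER:$EXPORT" "$MNT" 2>/dev/null; then
    echo "mount OK"
    # VDSM runs as vdsm:kvm (uid/gid 36); root-squash or wrong ownership
    # on the export shows up here even when a root mount works:
    if sudo -u vdsm touch "$MNT/__perm_test" 2>/dev/null; then
        echo "vdsm can write"
        rm -f "$MNT/__perm_test"
    else
        echo "vdsm cannot write -- check export squash options and ownership"
    fi
    umount "$MNT"
else
    echo "mount failed -- check exports, firewall (2049/tcp, rpcbind), network"
fi
rmdir "$MNT"
```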

> Luca
>
>
>
> --
> "It is absurd to employ men of excellent intelligence to perform
> calculations that could be entrusted to anyone if machines were used"
> Gottfried Wilhelm von Leibniz, Philosopher and Mathematician (1646-1716)
>
> "The Internet is the world's largest library.
> It's just that all the books are on the floor."
> John Allen Paulos, Mathematician (1945-present)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.l...@gmail.com>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] mirror

2017-07-18 Thread Barak Korren
On 18 July 2017 at 12:57, Fabrice Bacchella  wrote:
> I'm reading https://www.ovirt.org/develop/infra/repository-mirrors/
>
> It says:
>
> You'll find in resources.ovirt.org a user named mirror
>
> I'm looking at http://resources.ovirt.org and don't see anything about that 
> user. Where should I look ?

These are instructions for members of the oVirt infra team, with ssh
access to resources.ovirt.org.

If you want to host a public oVirt mirror, please send a request to
infra-support.ovirt.org with an SSH public key of the mirror server
attached and someone from infra will contact you with instructions on
how to proceed.

Thanks,

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] mirror

2017-07-18 Thread Fabrice Bacchella
I'm reading https://www.ovirt.org/develop/infra/repository-mirrors/

It says:

You'll find in resources.ovirt.org a user named mirror

I'm looking at http://resources.ovirt.org and don't see anything about that 
user. Where should I look ?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-18 Thread Elad Ben Aharon
Hi,

Please make sure that the hosts can reach the iSCSI targets on your Dell
storage using the NICs that are used by the 2 networks dedicated to iSCSI.
You can check this with 'ping -I <iSCSI NIC> <target IP>'.
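
(As a concrete sketch of that check: the interface names and portal addresses 
below are placeholders; substitute your own iSCSI NICs and the portal IPs of 
both fault domains.)

```shell
#!/bin/sh
# Sketch: ping every iSCSI portal through every dedicated iSCSI NIC.
# A path that fails here will also fail at iSCSI login time.
IFACES="eno3 eno4"                 # placeholder NIC names
PORTALS="192.0.2.11 192.0.2.12"    # placeholder portal addresses (TEST-NET)

FAILED=0
for ifc in $IFACES; do
    for portal in $PORTALS; do
        # -I binds the source interface, so routing cannot "fix" a bad path
        if ping -I "$ifc" -c 2 -W 2 "$portal" >/dev/null 2>&1; then
            echo "OK:   $ifc -> $portal"
        else
            echo "FAIL: $ifc -> $portal"
            FAILED=$((FAILED + 1))
        fi
    done
done
echo "failing paths: $FAILED"
```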



Thanks,

ELAD BEN AHARON

SENIOR QUALITY ENGINEER

Red Hat Israel Ltd. 

34 Jerusalem Road, Building A, 1st floor

Ra'anana, Israel 4350109

ebena...@redhat.com
T: +972-9-7692007/8272007
TRIED. TESTED. TRUSTED.


On Mon, Jul 17, 2017 at 3:11 PM, Devin Acosta 
wrote:

> V.,
>
> I am still troubleshooting the issue; I haven't found any resolution to my
> issue at this point. I need to figure it out by this Friday, otherwise I
> need to look at Xen or another solution. iSCSI and oVirt seem problematic.
>
>
> --
>
> Devin Acosta
> Red Hat Certified Architect, LinuxStack
>
> On July 16, 2017 at 11:53:59 PM, Vinícius Ferrão (fer...@if.ufrj.br)
> wrote:
>
> Have you found any solution for this problem?
>
> I'm using a FreeNAS machine to serve iSCSI, but I have exactly the same
> problem. I've reinstalled oVirt at least 3 times over the weekend trying
> to solve the issue.
>
> At this moment my iSCSI Multipath tab is just inconsistent. I can't see
> both VLANs under "Logical networks", and only one target shows up under
> Storage Targets.
>
> When I did manage to find two targets, everything went down and I needed
> to reboot the host and the Hosted Engine to recover oVirt.
>
> V.
>
> On 11 Jul 2017, at 19:29, Devin Acosta  wrote:
>
>
> I am using the latest release of oVirt 4.1.3, and I am connecting a Dell
> Compellent SAN that has 2 fault domains, each on a separate VLAN, that I have
> attached to oVirt. From what I understand I am supposed to go into the "iSCSI
> Multipathing" option and add a bond of the iSCSI interfaces. I have done
> this, selecting the 2 logical networks together for iSCSI. I notice that
> there is an option below to select Storage Targets, but if I select the
> storage targets together with the logical networks, the cluster goes crazy
> and appears to be mad. Storage, nodes, and everything goes offline even
> though I also have NFS attached to the cluster.
>
> How should this best be configured? What we notice is that when the server
> reboots it seems to log into the SAN correctly, but according to the Dell
> SAN it is only logged into one controller, so it pulls both fault domains
> from a single controller.
>
> Please Advise.
>
> Devin
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
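
(Regarding the single-controller symptom Devin describes: a quick way to see 
what the host actually logged into after a reboot is to count the distinct 
portals behind the active sessions. A sketch follows; `iscsiadm` comes from 
iscsi-initiator-utils, and the parsing assumes its usual 
`tcp: [N] IP:port,tpgt target` session line format.)

```shell
#!/bin/sh
# Sketch: list active iSCSI sessions and count distinct portal addresses.
# On a healthy two-fault-domain setup you should see portals from BOTH
# iSCSI VLANs; only one VLAN's portals means you are on one controller.
if command -v iscsiadm >/dev/null 2>&1; then
    iscsiadm -m session 2>/dev/null || echo "no active iSCSI sessions"
    # Field 3 is "IP:port,tpgt"; strip everything after the IP and dedupe
    NPORTALS=$(iscsiadm -m session 2>/dev/null \
        | awk '{print $3}' | cut -d: -f1 | sort -u | wc -l)
else
    echo "iscsiadm not installed on this machine"
    NPORTALS=0
fi
echo "distinct portals: $NPORTALS"
```

If only one controller's portals show up, re-run discovery against a portal in 
each fault domain and verify per-NIC reachability as Elad suggested.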
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users