Re: [ovirt-users] PM proxy

2017-01-12 Thread Martin Perina
Hi Slava,

do you have at least one other host in the same cluster or DC which
doesn't have connection issues (in status Up or Maintenance)?
If so, please turn on debug logging for the power management part using
the following command:

/usr/share/ovirt-engine-wildfly/bin/jboss-cli.sh --controller=127.0.0.1:8706 \
    --connect --user=admin@internal

and enter the following at the jboss-cli command prompt:

/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:add
/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:write-attribute(name=level,value=DEBUG)
quit

Afterwards you will see more details in engine.log about why other hosts
were rejected during the fence proxy selection process.
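
The same changes can also be applied non-interactively, one command per
invocation, following the same pattern as the revert command below (a
sketch along those lines, not verified):

/usr/share/ovirt-engine-wildfly/bin/jboss-cli.sh --controller=127.0.0.1:8706 \
    --connect --user=admin@internal \
    '/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:add'
/usr/share/ovirt-engine-wildfly/bin/jboss-cli.sh --controller=127.0.0.1:8706 \
    --connect --user=admin@internal \
    '/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:write-attribute(name=level,value=DEBUG)'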

Btw, the above debug log changes are not permanent; they will be reverted on
ovirt-engine restart, or by using the following command:

/usr/share/ovirt-engine-wildfly/bin/jboss-cli.sh --controller=127.0.0.1:8706 \
    --connect --user=admin@internal \
    '/subsystem=logging/logger=org.ovirt.engine.core.bll.pm:remove'


Regards

Martin Perina


On Thu, Jan 12, 2017 at 4:42 PM, Slava Bendersky 
wrote:

> Hello Everyone,
> I need help with this error. What could be missing or misconfigured?
>
> 2017-01-12 05:17:31,444 ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator]
> (default task-38) [] Can not run fence action on host 'hosted_engine_1', no
> suitable proxy host was found
>
> I tried from shell on host and it works fine.
> Right now settings default dc, cluster from PM proxy definition.
> Slava.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-12 Thread Gianluca Cecchi
On Fri, Jan 13, 2017 at 12:10 AM, Nir Soffer  wrote:

> On Thu, Jan 12, 2017 at 6:01 PM, Nicolas Ecarnot 
> wrote:
> > Hi,
> >
> > As we are using a very similar hardware and usage as Mark (Dell poweredge
> > hosts, Dell Equallogic SAN, iSCSI, and tons of LUNs for all those VMs),
> I'm
> > jumping into this thread.
>
> Can you share your multipath.conf that works with Dell Equallogic SAN?
>
>
I'm jumping in to share my current config with an EQL SAN and RH EL /
CentOS (but not oVirt).
The examples below are for a system connected to a PS6510ES.
Please note that it is to be considered an element of discussion, to be
mixed and integrated with oVirt-specific requirements (e.g. no
friendly names).
Also, it is what I'm using on RH EL 6.8 clusters configured with RHCS. I have
not yet tested any RH EL / CentOS 7.x system with EQL iSCSI.

 - /etc/multipath.conf

defaults {
user_friendly_names yes
}

blacklist {
    wwid my_internal_disk_wwid

    device {
        vendor  "iDRAC"
        product "*"
    }
}

devices {
    device {
        vendor               "EQLOGIC"
        product              "100E-00"
        path_grouping_policy multibus
        features             "1 queue_if_no_path"
        path_checker         directio
        failback             immediate
        path_selector        "round-robin 0"
        rr_min_io            512
        rr_weight            priorities
    }
}


multipaths {
multipath {
wwid one_of_my_luns_wwid
alias mympfriendlyname
}

... other multipath sections for other luns

}
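
To load the new configuration and verify the resulting topology for one
LUN, something like this should work (a sketch; the alias is the one
defined above):

# EL6: reload multipathd after editing /etc/multipath.conf
service multipathd reload
# show the path topology for one aliased LUN
multipath -ll mympfriendlyname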


Other important configurations:

- /etc/iscsi/iscsid.conf
(settings other than the CHAP config parameters)

diff iscsid.conf iscsid.conf.orig
< #node.session.timeo.replacement_timeout = 120
< node.session.timeo.replacement_timeout = 15
---
> node.session.timeo.replacement_timeout = 120
130,131c125
< #node.session.err_timeo.lu_reset_timeout = 30
< node.session.err_timeo.lu_reset_timeout = 20
---
> node.session.err_timeo.lu_reset_timeout = 30
168,169c162
< # node.session.initial_login_retry_max = 8
< node.session.initial_login_retry_max = 12
---
> node.session.initial_login_retry_max = 8
178,179c171
< #node.session.cmds_max = 128
< node.session.cmds_max = 1024
---
> node.session.cmds_max = 128
183,184c175
< #node.session.queue_depth = 32
< node.session.queue_depth = 128
---
> node.session.queue_depth = 32
310,311c301
< #node.session.iscsi.FastAbort = Yes
< node.session.iscsi.FastAbort = No
---
> node.session.iscsi.FastAbort = Yes


- config files of the network adapters dedicated to iSCSI
They are 10 Gb/s interfaces
(lspci gives:
05:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port
Backplane Connection (rev 01)
)
/etc/sysconfig/network-scripts/ifcfg-eth4
DEVICE=eth4
BOOTPROTO=static
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
IPADDR=10.10.100.227
NETMASK=255.255.255.0
TYPE=Ethernet
MTU=9000

similar for eth5 (IP is 10.10.100.228)

ifup eth4
ifup eth5

- /etc/sysctl.conf
net.ipv4.conf.eth4.arp_announce=2
net.ipv4.conf.eth4.arp_ignore=1
net.ipv4.conf.eth4.arp_filter=2
#
net.ipv4.conf.eth5.arp_announce=2
net.ipv4.conf.eth5.arp_ignore=1
net.ipv4.conf.eth5.arp_filter=2

To apply the changes:
sysctl -p

Verify ping to the portal (10.10.100.7) from both interfaces:
ping -I eth4 10.10.100.7
ping -I eth5 10.10.100.7

To verify jumbo frame connections (if configured, as in my case), use the
largest payload that fits in a 9000-byte MTU (9000 minus 20 bytes of IP
header and 8 bytes of ICMP header = 8972):
ping 10.10.100.7 -M do -s 8972 -I eth4
ping 10.10.100.7 -M do -s 8972 -I eth5


- configuration of the iSCSI interfaces
iscsiadm -m iface -I ieth4 --op=new
iscsiadm -m iface -I ieth5 --op=new
iscsiadm -m iface -I ieth4 --op=update -n iface.hwaddress -v
XX:XX:XX:XX:XX:XX
iscsiadm -m iface -I ieth5 --op=update -n iface.hwaddress -v
YY:YY:YY:YY:YY:YY
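
After the binding, discovery and login can then go through both interfaces
(a sketch using the portal IP from above):

# discover targets through both bound interfaces, then log in to all nodes
iscsiadm -m discovery -t sendtargets -p 10.10.100.7 -I ieth4 -I ieth5
iscsiadm -m node -L all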


output of some commands with this config

# iscsiadm -m session | grep mylun
tcp: [3] 10.10.100.7:3260,1 iqn.2001-05.com.equallogic:0--mylun
(non-flash)
tcp: [4] 10.10.100.7:3260,1 iqn.2001-05.com.equallogic:0--mylun
(non-flash)

with "-P 1" option

Target: iqn.2001-05.com.equallogic:0--mylun (non-flash)
Current Portal: 10.10.100.38:3260,1
Persistent Portal: 10.10.100.7:3260,1
**
Interface:
**
Iface Name: ieth5
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:aea9b71a9aaf
Iface IPaddress: 10.10.100.228
Iface HWaddress: 
Iface Netdev: eth5
SID: 3
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Current Portal: 10.10.100.37:3260,1
Persistent Portal: 10.10.100.7:3260,1
**
Interface:
**
Iface Name: ieth4
Iface Transport: tcp
Iface Ini

Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-12 Thread Nir Soffer
On Thu, Jan 12, 2017 at 6:01 PM, Nicolas Ecarnot  wrote:
> Hi,
>
> As we are using a very similar hardware and usage as Mark (Dell poweredge
> hosts, Dell Equallogic SAN, iSCSI, and tons of LUNs for all those VMs), I'm
> jumping into this thread.

Can you share your multipath.conf that works with Dell Equallogic SAN?

>
> Le 12/01/2017 à 16:29, Yaniv Kaul a écrit :
>
>
> While it's a bit of a religious war on what is preferred with iSCSI -
> network level bonding (LACP) or multipathing on the iSCSI level, I'm on the
> multipathing side. The main reason is that you may end up easily using just
> one of the paths in a bond - if your policy is not set correctly on how to
> distribute connections between the physical links (remember that each
> connection sticks to a single physical link. So it really depends on the
> hash policy and even then - not so sure). With iSCSI multipathing you have
> more control - and it can also be determined by queue depth, etc.
> (In your example, if you have SRC A -> DST 1 and SRC B -> DST 1 (as you seem
> to have), both connections may end up on the same physical NIC.)
>
>>
>>
>> If we reduce the number of storage domains, we reduce the number of
>> devices and therefore the number of LVM Physical volumes that appear in
>> Linux correct? At the moment each connection results in a Linux device which
>> has its own queue. We have some guests with high IO loads on their device
>> whilst others are low. All the storage domain / datastore sizing guides we
>> found seem to imply it’s a trade-off between ease of management (i.e. not
>> having millions of domains to manage), IO contention between guests on a
>> single large storage domain / datastore and possible wasted space on storage
>> domains. If you have further information on recommendations, I am more than
>> willing to change things as this problem is making our environment somewhat
>> unusable at the moment. I have hosts that I can’t bring online and therefore
>> reduced resiliency in clusters. They used to work just fine but the
>> environment has grown over the last year and we also upgraded the Ovirt
>> version from 3.6 to 4.x. We certainly had other problems, but host
>> activation wasn’t one of them and it’s a problem that’s driving me mad.
>
>
> I would say that each path has its own device (and therefore its own queue).
> So I'd argue that you may want to have (for example) 4 paths to each LUN or
> perhaps more (8?). For example, with 2 NICs, each connecting to two
> controllers, each controller having 2 NICs (so no SPOF and nice number of
> paths).
>
> Here, one key point I'm trying (to no avail) to discuss for years with
> Redhat people, and either I did not understand, or I wasn't clear
> enough, or Redhat people answered me they owned no Equallogic SAN to test
> it, is:
> My (and maybe many others) Equallogic SAN has two controllers, but is
> publishing only *ONE* virtual ip address.
> On one of our other EMC SAN, publishing *TWO* ip addresses, which can be
> published in two different subnets, I fully understand the benefits and
> working of multipathing (and even in the same subnet, our oVirt setup is
> happily using multipath).
>
> But on one of our oVirt setup using the Equallogic SAN, we have no choice
> but point our hosts iSCSI interfaces to one single SAN ip, so no multipath
> here.
>
> At this point, we saw no other means than using bonding mode 1 to reach our
> SAN, which is terrible for storage experts.
>
>
> To come back to Mark's story, we are still using 3.6.5 DCs and planning to
> upgrade.
> Reading all this is making me delay this step.
>
> --
> Nicolas ECARNOT
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-12 Thread Nir Soffer
On Thu, Jan 12, 2017 at 12:02 PM, Mark Greenall
 wrote:
> Firstly, thanks @Yaniv and thanks @Nir for your responses.
>
> @Yaniv, in answer to this:
>
>>> Why do you have 1 SD per VM?
>
> It's a combination of performance and ease of management. We ran some IO 
> tests with various configurations and settled on this one for a balance of 
> reduced IO contention and ease of management. If there is a better 
> recommended way of handling these then I'm all ears. If you believe having a 
> large amount of storage domains adds to the problem then we can also review 
> the setup.
>
>>> Can you try and disable (mask) the lvmetad service on the hosts and see if 
>>> it improves matters?
>
> Disabled and masked the lvmetad service and tried again this morning. It 
> seemed to be less of a load / quicker getting the initial activation of the 
> host working but the end result was still the same. Just under 10 minutes 
> later the node went non-operational and the cycle began again. By 09:27 we 
> had the high CPU load and repeating lvm cycle.
>
> Host Activation: 09:06
> Host Up: 09:08
> Non-Operational: 09:16
> LVM Load: 09:27
> Host Reboot: 09:30
>
> From yesterday and today I've attached messages, sanlock.log and 
> multipath.conf files too. Although I'm not sure the messages file will be of 
> much use as it looks like log rate limiting kicked in and suppressed messages 
> for the duration of the process. I'm booted off the kernel with debugging but 
> maybe that's generating too much info? Let me know if you want me to change 
> anything here to get additional information.
>
> As added configuration information we also have the following settings from 
> the Equallogic and Linux install guide:
>
> /etc/sysctl.conf:
>
> # Prevent ARP Flux for multiple NICs on the same subnet:
> net.ipv4.conf.all.arp_ignore = 1
> net.ipv4.conf.all.arp_announce = 2
> # Loosen RP Filter to allow multiple iSCSI connections
> net.ipv4.conf.all.rp_filter = 2
>
>
> And the following /lib/udev/rules.d/99-eqlsd.rules:
>
> #-
> #  Copyright (c) 2010-2012 by Dell, Inc.
> #
> # All rights reserved.  This software may not be copied, disclosed,
> # transferred, or used except in accordance with a license granted
> # by Dell, Inc.  This software embodies proprietary information
> # and trade secrets of Dell, Inc.
> #
> #-
> #
> # Various Settings for Dell Equallogic disks based on Dell Optimizing SAN 
> Environment for Linux Guide
> #
> # Modify disk scheduler mode to noop
> ACTION=="add|change", SUBSYSTEM=="block", ATTRS{vendor}=="EQLOGIC", 
> RUN+="/bin/sh -c 'echo noop > /sys/${DEVPATH}/queue/scheduler'"
> # Modify disk timeout value to 60 seconds
> ACTION!="remove", SUBSYSTEM=="block", ATTRS{vendor}=="EQLOGIC", RUN+="/bin/sh 
> -c 'echo 60 > /sys/%p/device/timeout'"

This timeout may cause large timeouts in vdsm commands accessing
storage, it may cause timeouts in various flows, and it may cause your
domain to become inactive - since you set this for all domains, it may
cause the entire host to become non-operational.

I recommend removing this rule.

> # Modify read ahead value to 1024
> ACTION!="remove", SUBSYSTEM=="block", ATTRS{vendor}=="EQLOGIC", RUN+="/bin/sh 
> -c 'echo 1024 > /sys/${DEVPATH}/bdi/read_ahead_kb'"

In your multipath.conf, I see that you changed a lot of the defaults
recommended by ovirt:

defaults {
    deferred_remove     yes
    dev_loss_tmo        30
    fast_io_fail_tmo    5
    flush_on_last_del   yes
    max_fds             4096
    no_path_retry       fail
    polling_interval    5
    user_friendly_names no
}

You are using:

defaults {

You are not using "deferred_remove", so you get the default value ("no").
Do you have any reason to change this?

You are not using "dev_loss_tmo", so you get the default value
Do you have any reason to change this?

You are not using "fast_io_fail_tmo", so you will get the default
value  (hopefully 5).
Do you have any reason to change this?

You are not using "flush_on_last_del " - any reason to change this?

    failback            immediate
    max_fds             8192
    no_path_retry       fail

I guess these are the settings recommended for your storage?

    path_checker         tur
    path_grouping_policy multibus
    path_selector        "round-robin 0"

    polling_interval     10

This means multipathd will check paths every 10-40 seconds.
You should use the default of 5, which causes multipathd to check every
5-20 seconds.

   rr_min_io   10
   rr_weight   priorities
   user_friendly_names no
}

Also, you are mixing defaults with settings that you need for your specific
devices.

You should leave the defaults unchanged, and create a device section
for your device:

de
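
For illustration, such a device section, using the EQLOGIC values posted
earlier in this digest, might look like the sketch below (verify the
values against your vendor's guide):

# defaults left untouched; device-specific settings in their own section
devices {
    device {
        vendor               "EQLOGIC"
        product              "100E-00"
        path_grouping_policy multibus
        path_selector        "round-robin 0"
        failback             immediate
    }
}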

Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-12 Thread Yaniv Kaul
On Thu, Jan 12, 2017 at 6:01 PM, Nicolas Ecarnot 
wrote:

> Hi,
>
> As we are using a very similar hardware and usage as Mark (Dell poweredge
> hosts, Dell Equallogic SAN, iSCSI, and tons of LUNs for all those VMs), I'm
> jumping into this thread.
>
> Le 12/01/2017 à 16:29, Yaniv Kaul a écrit :
>
>
> While it's a bit of a religious war on what is preferred with iSCSI -
> network level bonding (LACP) or multipathing on the iSCSI level, I'm on the
> multipathing side. The main reason is that you may end up easily using just
> one of the paths in a bond - if your policy is not set correctly on how to
> distribute connections between the physical links (remember that each
> connection sticks to a single physical link. So it really depends on the
> hash policy and even then - not so sure). With iSCSI multipathing you have
> more control - and it can also be determined by queue depth, etc.
> (In your example, if you have SRC A -> DST 1 and SRC B -> DST 1 (as you
> seem to have), both connections may end up on the same physical NIC.)
>
>
>>
>> If we reduce the number of storage domains, we reduce the number of
>> devices and therefore the number of LVM Physical volumes that appear in
>> Linux correct? At the moment each connection results in a Linux device
>> which has its own queue. We have some guests with high IO loads on their
>> device whilst others are low. All the storage domain / datastore sizing
>> guides we found seem to imply it’s a trade-off between ease of management
>> (i.e. not having millions of domains to manage), IO contention between
>> guests on a single large storage domain / datastore and possible wasted
>> space on storage domains. If you have further information on
>> recommendations, I am more than willing to change things as this problem is
>> making our environment somewhat unusable at the moment. I have hosts that I
>> can’t bring online and therefore reduced resiliency in clusters. They used
>> to work just fine but the environment has grown over the last year and we
>> also upgraded the Ovirt version from 3.6 to 4.x. We certainly had other
>> problems, but host activation wasn’t one of them and it’s a problem that’s
>> driving me mad.
>>
>
> I would say that each path has its own device (and therefore its own
> queue). So I'd argue that you may want to have (for example) 4 paths to
> each LUN or perhaps more (8?). For example, with 2 NICs, each connecting to
> two controllers, each controller having 2 NICs (so no SPOF and nice number
> of paths).
>
> Here, one key point I'm trying (to no avail) to discuss for years with
> Redhat people, and either I did not understand, or I wasn't clear
> enough, or Redhat people answered me they owned no Equallogic SAN to test
> it, is:
> My (and maybe many others) Equallogic SAN has two controllers, but is
> publishing only *ONE* virtual ip address.
>

You are completely right - you keep saying that and I keep forgetting that.
I apologize.


> On one of our other EMC SAN, publishing *TWO* ip addresses, which can be
> published in two different subnets, I fully understand the benefits and
> working of multipathing (and even in the same subnet, our oVirt setup is
> happily using multipath).
>
> But on one of our oVirt setup using the Equallogic SAN, we have no choice
> but point our hosts iSCSI interfaces to one single SAN ip, so no multipath
> here.
>
> At this point, we saw no other means than using bonding mode 1 to reach our
> SAN, which is terrible for storage experts.
>

You could, if you do it properly, have an active-active mode, no? And if
the hash policy is correct (for example, layer3+4) you might get both
slaves used. Also, multiple sessions can be achieved with iscsid.conf's
node.session.nr_sessions (though I'm not sure we don't have a bug where we
don't disconnect all sessions?).
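
For reference, a sketch of that setting in /etc/iscsi/iscsid.conf (the
value is only illustrative):

# open two iSCSI sessions per target instead of the default one
node.session.nr_sessions = 2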


>
>
> To come back to Mark's story, we are still using 3.6.5 DCs and planning to
> upgrade.
> Reading all this is making me delay this step.
>

Well, it'd be nice to get to the bottom of it, but I'm quite sure it has
relatively nothing to do with 4.0.
Y.


>
> --
> Nicolas ECARNOT
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.1.0 Second Beta Release is now available for testing

2017-01-12 Thread Nathanaël Blanchet

Hello Yaniv,

Did you find something wrong with the log files I provided?


Le 28/12/2016 à 15:25, Nathanaël Blanchet a écrit :




Le 28/12/2016 à 15:09, Yaniv Bronheim a écrit :



On Wed, Dec 28, 2016 at 3:43 PM, Nathanaël Blanchet 
 wrote:


Hello,

On my 4.1 Second Beta test platform, I am hitting this issue on the
three hosts: VDSM gaua3 command failed: 'NoneType' object has no attribute
'statistics'


Hi Nathanael, Thank you for the report

Hi Yaniv


please also send the following logs for deeper investigation:
/var/log/vdsm.log
/var/log/supervdsm.log
/var/log/messages or journalctl -xn output

Also, please specify a bit the platform you are running on and when
this issue occurs.
3 el7 hosts, 1 gluster + virt cluster, FC domain storage with the
latest 4.1 beta, independent el7 engine


Greetings,
Yaniv Bronhaim.


Le 21/12/2016 à 16:12, Sandro Bonazzola a écrit :

The oVirt Project is pleased to announce the availability of the
Second
Beta Release of oVirt 4.1.0 for testing, as of December 21st, 2016

This is pre-release software. Please take a look at our
community page[1]
to learn how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.

This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* Fedora 24 (tech preview)

See the release notes draft [3] for installation / upgrade
instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Live iso is already available
- oVirt Node NG iso will be available soon
- Hosted Engine appliance will be available soon
- the above delay is due to Jenkins issues building node and
appliance; this should be fixed by tomorrow.

An initial release management page including planned schedule is
also
available[4]


Additional Resources:
* Read more about the oVirt 4.1.0 beta release highlights:
http://www.ovirt.org/release/4.1.0/

* Get more oVirt Project updates on Twitter:
https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/

[2]
https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt

[3] http://www.ovirt.org/release/4.1.0/

[4]

http://www.ovirt.org/develop/release-management/releases/4.1/release-management/




-- 
Sandro Bonazzola

Better technology. Faster innovation. Powered by community
collaboration.
See how it works at redhat.com 







--
*Yaniv Bronhaim.*


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Specifying the datastore on a VmPool creation via Python-SDK?

2017-01-12 Thread Nicolás

Done [1].

Thank you.

  [1]: https://bugzilla.redhat.com/show_bug.cgi?id=1412768

El 12/01/17 a las 16:16, Juan Hernández escribió:

On 01/11/2017 03:53 PM, nico...@devels.es wrote:

Any ideas to this?


My understanding is that there is no way to do this with the API currently.
If you need it, please open an RFE. Meanwhile, I'd suggest copying the
templates to the storage domain that you want to use, and then using the
copied templates. I don't see any other alternative.


El 2017-01-10 08:43, nico...@devels.es escribió:

Hi,

We've several templates that have their disks replicated (copied) on
all our Storage Domains. Problem is that we create our VmPools using
PythonSDK, and it usually creates the pool on one of our Storage
Domains that has a small amount of free disk space.

Some of the Data Stores have plenty of space and when creating the
VmPool, we'd like to be able to specify on which of these Storage
Domains to create the VmPool. So far I see no parameter on the
params.VmPool class to do that. I've tried using an Action, but the
request is not correct:

   action =
params.Action(storage_domain=api.storagedomains.get(name='...'))

   pool = params.VmPool(name='testlarge',
cluster=api.clusters.get(name='...'),
template=api.templates.get(name='Blank'), max_user_vms=1, size=1,
type_='manual', actions=action)
   pool = params.VmPool(name='testlarge',
cluster=api.clusters.get(name='...'),
template=api.templates.get(name='Blank'), max_user_vms=1, size=1,
type_='manual', actions=[action])

   api.vmpools.add(pool)

Both tries fail.

This is Python-SDK 3.x.

Is there a way to specify the destination Storage Domain onto where to
create the VmPool?

Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-12 Thread Mark Greenall
Hi Yaniv,

>> 1. There is no point in so many connections.
>> 2. Certainly not the same portal - you really should have more.
>> 3. Note that some go via bond1 - and some via 'default' interface. Is that 
>> intended?
>> 4. Your multipath.conf is using rr_min_io - where it should use rr_min_io_rq 
>> most likely.

We have a single 68TB Equallogic unit with 24 disks. Each Ovirt host has 2
HBAs on the iSCSI network. We use Ovirt and the Cisco switches to create an
LACP group with those 2 HBAs. I have always assumed that the two connections
are one each from the HBAs (i.e. I should have two paths and two connections to
each target).

If we reduce the number of storage domains, we reduce the number of devices and 
therefore the number of LVM Physical volumes that appear in Linux correct? At 
the moment each connection results in a Linux device which has its own queue. 
We have some guests with high IO loads on their device whilst others are low. 
All the storage domain / datastore sizing guides we found seem to imply it’s a 
trade-off between ease of management (i.e. not having millions of domains to 
manage), IO contention between guests on a single large storage domain / 
datastore and possible wasted space on storage domains. If you have further 
information on recommendations, I am more than willing to change things as this 
problem is making our environment somewhat unusable at the moment. I have hosts 
that I can’t bring online and therefore reduced resiliency in clusters. They 
used to work just fine but the environment has grown over the last year and we 
also upgraded the Ovirt version from 3.6 to 4.x. We certainly had other 
problems, but host activation wasn’t one of them and it’s a problem that’s 
driving me mad.

Thanks for the pointer on rr_min_io – I see that was for an older kernel. We 
had that set from a Dell guide. I’ve now removed that setting as it seems the 
default value has changed now anyway.

>> Unrelated, your engine.log is quite flooded with:
>> 2017-01-11 15:07:46,085 WARN  
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder] 
>> (DefaultQuartzScheduler9) [31a71bf5] Invalid or unknown guest architecture 
>> type '' received from guest agent
>>
>> Any idea what kind of guest you are running?

Do you have any idea which guest that's coming from? We pretty much
exclusively have Linux (various CentOS versions) and Windows (various versions)
as the guest OS.

Thanks again,
Mark
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-12 Thread Mark Greenall
>> I would say that each path has its own device (and therefore its own queue). 
>> So I'd argue that you may want to have (for example) 4 paths to each LUN or 
>> perhaps more (8?). For example, with 2 NICs, each connecting to two 
>> controllers, each controller having 2 NICs (so no SPOF and nice number of 
>> paths).

Totally get where you are coming from with paths to LUNs and using multipath.
We do use that with the Dell Compellent storage we have. It has multiple active
controllers, each with an IP address in a different subnet. Unfortunately, the
Equallogic does NOT have two active controllers. It has a single active
controller and a single IP that migrates between the controllers when either
one is active. If I don't use LACP I can't use both HBAs on the host with
Ovirt, as it doesn't support Dell's host integration tool (HIT) software (or you
could argue Dell doesn't support Ovirt). So, instead of being able to have a
large number of paths to devices I can either have one active path, or LACP and
get two. As two is the most I can have to a LUN with the infrastructure we
have, we spread the IO by increasing the number of targets (storage domains).

>> Depending on your storage, you may want to use rr_min_io_rq = 1 for latency 
>> purposes.

Looking at the man page for multipath.conf it looks like the default is now 1, 
where it was 1000 for rr_min_io. For now I’ve just removed it from our config 
file and we’ll take the default.

I’m still seeing the same problem with the couple of changes made (lvmetad and 
multipath). I’m really not very good at understanding exactly what is going on 
in the Ovirt logs. Does it provide any clues as to why it brings the host up 
and then takes it offline again? What are the barrage of lvm processes trying 
to achieve and why do they apparently fail (as it keeps on trying to run them)? 
As mentioned, throughout all this I see no multipath errors (all paths 
available), I see no iSCSI connection errors to the Equallogic. It just seems 
to be Ovirt that thinks the storage is unavailable for some reason?

Thanks,
Mark
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Max number of api calls per user?

2017-01-12 Thread Juan Hernández
On 01/11/2017 03:45 PM, Grundmann, Christian wrote:
> @ What does 'simultaneously' mean exactly? Are you sending the requests in
> parallel from different threads? Or from different processes? Or just
> sending them in a loop?
> 
> I call the python script multiple times for different VMs from different
> shells.
> 
> @ The /var/log/ovirt-engine/server.log, /var/log/ovirt-engine/engine.log and
> /var/log/httpd/ssl_access_log files can help determine what is happening.
> Can you check and maybe share the relevant part of those files?
> 
> Script started @14:57:34 
> Error @14:57:35
> 
> /var/log/ovirt-engine/server.log
> Nothing around that time
> 
> /var/log/ovirt-engine/engine.log:
> 

I think this error can be caused by the following bug:

  ovirt-shell: sporadic HTTP 500 errors
  https://bugzilla.redhat.com/1396833

It will be fixed in release 4.0.7.

> 2017-01-11 14:57:34,015 INFO
> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-109)
> [642e44ba] User admin@internal successfully logged in with scopes:
> ovirt-app-api ovirt-ext=token-info:authz-search
> ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate
> 2017-01-11 14:57:34,139 ERROR [org.ovirt.engine.api.restapi.util.LinkHelper]
> (default task-41) [] Can't find relative path for class
> "org.ovirt.engine.api.resource.VmDisksResource", will return null
> 2017-01-11 14:57:34,139 ERROR [org.ovirt.engine.api.restapi.util.LinkHelper]
> (default task-41) [] Can't find relative path for class
> "org.ovirt.engine.api.resource.VmDisksResource", will return null
> 2017-01-11 14:57:34,149 INFO
> [org.ovirt.engine.core.bll.aaa.LogoutSessionCommand] (default task-41)
> [1aa17f11] Running command: LogoutSessionCommand internal: false.
> 2017-01-11 14:57:34,238 INFO
> [org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default task-97)
> [601ac7b6] User admin@internal successfully logged out
> 2017-01-11 14:57:34,328 INFO
> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-46)
> [10ffdf24] Running command: CreateUserSessionCommand internal: false.
> 2017-01-11 14:57:34,339 INFO
> [org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] (default
> task-87) [5f84b0d0] Running command: TerminateSessionsForTokenCommand
> internal: true.
> 2017-01-11 14:57:34,340 INFO
> [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (default task-87)
> [5f84b0d0] Not removing session
> 'y71cMky/m5Du0v4Hk9yWL3ppHW+kN2GXg07SajV6RQgOxC7hn6kFzpFwCu5iwEpiYq6EkBSEOgi
> w4RvsYG6ljA==', session has running commands for user 'admin@internal'.
> 2017-01-11 14:57:34,379 INFO
> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-113) []
> User admin@internal successfully logged in with scopes: ovirt-app-api
> ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search
> ovirt-ext=token-info:validate
> 2017-01-11 14:57:34,390 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-41) [1aa17f11] Correlation ID: 1aa17f11, Call Stack: null,
> Custom Event ID: -1, Message: User admin@internal logged out.
> 2017-01-11 14:57:34,405 INFO
> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-93) []
> User admin@internal successfully logged in with scopes: ovirt-app-api
> ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search
> ovirt-ext=token-info:validate
> 2017-01-11 14:57:34,409 INFO
> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-86)
> [779c3119] Running command: CreateUserSessionCommand internal: false.
> 2017-01-11 14:57:34,414 INFO
> [org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default task-127)
> [] User admin@internal successfully logged out
> 2017-01-11 14:57:34,423 INFO
> [org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] (default
> task-5) [70e23ef9] Running command: TerminateSessionsForTokenCommand
> internal: true.
> 2017-01-11 14:57:34,433 INFO
> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-90)
> [1ddb1bec] Running command: CreateUserSessionCommand internal: false.
> 2017-01-11 14:57:34,452 INFO
> [org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default task-117)
> [] User admin@internal successfully logged out
> 2017-01-11 14:57:34,462 INFO
> [org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] (default
> task-80) [4634514] Running command: TerminateSessionsForTokenCommand
> internal: true.
> 2017-01-11 14:57:34,493 INFO
> [org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default task-128)
> [] User admin@internal successfully logged out
> 2017-01-11 14:57:34,504 INFO
> [org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] (default
> task-88) [3ff7029c] Running command: TerminateSessionsForTokenCommand
> internal: true.
> 2017-01-11 14:57:34,509 ERROR [org.ovirt.engine.api.restapi.util.LinkHelper]
> (default task-90) [] Can't find relative path for class
> "org.ovirt.engine.api.resource.VmDisksR

Re: [ovirt-users] Specifying the datastore on a VmPool creation via Python-SDK?

2017-01-12 Thread Juan Hernández
On 01/11/2017 03:53 PM, nico...@devels.es wrote:
> Any ideas to this?
> 

My understanding is that there is no way to do this with the API currently.
If you need it, please open an RFE. Meanwhile, I'd suggest copying the
templates to the storage domain that you want to use, and then using the
copied templates. I don't see any other alternative.

> El 2017-01-10 08:43, nico...@devels.es escribió:
>> Hi,
>>
>> We've several templates that have their disks replicated (copied) on
>> all our Storage Domains. Problem is that we create our VmPools using
>> PythonSDK, and it usually creates the pool on one of our Storage
>> Domains that has a small amount of free disk space.
>>
>> Some of the Data Stores have plenty of space and when creating the
>> VmPool, we'd like to be able to specify on which of these Storage
>> Domains to create the VmPool. So far I see no parameter on the
>> params.VmPool class to do that. I've tried using an Action, but the
>> request is not correct:
>>
>>   action =
>> params.Action(storage_domain=api.storagedomains.get(name='...'))
>>
>>   pool = params.VmPool(name='testlarge',
>> cluster=api.clusters.get(name='...'),
>> template=api.templates.get(name='Blank'), max_user_vms=1, size=1,
>> type_='manual', actions=action)
>>   pool = params.VmPool(name='testlarge',
>> cluster=api.clusters.get(name='...'),
>> template=api.templates.get(name='Blank'), max_user_vms=1, size=1,
>> type_='manual', actions=[action])
>>
>>   api.vmpools.add(pool)
>>
>> Both tries fail.
>>
>> This is Python-SDK 3.x.
>>
>> Is there a way to specify the destination Storage Domain onto where to
>> create the VmPool?
>>
>> Thanks
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-12 Thread Nicolas Ecarnot

Hi,

As we are using a very similar hardware and usage as Mark (Dell 
poweredge hosts, Dell Equallogic SAN, iSCSI, and tons of LUNs for all 
those VMs), I'm jumping into this thread.


Le 12/01/2017 à 16:29, Yaniv Kaul a écrit :


While it's a bit of a religious war on what is preferred with iSCSI - 
network level bonding (LACP) or multipathing on the iSCSI level, I'm 
on the multipathing side. The main reason is that you may end up 
easily using just one of the paths in a bond - if your policy is not 
set correctly on how to distribute connections between the physical 
links (remember that each connection sticks to a single physical link. 
So it really depends on the hash policy and even then - not so sure). 
With iSCSI multipathing you have more control - and it can also be 
determined by queue depth, etc.
(In your example, if you have SRC A -> DST 1 and SRC B -> DST 1 (as 
you seem to have), both connections may end up on the same physical NIC.)


If we reduce the number of storage domains, we reduce the number
of devices and therefore the number of LVM Physical volumes that
appear in Linux correct? At the moment each connection results in
a Linux device which has its own queue. We have some guests with
high IO loads on their device whilst others are low. All the
storage domain / datastore sizing guides we found seem to imply
it’s a trade-off between ease of management (i.e. not having
millions of domains to manage), IO contention between guests on a
single large storage domain / datastore and possible wasted space
on storage domains. If you have further information on
recommendations, I am more than willing to change things as this
problem is making our environment somewhat unusable at the moment.
I have hosts that I can’t bring online and therefore reduced
resiliency in clusters. They used to work just fine but the
environment has grown over the last year and we also upgraded the
Ovirt version from 3.6 to 4.x. We certainly had other problems,
but host activation wasn’t one of them and it’s a problem that’s
driving me mad.


I would say that each path has its own device (and therefore its own 
queue). So I'd argue that you may want to have (for example) 4 paths 
to each LUN or perhaps more (8?). For example, with 2 NICs, each 
connecting to two controllers, each controller having 2 NICs (so no 
SPOF and nice number of paths).
Here, one key point I'm trying (to no avail) to discuss for years with
Redhat people, and either I did not understand, or I wasn't clear
enough, or Redhat people answered me they owned no Equallogic SAN to
test it, is:
My (and maybe many others) Equallogic SAN has two controllers, but is 
publishing only *ONE* virtual ip address.
On one of our other EMC SAN, publishing *TWO* ip addresses, which can be 
published in two different subnets, I fully understand the benefits and 
working of multipathing (and even in the same subnet, our oVirt setup is 
happily using multipath).


But on one of our oVirt setup using the Equallogic SAN, we have no 
choice but point our hosts iSCSI interfaces to one single SAN ip, so no 
multipath here.


At this point, we saw no other means than using bonding mode 1 to reach 
our SAN, which is terrible for storage experts.



To come back to Mark's story, we are still using 3.6.5 DCs and planning 
to upgrade.

Reading all this is making me delay this step.

--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] PM proxy

2017-01-12 Thread Slava Bendersky
Hello Everyone, 
I need help with this error. What could be missing or misconfigured?

2017-01-12 05:17:31,444 ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator] 
(default task-38) [] Can not run fence action on host 'hosted_engine_1', no 
suitable proxy host was found 

I tried from shell on host and it works fine. 
Right now settings default dc, cluster from PM proxy definition. 
Slava. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] repository issues causing updates to fail

2017-01-12 Thread cmc
Hi,

engine version: 4.0.5.5-1.el7.centos

I have an oVirt cluster with 2 hosts and my engine node is reporting
failed updates on the hosts. I checked the logs in
/var/log/ovirt-engine/host-deploy/ and found that it reports
dependency failures.

-8<-

2017-01-12 10:53:16 ERROR
otopi.plugins.ovirt_host_mgmt.packages.update update.error:102 Yum:
[u'10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64 requires usbredir >= 0.7.1',
u'10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64 requires seavgabios-bin >=
1.9.1-4', u'10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64 requires
ipxe-roms-qemu >= 20160127-4']
2017-01-12 10:53:16 INFO otopi.plugins.ovirt_host_mgmt.packages.update
update.info:98 Yum: Performing yum transaction rollback
2017-01-12 10:53:16 DEBUG otopi.context context._executeMethod:142
method exception
Traceback (most recent call last):
  File "/tmp/ovirt-tA0ldayV0j/pythonlib/otopi/context.py", line 132,
in _executeMethod
method['method']()
  File "/tmp/ovirt-tA0ldayV0j/otopi-plugins/ovirt-host-mgmt/packages/update.py",
line 115, in _packagesCheck
if myum.buildTransaction():
  File "/tmp/ovirt-tA0ldayV0j/pythonlib/otopi/miniyum.py", line 919,
in buildTransaction
raise yum.Errors.YumBaseError(msg)
YumBaseError: [u'10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64 requires
usbredir >= 0.7.1', u'10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64 requires
seavgabios-bin >= 1.9.1-4', u'10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64
requires ipxe-roms-qemu >= 20160127-4']
2017-01-12 10:53:16 ERROR otopi.context context._executeMethod:151
Failed to execute stage 'Package installation':
[u'10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64 requires usbredir >= 0.7.1',
u'10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64 requires seavgabios-bin >=
1.9.1-4', u'10:qemu-kvm-ev-2.6.0-27.1.el7.x86_64 requires
ipxe-roms-qemu >= 20160127-4']
2017-01-12 10:53:16 DEBUG otopi.transaction transaction.abort:119
aborting 'Yum Transaction'
2017-01-12 10:53:16 INFO otopi.plugins.otopi.packagers.yumpackager
yumpackager.info:80 Yum Performing yum transaction rollback
2017-01-12 10:53:16 DEBUG
otopi.plugins.ovirt_host_mgmt.packages.update update.verbose:94 Yum:
Repository virtio-win-stable is listed more than once in the
configuration
2017-01-12 10:53:16 DEBUG otopi.context context.dumpEnvironment:760
ENVIRONMENT DUMP - BEGIN
2017-01-12 10:53:16 DEBUG otopi.context context.dumpEnvironment:770
ENV BASE/error=bool:'True'

-

I've searched various repos for usbredir 0.7.1, ipxe-roms-qemu 20160127-4
and seavgabios-bin 1.9.1-4, and these exist in 7.3 but not 7.2 (I am
running 7.2). Should I just ignore these messages then?
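
For what it's worth, per-repo availability can be checked with yum (a
sketch):

# list every available version of the three packages across enabled repos
yum --showduplicates list available usbredir seavgabios-bin ipxe-roms-qemu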

Thanks,

Cam
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-12 Thread Yaniv Kaul
On Thu, Jan 12, 2017 at 5:01 PM, Mark Greenall 
wrote:

> Hi Yaniv,
>
>
>
> >> 1. There is no point in so many connections.
>
> >> 2. Certainly not the same portal - you really should have more.
>
> >> 3. Note that some go via bond1 - and some via 'default' interface. Is
> that intended?
>
> >> 4. Your multipath.conf is using rr_min_io - where it should
> use rr_min_io_rq most likely.
>
>
>
> We have a single 68TB Equallogic unit with 24 disks. Each Ovirt host has 2
> HBAs on the iSCSI network. We use Ovirt and the Cisco switches to create
> an LACP group with those 2 HBAs. I have always assumed that the two
> connections are one each from the HBAs (i.e. I should have two paths and
> two connections to each target).
>

While it's a bit of a religious war on what is preferred with iSCSI -
network level bonding (LACP) or multipathing on the iSCSI level, I'm on the
multipathing side. The main reason is that you may end up easily using just
one of the paths in a bond - if your policy is not set correctly on how to
distribute connections between the physical links (remember that each
connection sticks to a single physical link. So it really depends on the
hash policy and even then - not so sure). With iSCSI multipathing you have
more control - and it can also be determined by queue depth, etc.
(In your example, if you have SRC A -> DST 1 and SRC B -> DST 1 (as you
seem to have), both connections may end up on the same physical NIC.)


>
> If we reduce the number of storage domains, we reduce the number of
> devices and therefore the number of LVM Physical volumes that appear in
> Linux correct? At the moment each connection results in a Linux device
> which has its own queue. We have some guests with high IO loads on their
> device whilst others are low. All the storage domain / datastore sizing
> guides we found seem to imply it’s a trade-off between ease of management
> (i.e. not having millions of domains to manage), IO contention between
> guests on a single large storage domain / datastore and possible wasted
> space on storage domains. If you have further information on
> recommendations, I am more than willing to change things as this problem is
> making our environment somewhat unusable at the moment. I have hosts that I
> can’t bring online and therefore reduced resiliency in clusters. They used
> to work just fine but the environment has grown over the last year and we
> also upgraded the Ovirt version from 3.6 to 4.x. We certainly had other
> problems, but host activation wasn’t one of them and it’s a problem that’s
> driving me mad.
>

I would say that each path has its own device (and therefore its own
queue). So I'd argue that you may want to have (for example) 4 paths to
each LUN or perhaps more (8?). For example, with 2 NICs, each connecting to
two controllers, each controller having 2 NICs (so no SPOF and nice number
of paths).

BTW, perhaps some guests need direct LUN?


>
>
> Thanks for the pointer on rr_min_io – I see that was for an older kernel.
> We had that set from a Dell guide. I’ve now removed that setting as it
> seems the default value has changed now anyway.
>

Depending on your storage, you may want to use rr_min_io_rq = 1 for latency
purposes.


>
>
> >> Unrelated, your engine.log is quite flooded with:
>
> >> 2017-01-11 15:07:46,085 WARN  [org.ovirt.engine.core.
> vdsbroker.vdsbroker.VdsBrokerObjectsBuilder] (DefaultQuartzScheduler9)
> [31a71bf5] Invalid or unknown guest architecture type '' received from
> guest agent
>
> >>
>
> >> Any idea what kind of guest you are running?
>
>
>
> Do you have any idea what the guest name is that’s coming from? We pretty
> much exclusively have Linux (CentOS various versions) and Windows (various
> versions) as the guest OS.
>

Vinzenz - any idea?
Y.


>
>
> Thanks again,
>
> Mark
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to pass unattend file to a sysprep Windows 2012r2 template

2017-01-12 Thread Tomáš Golembiovský
Check the information in this thread:

http://lists.ovirt.org/pipermail/users/2016-December/078251.html

It has the information on how to use 'Run Once' to do the initial run.
Notably, don't forget to add the sysprep floppy!
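
For the AD-join part specifically, a minimal sketch of the relevant section
of an unattend file (all domain, account and password values below are
placeholders, and this snippet is untested against your template):

<!-- specialize pass: join the machine to an AD domain (placeholder values) -->
<settings pass="specialize">
  <component name="Microsoft-Windows-UnattendedJoin"
             processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35"
             language="neutral" versionScope="nonSxS">
    <Identification>
      <Credentials>
        <Domain>example.local</Domain>
        <Username>join-account</Username>
        <Password>join-password</Password>
      </Credentials>
      <JoinDomain>example.local</JoinDomain>
    </Identification>
  </component>
</settings>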


Best regards,

Tomas

On Thu, 12 Jan 2017 11:14:30 +
Denis Pithon  wrote:

> Hello oVirt users,
> 
> We currently run about 300 VMs with oVirt, both Linux and Windows servers.
> Cloud init works great for Linux VMs and we would like to do the same for
> Windows. We have a 2012r2 sysprep template, but we have not managed to
> configure the initial run such that the new Windows VM boots *and joins our
> AD domain* afterwards. Does someone have any clues/information about this
> kind of operation? How do we specify the unattend file in the 'New Virtual
> Machine' dialog? What kind of data (and format) is required in the 'sysprep'
> field?
> 
> Best wishes to everyone!
> Denis
> 
> PS: We run oVirt Engine 4.0.5


-- 
Tomáš Golembiovský 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to delete VM disk

2017-01-12 Thread cmc
Hi Alexander,

That is correct. When I click remove disk, it gives me a remove disk
dialogue, and when I click 'OK' (whether I tick 'remove permanently'
or not) it throws an exception.

Thanks,

Cam

On Thu, Jan 12, 2017 at 1:53 PM, Alexander Wels  wrote:
> On Friday, December 30, 2016 11:45:20 AM EST cmc wrote:
>> Hi Alexander,
>>
>> Thanks. I've attached the log. Relevant error is the last entry.
>>
>> Kind regards,
>>
>> Cam
>>
>
> Just to be clear on the flow when this occurs, you do the following on a VM
> that is shut down:
>
> 1. Select the VM in the VM grid.
> 2. Click edit and the edit VM dialog pops up.
> 3. In the General tab you scroll down a little until you see the instance
> Images widget that has the disk listed. You have 3 options:
>   - Edit (edit disk)
>   - + (add new row, that will give you the option to attach/create a disk)
>   - - (remove disk)
> You click - (remove disk)?
> 4. You get the exception?
>
> Alexander
>
>> On Wed, Dec 14, 2016 at 3:12 PM, Alexander Wels  wrote:
>> > On Wednesday, December 14, 2016 11:51:49 AM EST cmc wrote:
>> >> Having some difficulty in getting the permutation string currently, as
>> >> I can't get a cache.html file to appear in the Network section of the
>> >> debugger, and both browsers I'm using (Chrome and FIrefox) do not
>> >> print the permutation ID at the bottom of the console output. I'll see
>> >> if I can get some more detail on how this works from some searching
>> >
>> > I improved that, I just haven't updated the wiki. As soon as you install
>> > the symbol maps and recreate the issue, the UI.log should
>> > have the unobfuscated stack trace, so you don't have to do all that stuff
>> > manually anymore.
>> >
>> >> On Wed, Dec 14, 2016 at 8:21 AM, Fred Rolland 
> wrote:
>> >> > The UI log is obfuscated.
>> >> > Can you please follow instruction on [1] and reproduce so that we get a
>> >> > human readable log.
>> >> >
>> >> > Thanks
>> >> >
>> >> > [1]
>> >> > http://www.ovirt.org/develop/developer-guide/engine/engine-debug-obfuscated-ui/
>> >> >
>> >> > On Tue, Dec 13, 2016 at 7:42 PM, cmc  wrote:
>> >> >> Sorry, forgot the version: 4.0.5.5-1.el7.centos
>> >> >>
>> >> >> On Tue, Dec 13, 2016 at 5:37 PM, cmc  wrote:
>> >> >> > On the VM in the list of VMs, by right-clicking on it. It then gives
>> >> >> > you a pop up window to edit the VM, starting in the 'General'
>> >> >> > section
>> >> >> > (much as when you create a new one)
>> >> >> >
>> >> >> > Thanks,
>> >> >> >
>> >> >> > Cam
>> >> >> >
>> >> >> > On Tue, Dec 13, 2016 at 5:04 PM, Fred Rolland 
>> >> >> >
>> >> >> > wrote:
>> >> >> >> Hi,
>> >> >> >>
>> >> >> >> Which version are you using ?
>> >> >> >> When you mention "Edit", on which entity is it performed.?
>> >> >> >>
>> >> >> >> The disks are currently not part of the edit VM window.
>> >> >> >>
>> >> >> >> Thanks,
>> >> >> >> Freddy
>> >> >> >>
>> >> >> >> On Tue, Dec 13, 2016 at 6:06 PM, cmc  wrote:
>> >> >> >>> This VM wasn't running.
>> >> >> >>>
>> >> >> >>> On Tue, Dec 13, 2016 at 4:02 PM, Elad Ben Aharon
>> >> >> >>> 
>> >> >> >>>
>> >> >> >>> wrote:
>> >> >> >>> > In general, in order to delete a disk while it is attached to a
>> >> >> >>> > running
>> >> >> >>> > VM,
>> >> >> >>> > the disk has to be deactivated (hotunplugged) first so it won't
>> >> >> >>> > be
>> >> >> >>> > accessible for read and write from the VM.
>> >> >> >>> > In the 'edit' VM prompt there is no option to deactivate the
>> >> >> >>> > disk,
>> >> >> >>> > it
>> >> >> >>> > should
>> >> >> >>> > be done from the disks subtab under the virtual machine.
>> >> >> >>> >
>> >> >> >>> > On Tue, Dec 13, 2016 at 5:33 PM, cmc  wrote:
>> >> >> >>> >> Actually, I just tried to create a new disk via the 'Edit' menu
>> >> >> >>> >> once
>> >> >> >>> >> I'd deleted it from the 'Disks' tab, and it threw an exception.
>> >> >> >>> >>
>> >> >> >>> >> Attached is the console log.
>> >> >> >>> >>
>> >> >> >>> >> On Tue, Dec 13, 2016 at 3:24 PM, cmc  wrote:
>> >> >> >>> >> > Hi Elad,
>> >> >> >>> >> >
>> >> >> >>> >> > I was trying to delete the disk via the 'edit' menu, but
>> >> >> >>> >> > noticed
>> >> >> >>> >> > just
>> >> >> >>> >> > now that there was a 'disks' tab when the machine was
>> >> >> >>> >> > highlighted.
>> >> >> >>> >> > This has a 'activate/deactivate' function, and once
>> >> >> >>> >> > deactivated,
>> >> >> >>> >> > was
>> >> >> >>> >> > able to remove it without error.
>> >> >> >>> >> >
>> >> >> >>> >> > It does offer the option of deleting the disk when right
>> >> >> >>> >> > clicking
>> >> >> >>> >> > on
>> >> >> >>> >> > the VM and choosing 'edit', however, there is no 'deactivate'
>> >> >> >>> >> > option.
>> >> >> >>> >> > Not sure if this is by design (so that users should look
>> >> >> >>> >> > elsewhere).
>> >> >> >>> >> > I
>> >> >> >>> >> > can still try to run the delete from the 'Edit' page, and
>> >> >> >>> >> > capture
>> >> >> >>> >> > browser console output. Otherwise, apologies for t

Re: [ovirt-users] Unable to delete VM disk

2017-01-12 Thread Alexander Wels
On Friday, December 30, 2016 11:45:20 AM EST cmc wrote:
> Hi Alexander,
> 
> Thanks. I've attached the log. Relevant error is the last entry.
> 
> Kind regards,
> 
> Cam
> 

Just to be clear on the flow when this occurs, you do the following on a VM 
that is shut down:

1. Select the VM in the VM grid.
2. Click edit and the edit VM dialog pops up.
3. In the General tab you scroll down a little until you see the instance 
Images widget that has the disk listed. You have 3 options:
  - Edit (edit disk)
  - + (add new row, that will give you the option to attach/create a disk)
  - - (remove disk)
You click - (remove disk)?
4. You get the exception?

Alexander

> On Wed, Dec 14, 2016 at 3:12 PM, Alexander Wels  wrote:
> > On Wednesday, December 14, 2016 11:51:49 AM EST cmc wrote:
> >> Having some difficulty in getting the permutation string currently, as
> >> I can't get a cache.html file to appear in the Network section of the
> >> debugger, and both browsers I'm using (Chrome and FIrefox) do not
> >> print the permutation ID at the bottom of the console output. I'll see
> >> if I can get some more detail on how this works from some searching
> > 
> > I improved that, I just haven't updated the wiki. As soon as you install
> > the symbol maps and recreate the issue, the UI.log should
> > have the unobfuscated stack trace, so you don't have to do all that stuff
> > manually anymore.
> > 
> >> On Wed, Dec 14, 2016 at 8:21 AM, Fred Rolland  
wrote:
> >> > The UI log is obfuscated.
> >> > Can you please follow instruction on [1] and reproduce so that we get a
> >> > human readable log.
> >> > 
> >> > Thanks
> >> > 
> >> > [1]
> >> > http://www.ovirt.org/develop/developer-guide/engine/engine-debug-obfuscated-ui/
> >> > 
> >> > On Tue, Dec 13, 2016 at 7:42 PM, cmc  wrote:
> >> >> Sorry, forgot the version: 4.0.5.5-1.el7.centos
> >> >> 
> >> >> On Tue, Dec 13, 2016 at 5:37 PM, cmc  wrote:
> >> >> > On the VM in the list of VMs, by right-clicking on it. It then gives
> >> >> > you a pop up window to edit the VM, starting in the 'General'
> >> >> > section
> >> >> > (much as when you create a new one)
> >> >> > 
> >> >> > Thanks,
> >> >> > 
> >> >> > Cam
> >> >> > 
> >> >> > On Tue, Dec 13, 2016 at 5:04 PM, Fred Rolland 
> >> >> > 
> >> >> > wrote:
> >> >> >> Hi,
> >> >> >> 
> >> >> >> Which version are you using ?
> >> >> >> When you mention "Edit", on which entity is it performed.?
> >> >> >> 
> >> >> >> The disks are currently not part of the edit VM window.
> >> >> >> 
> >> >> >> Thanks,
> >> >> >> Freddy
> >> >> >> 
> >> >> >> On Tue, Dec 13, 2016 at 6:06 PM, cmc  wrote:
> >> >> >>> This VM wasn't running.
> >> >> >>> 
> >> >> >>> On Tue, Dec 13, 2016 at 4:02 PM, Elad Ben Aharon
> >> >> >>> 
> >> >> >>> 
> >> >> >>> wrote:
> >> >> >>> > In general, in order to delete a disk while it is attached to a
> >> >> >>> > running
> >> >> >>> > VM,
> >> >> >>> > the disk has to be deactivated (hotunplugged) first so it won't
> >> >> >>> > be
> >> >> >>> > accessible for read and write from the VM.
> >> >> >>> > In the 'edit' VM prompt there is no option to deactivate the
> >> >> >>> > disk,
> >> >> >>> > it
> >> >> >>> > should
> >> >> >>> > be done from the disks subtab under the virtual machine.
> >> >> >>> > 
> >> >> >>> > On Tue, Dec 13, 2016 at 5:33 PM, cmc  wrote:
> >> >> >>> >> Actually, I just tried to create a new disk via the 'Edit' menu
> >> >> >>> >> once
> >> >> >>> >> I'd deleted it from the 'Disks' tab, and it threw an exception.
> >> >> >>> >> 
> >> >> >>> >> Attached is the console log.
> >> >> >>> >> 
> >> >> >>> >> On Tue, Dec 13, 2016 at 3:24 PM, cmc  wrote:
> >> >> >>> >> > Hi Elad,
> >> >> >>> >> > 
> >> >> >>> >> > I was trying to delete the disk via the 'edit' menu, but
> >> >> >>> >> > noticed
> >> >> >>> >> > just
> >> >> >>> >> > now that there was a 'disks' tab when the machine was
> >> >> >>> >> > highlighted.
> >> >> >>> >> > This has an 'activate/deactivate' function, and once
> >> >> >>> >> > deactivated,
> >> >> >>> >> > was
> >> >> >>> >> > able to remove it without error.
> >> >> >>> >> > 
> >> >> >>> >> > It does offer the option of deleting the disk when right
> >> >> >>> >> > clicking
> >> >> >>> >> > on
> >> >> >>> >> > the VM and choosing 'edit', however, there is no 'deactivate'
> >> >> >>> >> > option.
> >> >> >>> >> > Not sure if this is by design (so that users should look
> >> >> >>> >> > elsewhere).
> >> >> >>> >> > I
> >> >> >>> >> > can still try to run the delete from the 'Edit' page, and
> >> >> >>> >> > capture
> >> >> >>> >> > browser console output. Otherwise, apologies for troubling
> >> >> >>> >> > you
> >> >> >>> >> > with
> >> >> >>> >> > this.
> >> >> >>> >> > 
> >> >> >>> >> > Kind regards,
> >> >> >>> >> > 
> >> >> >>> >> > Cam
> >> >> >>> >> > 
> >> >> >>> >> > On Tue, Dec 13, 2016 at 12:27 PM, Elad Ben Aharon
> >> >> >>> >> > 
> >> >> >>> >> > 
> >> >> >>> >> > wrote:
> >> >> >>> >> >> There is no indication for image deletion in engine.log

Re: [ovirt-users] Ovirt host activation and lvm looping with high CPU load trying to mount iSCSI storage

2017-01-12 Thread Yaniv Kaul
On Thu, Jan 12, 2017 at 12:02 PM, Mark Greenall 
wrote:

> Firstly, thanks @Yaniv and thanks @Nir for your responses.
>
> @Yaniv, in answer to this:
>
> >> Why do you have 1 SD per VM?
>
> It's a combination of performance and ease of management. We ran some IO
> tests with various configurations and settled on this one for a balance of
> reduced IO contention and ease of management. If there is a better
> recommended way of handling these, then I'm all ears. If you believe
> having a large number of storage domains adds to the problem, then we can
> also review the setup.
>

I don't see how it can improve performance. Having several iSCSI
connections to a (single!) target may help, but certainly not too much.
Just from looking at your /var/log/messages:
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection1:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-37a238a33-4e21185c70857594-uk1-amd-cluster2-template-dstore01,
portal: 10.100.214.77,3260] through [iface: bond1.10] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection2:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-37a238a33-4e21185c70857594-uk1-amd-cluster2-template-dstore01,
portal: 10.100.214.77,3260] through [iface: default] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection3:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-192238a33-1f71185c70b57598-cuuk1ionhurap02-dstore01,
portal: 10.100.214.77,3260] through [iface: bond1.10] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection4:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-192238a33-1f71185c70b57598-cuuk1ionhurap02-dstore01,
portal: 10.100.214.77,3260] through [iface: default] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection5:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-223238a33-7301185c70e57598-cuuk1ionhurdb02-dstore01,
portal: 10.100.214.77,3260] through [iface: bond1.10] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection6:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-223238a33-7301185c70e57598-cuuk1ionhurdb02-dstore01,
portal: 10.100.214.77,3260] through [iface: default] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection7:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-212238a33-2a61185c719576bd-lnd-ion-anv-test-lin-64-dstore01,
portal: 10.100.214.77,3260] through [iface: bond1.10] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection8:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-212238a33-2a61185c719576bd-lnd-ion-anv-test-lin-64-dstore01,
portal: 10.100.214.77,3260] through [iface: default] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection9:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-ad4238a33-1b31185c75157c7e-lnd-ion-lindev-14-dstore01,
portal: 10.100.214.77,3260] through [iface: bond1.10] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection10:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-ad4238a33-1b31185c75157c7e-lnd-ion-lindev-14-dstore01,
portal: 10.100.214.77,3260] through [iface: default] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection11:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-b99479033-9a788b6aa6857d3b-lnd-anv-sup-03-dstore01,
portal: 10.100.214.77,3260] through [iface: bond1.10] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection12:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-b99479033-9a788b6aa6857d3b-lnd-anv-sup-03-dstore01,
portal: 10.100.214.77,3260] through [iface: default] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection13:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-cd9479033-ffc88b6aa6b57d3b-lnd-linsup-02-dstore01,
portal: 10.100.214.77,3260] through [iface: bond1.10] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection14:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-cd9479033-ffc88b6aa6b57d3b-lnd-linsup-02-dstore01,
portal: 10.100.214.77,3260] through [iface: default] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection15:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-db8479033-96f88b6aa6e57d3b-lnd-linsup-03-dstore01,
portal: 10.100.214.77,3260] through [iface: bond1.10] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection16:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-db8479033-96f88b6aa6e57d3b-lnd-linsup-03-dstore01,
portal: 10.100.214.77,3260] through [iface: default] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection17:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-eae479033-f6588b6aa7157d3b-lnd-linsup-04-dstore01,
portal: 10.100.214.77,3260] through [iface: bond1.10] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection18:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-eae479033-f6588b6aa7157d3b-lnd-linsup-04-dstore01,
portal: 10.100.214.77,3260] through [iface: default] is operational now
Jan 11 15:07:11 uk1-ion-ovm-08 iscsid: Connection19:0 to [target:
iqn.2001-05.com.equallogic:4-42a846-fac479033-bf888b6aa775
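
A quick way to sanity-check how many sessions are actually open, and how
they spread across targets, is something like this (a sketch, assuming a
standard open-iscsi install where the target IQN is the fourth field of the
session listing):

# total number of iSCSI sessions on the host
iscsiadm -m session | wc -l
# sessions per target IQN
iscsiadm -m session | awk '{print $4}' | sort | uniq -c | sort -rn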

Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

2017-01-12 Thread Roy Golan
+guchen,eberman

Guy, Eyal, can you run this on the 4.0.x setup with many disks:

explain analyze select * from getdisksvmguid(uuid_generate_v1(), false,
uuid_generate_v1(), false);
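
If it helps, a sketch of how to reach the engine database and run it,
assuming the default database name 'engine' (the uuid-ossp functions should
already be available in the engine schema):

# on the engine machine, become the postgres user and open the engine DB
su - postgres
psql engine
engine=# explain analyze select * from getdisksvmguid(uuid_generate_v1(),
false, uuid_generate_v1(), false);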

On 12 January 2017 at 14:53, Grundmann, Christian <
christian.grundm...@fabasoft.com> wrote:

> Hi,
>
> OK, I will file a bug.
>
>
>
> Setup:
>
> 8 Nodes
>
> Around 100 VMs running all the time
>
> 100-200 VMs dynamically created and destroyed from a template using Vagrant
>
> Around 2 disks per VM
>
> 7 Storage Domains around 1TB each
>
>
>
> Christian
>
>
>
>
>
>
>
>
>
>
>
> *From:* Roy Golan [mailto:rgo...@redhat.com]
> *Sent:* Thursday, 12 January 2017 13:41
> *To:* Grundmann, Christian 
> *Cc:* users@ovirt.org
> *Subject:* Re: [ovirt-users] WG: High Database Load after updating to
> oVirt 4.0.4
>
>
>
>
>
> On 11 January 2017 at 17:16, Grundmann, Christian <
> christian.grundm...@fabasoft.com> wrote:
>
> | select * from  getdisksvmguid($1, $2, $3, $4)
>
>
>
>
>
> At the moment it's best that you open a bug and put all the info there.
> I can tell that other setups, even big ones, didn't experience this,
>
> so I guess some environment-specific factor is hiding here. How big is
> your setup: hosts/VMs/disks/domains?
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

2017-01-12 Thread Grundmann, Christian
Hi,
OK, I will file a bug.

Setup:
8 Nodes
Around 100 VMs running all the time
100-200 VMs dynamically created and destroyed from a template using Vagrant
Around 2 disks per VM
7 Storage Domains around 1TB each

Christian





From: Roy Golan [mailto:rgo...@redhat.com]
Sent: Thursday, 12 January 2017 13:41
To: Grundmann, Christian 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4


On 11 January 2017 at 17:16, Grundmann, Christian
<christian.grundm...@fabasoft.com> wrote:
| select * from  getdisksvmguid($1, $2, $3, $4)


At the moment it's best that you open a bug and put all the info there.
I can tell that other setups, even big ones, didn't experience this,
so I guess some environment-specific factor is hiding here. How big is your
setup: hosts/VMs/disks/domains?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

2017-01-12 Thread Roy Golan
On 11 January 2017 at 17:16, Grundmann, Christian <
christian.grundm...@fabasoft.com> wrote:

> | select * from  getdisksvmguid($1, $2, $3, $4)



At the moment it's best that you open a bug and put all the info there.
I can tell that other setups, even big ones, didn't experience this,
so I guess some environment-specific factor is hiding here. How big is your
setup: hosts/VMs/disks/domains?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How to pass unattend file to a sysprep Windows 2012r2 template

2017-01-12 Thread Denis Pithon
Hello oVirt users,

We currently run about 300 VMs with oVirt, both Linux and Windows servers.
Cloud-init works great for Linux VMs and we would like to do the same for
Windows. We have a 2012r2 sysprepped template, but we have not managed to
configure the initial run so that the new Windows VM boots *and joins our
AD domain* afterwards. Does anyone have any clues/information about this
kind of operation? How do we specify an unattend file in the 'New Virtual
Machine' dialog? What kind of data (and format) is required in the 'sysprep'
field?
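
For reference, this is roughly the unattended-join fragment we expect to
need somewhere in a 2012r2 Unattend.xml (a rough sketch with placeholder
values; whether the 'sysprep' field takes a complete Unattend.xml is part
of what we are unsure about):

<settings pass="specialize">
  <component name="Microsoft-Windows-UnattendedJoin"
             processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35"
             language="neutral" versionScope="nonSxS">
    <Identification>
      <!-- placeholder join account and domain; replace with real values -->
      <Credentials>
        <Domain>example.local</Domain>
        <Password>join-password</Password>
        <Username>join-account</Username>
      </Credentials>
      <JoinDomain>example.local</JoinDomain>
    </Identification>
  </component>
</settings>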

Best wishes to everyone!
Denis

PS: We run oVirt Engine 4.0.5
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to delete VM disk

2017-01-12 Thread cmc
Hi Fred/Alexander,

Just wondering if you've had a chance to look at this.

Thanks,

Cam

On Fri, Dec 30, 2016 at 11:45 AM, cmc  wrote:
> Hi Alexander,
>
> Thanks. I've attached the log. Relevant error is the last entry.
>
> Kind regards,
>
> Cam
>
> On Wed, Dec 14, 2016 at 3:12 PM, Alexander Wels  wrote:
>> On Wednesday, December 14, 2016 11:51:49 AM EST cmc wrote:
>>> Having some difficulty in getting the permutation string currently, as
>>> I can't get a cache.html file to appear in the Network section of the
>>> debugger, and both browsers I'm using (Chrome and Firefox) do not
>>> print the permutation ID at the bottom of the console output. I'll see
>>> if I can get some more detail on how this works from some searching
>>>
>>
>> I improved that; I just haven't updated the wiki yet. As soon as you
>> install the symbol maps and can recreate the issue, the UI.log should have
>> the unobfuscated stack trace, so you don't have to do all that stuff
>> manually anymore.
>>
>>> On Wed, Dec 14, 2016 at 8:21 AM, Fred Rolland  wrote:
>>> > The UI log is obfuscated.
>>> > Can you please follow the instructions on [1] and reproduce, so that
>>> > we get a human-readable log.
>>> >
>>> > Thanks
>>> >
>>> > [1]
>>> > http://www.ovirt.org/develop/developer-guide/engine/engine-debug-obfuscated-ui/
>>> > On Tue, Dec 13, 2016 at 7:42 PM, cmc  wrote:
>>> >> Sorry, forgot the version: 4.0.5.5-1.el7.centos
>>> >>
>>> >> On Tue, Dec 13, 2016 at 5:37 PM, cmc  wrote:
>>> >> > On the VM in the list of VMs, by right-clicking on it. It then gives
>>> >> > you a pop up window to edit the VM, starting in the 'General' section
>>> >> > (much as when you create a new one)
>>> >> >
>>> >> > Thanks,
>>> >> >
>>> >> > Cam
>>> >> >
>>> >> > On Tue, Dec 13, 2016 at 5:04 PM, Fred Rolland 
>>> >> >
>>> >> > wrote:
>>> >> >> Hi,
>>> >> >>
>>> >> >> Which version are you using?
>>> >> >> When you mention "Edit", on which entity is it performed?
>>> >> >>
>>> >> >> The disks are currently not part of the edit VM window.
>>> >> >>
>>> >> >> Thanks,
>>> >> >> Freddy
>>> >> >>
>>> >> >> On Tue, Dec 13, 2016 at 6:06 PM, cmc  wrote:
>>> >> >>> This VM wasn't running.
>>> >> >>>
>>> >> >>> On Tue, Dec 13, 2016 at 4:02 PM, Elad Ben Aharon
>>> >> >>> 
>>> >> >>>
>>> >> >>> wrote:
>>> >> >>> > In general, in order to delete a disk while it is attached to a
>>> >> >>> > running
>>> >> >>> > VM,
>>> >> >>> > the disk has to be deactivated (hotunplugged) first so it won't be
>>> >> >>> > accessible for read and write from the VM.
>>> >> >>> > In the 'edit' VM prompt there is no option to deactivate the disk,
>>> >> >>> > it
>>> >> >>> > should
>>> >> >>> > be done from the disks subtab under the virtual machine.
>>> >> >>> >
>>> >> >>> > On Tue, Dec 13, 2016 at 5:33 PM, cmc  wrote:
>>> >> >>> >> Actually, I just tried to create a new disk via the 'Edit' menu
>>> >> >>> >> once
>>> >> >>> >> I'd deleted it from the 'Disks' tab, and it threw an exception.
>>> >> >>> >>
>>> >> >>> >> Attached is the console log.
>>> >> >>> >>
>>> >> >>> >> On Tue, Dec 13, 2016 at 3:24 PM, cmc  wrote:
>>> >> >>> >> > Hi Elad,
>>> >> >>> >> >
>>> >> >>> >> > I was trying to delete the disk via the 'edit' menu, but noticed
>>> >> >>> >> > just
>>> >> >>> >> > now that there was a 'disks' tab when the machine was
>>> >> >>> >> > highlighted.
>>> >> >>> >> > This has an 'activate/deactivate' function, and once deactivated,
>>> >> >>> >> > was
>>> >> >>> >> > able to remove it without error.
>>> >> >>> >> >
>>> >> >>> >> > It does offer the option of deleting the disk when right
>>> >> >>> >> > clicking
>>> >> >>> >> > on
>>> >> >>> >> > the VM and choosing 'edit', however, there is no 'deactivate'
>>> >> >>> >> > option.
>>> >> >>> >> > Not sure if this is by design (so that users should look
>>> >> >>> >> > elsewhere).
>>> >> >>> >> > I
>>> >> >>> >> > can still try to run the delete from the 'Edit' page, and
>>> >> >>> >> > capture
>>> >> >>> >> > browser console output. Otherwise, apologies for troubling you
>>> >> >>> >> > with
>>> >> >>> >> > this.
>>> >> >>> >> >
>>> >> >>> >> > Kind regards,
>>> >> >>> >> >
>>> >> >>> >> > Cam
>>> >> >>> >> >
>>> >> >>> >> > On Tue, Dec 13, 2016 at 12:27 PM, Elad Ben Aharon
>>> >> >>> >> > 
>>> >> >>> >> >
>>> >> >>> >> > wrote:
>>> >> >>> >> >> There is no indication for image deletion in engine.log
>>> >> >>> >> >> The browser console log is located in your browser under
>>> >> >>> >> >> 'settings'->'developer'.
>>> >> >>> >> >> Please try to delete a disk as you tried before, get the
>>> >> >>> >> >> console
>>> >> >>> >> >> log
>>> >> >>> >> >> and
>>> >> >>> >> >> provide it.
>>> >> >>> >> >>
>>> >> >>> >> >> Thanks
>>> >> >>> >> >>
>>> >> >>> >> >> On Mon, Dec 12, 2016 at 7:40 PM, cmc  wrote:
>>> >> >>> >> >>> Hi Eled,
>>> >> >>> >> >>>
>>> >> >>> >> >>> I've attached the ui log and the engine log but I'm not sure
>>> >> >>> >> >>> what
>>> >> >>> >> >>> the
>>> >> >>> >> >>> browser console log is - there is a 'console.l

[ovirt-users] oVirt Reports

2017-01-12 Thread Marcin Michta
Hi,

Can someone tell me what kind of information I can get from oVirt
Reports? The oVirt web page says little about it.
Screenshots would be helpful.

Thank you,
Marcin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Master storage domain in locked state

2017-01-12 Thread knarra

Hi,

I have three GlusterFS storage domains on my system: data 
(master), vmstore and engine. I tried moving the master storage domain 
to maintenance; it was stuck in 'Preparing for maintenance' for a 
long time, and then I rebooted my hosts. Now I see that the old master 
domain has moved to maintenance, but vmstore, which is the master now, 
is stuck in the locked state. Any idea how to get out of this situation?


Any help is much appreciated.

Thanks

kasturi
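
PS: one thing I found while searching: the engine ships a dbutils helper
that can at least list entities stuck in a locked status in the database (a
sketch, run on the engine host; I am assuming -t all is supported in this
version, otherwise query per type, and -q only queries without changing
anything):

/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q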

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine: Add another host

2017-01-12 Thread Simone Tiraboschi
On Wed, Jan 11, 2017 at 9:29 PM, Gianluca Cecchi 
wrote:

> On Wed, Jan 11, 2017 at 5:50 PM, gregor  wrote:
>
>> Hi,
>>
>> I have a hosted-engine setup on one host. Today I tried to add another
>> host from the UI, but this gives me some errors without detail.
>>
>> Is there a way to add a new host from the shell?
>>
>
Deploying additional hosted-engine hosts from the CLI has been deprecated;
deploying from the web UI is the recommended way.
Could you please check the host-deploy logs on the engine VM to see what
went wrong?
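
(For reference, the deploy logs are kept per attempt under the default log
location on the engine VM; the newest file should show the failure:)

# on the engine VM
ls -lt /var/log/ovirt-engine/host-deploy/ | head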


>> Not a node [1], because I plan to use Docker on the host as well; it's a
>> test environment.
>> Or is it better to install the host as a node?
>>
>> cheers
>> gregor
>>
>> [1] http://www.ovirt.org/node/
>>
>>
> It would be useful to understand the errors you get in the web UI, because
> they could be similar in a command-line deploy as well.
>
> I think you can follow what happened in 3.6 as described here:
> https://access.redhat.com/documentation/en-US/Red_Hat_
> Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/
> chap-Installing_Additional_Hosts_to_a_Self-Hosted_Environment.html
>
> For oVirt and CentOS I think the commands below should be the ones to
> run on your second host (see the other details explained on the web page
> above; they could differ somewhat between 4.0 and 3.6):
>
> # yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
> # yum install ovirt-hosted-engine-setup
> # hosted-engine --deploy
>
> HTH,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

2017-01-12 Thread Grundmann, Christian
Hi,

I have already downgraded again because this is a showstopper; 4.0.3 is the
last working version for me.

I have attached the full pg_stat_activity output from the last try.



Thx Christian



From: Roy Golan [mailto:rgo...@redhat.com]
Sent: Thursday, 12 January 2017 09:13
To: Grundmann, Christian 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4







On 11 January 2017 at 17:16, Grundmann, Christian
<christian.grundm...@fabasoft.com> wrote:

   Hi,

   I updated to 4.0.6 today and I am hitting this problem again. Can anyone
   please help?



backend_start |  query_start  | 
state_change  | waiting |state| 
 query

   
---+---+---+-+-+--

   2017-01-11 15:52:41.612942+01 | 2017-01-11 16:14:45.676881+01 | 2017-01-11 
16:14:45.676882+01 | f   | active  | select * from  
getdisksvmguid($1, $2, $3, $4)

   2017-01-11 15:52:35.526771+01 | 2017-01-11 16:14:45.750546+01 | 2017-01-11 
16:14:45.750547+01 | f   | active  | select * from  
getdisksvmguid($1, $2, $3, $4)

   2017-01-11 14:48:41.133303+01 | 2017-01-11 16:14:42.89794+01  | 2017-01-11 
16:14:42.897991+01 | f   | idle| SELECT 1

   2017-01-11 14:48:43.504048+01 | 2017-01-11 16:14:46.794742+01 | 2017-01-11 
16:14:46.794813+01 | f   | idle| SELECT option_value FROM 
vdc_options WHERE option_name = 'DisconnectDwh'

   2017-01-11 14:48:43.531955+01 | 2017-01-11 16:14:34.541273+01 | 2017-01-11 
16:14:34.543513+01 | f   | idle| COMMIT

   2017-01-11 14:48:43.564148+01 | 2017-01-11 16:14:34.543635+01 | 2017-01-11 
16:14:34.544145+01 | f   | idle| COMMIT

   2017-01-11 14:48:43.569029+01 | 2017-01-11 16:00:01.86664+01  | 2017-01-11 
16:00:01.866711+01 | f   | idle in transaction | SELECT 'continueAgg', '1'  
 +

  |   | 
  | | | FROM history_configuration  
+

  |   | 
  | | | WHERE var_name = 
'lastHourAggr' +

  |   | 
  | | | AND var_datetime < 
'2017-01-11 15:00:00.00+0100'+

  |   | 
  | | |

   2017-01-11 14:48:43.572644+01 | 2017-01-11 14:48:43.57571+01  | 2017-01-11 
14:48:43.575736+01 | f   | idle| SET extra_float_digits = 3

   2017-01-11 14:48:43.577039+01 | 2017-01-11 14:48:43.580066+01 | 2017-01-11 
14:48:43.58009+01  | f   | idle| SET extra_float_digits = 3

   2017-01-11 14:48:54.308078+01 | 2017-01-11 16:14:46.931422+01 | 2017-01-11 
16:14:46.931423+01 | f   | active  | select * from  
getsnapshotbyleafguid($1)

   2017-01-11 14:48:54.465485+01 | 2017-01-11 16:14:21.113926+01 | 2017-01-11 
16:14:21.113959+01 | f   | idle| COMMIT

   2017-01-11 15:52:41.606561+01 | 2017-01-11 16:14:45.839754+01 | 2017-01-11 
16:14:45.839755+01 | f   | active  | select * from  
getdisksvmguid($1, $2, $3, $4)

   2017-01-11 14:48:56.477555+01 | 2017-01-11 16:14:45.276255+01 | 2017-01-11 
16:14:45.277038+01 | f   | idle| select * from  
getvdsbyvdsid($1, $2, $3)

   2017-01-11 15:52:41.736304+01 | 2017-01-11 16:14:44.48134+01  | 2017-01-11 
16:14:44.48134+01  | f   | active  | select * from  
getdisksvmguid($1, $2, $3, $4)

   2017-01-11 14:48:56.489949+01 | 2017-01-11 16:14:46.40924+01  | 2017-01-11 
16:14:46.409241+01 | f   | active  | select * from  
getdisksvmguid($1, $2, $3, $4)

   2017-01-11 15:52:41.618773+01 | 2017-01-11 16:14:45.732394+01 | 2017-01-11 
16:14:45.732394+01 | f   | active  | select * from  
getdisksvmguid($1, $2, $3, $4)

   2017-01-11 14:48:56.497824+01 | 2017-01-11 16:14:46.827751+01 | 2017-01-11 
16:14:46.827752+01 | f   | active  | select * from  
getsnapshotbyleafguid($1)

   2017-01-11 14:48:56.497732+01 | 2017-01-11 16:09:04.207597+01 | 2017-01-11 
16:09:04.342567+01 | f   | idle| select * from  
getvdsbyvdsid($1, $2, $3)

   2017-01-11 14:48:58.785162+01 | 2017-01-11 16:14:46.093658+01 | 2017-01-11 
16:14:
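
(A compact way to watch just the long-running statements in such a
situation, assuming the engine database is local and has the default
name 'engine':)

su - postgres
psql engine -c "SELECT now() - query_start AS runtime, state, left(query, 60)
  FROM pg_stat_activity WHERE state = 'active' ORDER BY runtime DESC;"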

Re: [ovirt-users] WG: High Database Load after updating to oVirt 4.0.4

2017-01-12 Thread Roy Golan
On 11 January 2017 at 17:16, Grundmann, Christian <
christian.grundm...@fabasoft.com> wrote:

> Hi,
>
> I updated to 4.0.6 today and I am hitting this problem again. Can anyone
> please help?
>
>
>
>  backend_start |  query_start  |
> state_change  | waiting |state
> |  query
>
> ---+
> ---+---+-+--
> ---+
> --
>
> 2017-01-11 15:52:41.612942+01 | 2017-01-11 16:14:45.676881+01 | 2017-01-11
> 16:14:45.676882+01 | f   | active  | select * from
> getdisksvmguid($1, $2, $3, $4)
>
> 2017-01-11 15:52:35.526771+01 | 2017-01-11 16:14:45.750546+01 | 2017-01-11
> 16:14:45.750547+01 | f   | active  | select * from
> getdisksvmguid($1, $2, $3, $4)
>
> 2017-01-11 14:48:41.133303+01 | 2017-01-11 16:14:42.89794+01  | 2017-01-11
> 16:14:42.897991+01 | f   | idle| SELECT 1
>
> 2017-01-11 14:48:43.504048+01 | 2017-01-11 16:14:46.794742+01 | 2017-01-11
> 16:14:46.794813+01 | f   | idle| SELECT option_value
> FROM vdc_options WHERE option_name = 'DisconnectDwh'
>
> 2017-01-11 14:48:43.531955+01 | 2017-01-11 16:14:34.541273+01 | 2017-01-11
> 16:14:34.543513+01 | f   | idle| COMMIT
>
> 2017-01-11 14:48:43.564148+01 | 2017-01-11 16:14:34.543635+01 | 2017-01-11
> 16:14:34.544145+01 | f   | idle| COMMIT
>
> 2017-01-11 14:48:43.569029+01 | 2017-01-11 16:00:01.86664+01  | 2017-01-11
> 16:00:01.866711+01 | f   | idle in transaction | SELECT 'continueAgg',
> '1'   +
>
>|
> |   | | | FROM
> history_configuration
> +
>
>|
>   |   |
> | | WHERE var_name = 'lastHourAggr'
>  +
>
>|
> |   | | | AND
> var_datetime < '2017-01-11 15:00:00.00+0100'
> +
>
>|
> |   | | |
>
> 2017-01-11 14:48:43.572644+01 | 2017-01-11 14:48:43.57571+01  | 2017-01-11
> 14:48:43.575736+01 | f   | idle| SET extra_float_digits
> = 3
>
> 2017-01-11 14:48:43.577039+01 | 2017-01-11 14:48:43.580066+01 | 2017-01-11
> 14:48:43.58009+01  | f   | idle| SET extra_float_digits
> = 3
>
> 2017-01-11 14:48:54.308078+01 | 2017-01-11 16:14:46.931422+01 | 2017-01-11
> 16:14:46.931423+01 | f   | active  | select * from
> getsnapshotbyleafguid($1)
>
> 2017-01-11 14:48:54.465485+01 | 2017-01-11 16:14:21.113926+01 | 2017-01-11
> 16:14:21.113959+01 | f   | idle| COMMIT
>
> 2017-01-11 15:52:41.606561+01 | 2017-01-11 16:14:45.839754+01 | 2017-01-11
> 16:14:45.839755+01 | f   | active  | select * from
> getdisksvmguid($1, $2, $3, $4)
>
> 2017-01-11 14:48:56.477555+01 | 2017-01-11 16:14:45.276255+01 | 2017-01-11
> 16:14:45.277038+01 | f   | idle| select * from
> getvdsbyvdsid($1, $2, $3)
>
> 2017-01-11 15:52:41.736304+01 | 2017-01-11 16:14:44.48134+01  | 2017-01-11
> 16:14:44.48134+01  | f   | active  | select * from
> getdisksvmguid($1, $2, $3, $4)
>
> 2017-01-11 14:48:56.489949+01 | 2017-01-11 16:14:46.40924+01  | 2017-01-11
> 16:14:46.409241+01 | f   | active  | select * from
> getdisksvmguid($1, $2, $3, $4)
>
> 2017-01-11 15:52:41.618773+01 | 2017-01-11 16:14:45.732394+01 | 2017-01-11
> 16:14:45.732394+01 | f   | active  | select * from
> getdisksvmguid($1, $2, $3, $4)
>
> 2017-01-11 14:48:56.497824+01 | 2017-01-11 16:14:46.827751+01 | 2017-01-11
> 16:14:46.827752+01 | f   | active  | select * from
> getsnapshotbyleafguid($1)
>
> 2017-01-11 14:48:56.497732+01 | 2017-01-11 16:09:04.207597+01 | 2017-01-11
> 16:09:04.342567+01 | f   | idle| select * from
> getvdsbyvdsid($1, $2, $3)
>
> 2017-01-11 14:48:58.785162+01 | 2017-01-11 16:14:46.093658+01 | 2017-01-11
> 16:14:46.093659+01 | f   | active  | select * from
> getdisksvmguid($1, $2, $3, $4)
>
> 2017-01-11 15:52:41.620421+01 | 2017-01-11 16:14:46.224543+01 | 2017-01-11
> 16:14:46.224543+01 | f   | active  | select * from
> getdisksvmguid($1, $2, $3, $4)
>
> 2017-01-11 15:52:41.620478+01 | 2017-01-11 16:14:46.009864+01 | 2017-01-11
> 16:14:46.009865+01 | f   | active  | select * from
> getdisksvmguid($1, $2, $3, $4)
>
> 2017-01-11 15:52:41.647839+01 | 2017-01-11 16:14:46.834005+01 | 2017-01-11
> 16:14:46.834005+01 | f   | active  | select * from
> gets