Re: [ovirt-users] oVirt/RHEV and HP Blades and HP iSCSI SAN

2016-06-24 Thread Colin Coe
Hi Fernando

The network is pretty much cast in stone now.  I even if I could change it,
I'd be relucant todo so as the firewall/router has 1Gb interfaces but the
iSCSI SANs and blade servers are all 10Gb.  Having these in different
subnets will create a 1Gb bottle neck.

Thanks


On Sat, Jun 25, 2016 at 12:45 PM, Fernando Frediani <
fernando.fredi...@upx.com.br> wrote:

> Hello Colin,
>
> I know well all the equipment you have in your hands as I used to work
> with these during a long time. Great stuff I can say.
>
> All seems Ok from what you describe, except the iSCSI network which should
> not be a bond, but two independent vlans (and subnets) using iSCSI
> multipath. Bond works, but it's not the recommended setup for these
> scenarios.
>
> Fernando
> On 24/06/2016 22:12, Colin Coe wrote:
>
> Hi all
>
> We run four RHEV datacenters, two PROD, one DEV and one TEST/Training.
> They are all  working OK but I'd like a definitive answer on how I should
> be configuring the networking side as I'm pretty sure we're getting
> sub-optimal networking performance.
>
> All datacenters are housed in HP C7000 Blade enclosures.  The PROD
> datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster
> of two 4730s. These are configured RAID5 internally with NRAID1. The DEV
> and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a
> cluster of three P4500s configured with RAID10 internally and NRAID5.
>
> The HP C7000 each have two Flex10/10D interconnect modules configured in a
> redundant ring so that we can upgrade the interconnects without dropping
> network connectivity to the infrastructure. We use fat RHEL-H 7.2
> hypervisors (HP BL460) and these are all configured with six network
> interfaces:
> - eno1 and eno2 are bond0 which is the rhevm interface
> - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond
> using 802.1q
> - eno5 and eno6 are bond2 and dedicated to iSCSI traffic
>
> Is this the "correct" way to do this?  If not, what should I be doing
> instead?
>
> Thanks
>
> CC
>
>
> ___
> Users mailing listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt/RHEV and HP Blades and HP iSCSI SAN

2016-06-24 Thread Fernando Frediani

Hello Colin,

I know well all the equipment you have in your hands as I used to work 
with these during a long time. Great stuff I can say.


All seems Ok from what you describe, except the iSCSI network which 
should not be a bond, but two independent vlans (and subnets) using 
iSCSI multipath. Bond works, but it's not the recommended setup for 
these scenarios.


Fernando

On 24/06/2016 22:12, Colin Coe wrote:

Hi all

We run four RHEV datacenters, two PROD, one DEV and one 
TEST/Training.  They are all  working OK but I'd like a definitive 
answer on how I should be configuring the networking side as I'm 
pretty sure we're getting sub-optimal networking performance.


All datacenters are housed in HP C7000 Blade enclosures. The PROD 
datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a 
cluster of two 4730s. These are configured RAID5 internally with 
NRAID1. The DEV and TEST datacenters are using P4500 iSCSI SANs and 
each datacenter has a cluster of three P4500s configured with RAID10 
internally and NRAID5.


The HP C7000 each have two Flex10/10D interconnect modules configured 
in a redundant ring so that we can upgrade the interconnects without 
dropping network connectivity to the infrastructure. We use fat RHEL-H 
7.2 hypervisors (HP BL460) and these are all configured with six 
network interfaces:

- eno1 and eno2 are bond0 which is the rhevm interface
- eno3 and eno4 are bond1 and all the VM VLANs are trunked over this 
bond using 802.1q

- eno5 and eno6 are bond2 and dedicated to iSCSI traffic

Is this the "correct" way to do this?  If not, what should I be doing 
instead?


Thanks

CC


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt/RHEV and HP Blades and HP iSCSI SAN

2016-06-24 Thread Colin Coe
Hi Dan

I should have mentioned that we need to use the same subnet for both iSCSI
interfaces which is why I ended up bonding (mode 1) these.  Looking at
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing,
it doesn't say anything about tying the iSCSI Bond back to the host.  In
our DEV environment I removed the bond the iSCSI interfaces were using and
created the iSCSI Bond as per this link.  What do I do now?  Recreate the
bond and give it an IP?  I don't see where to put an IP for iSCSI against
the hosts?

Lastly, not using jumbo frames as where a critical infrastructure
organisation and I fear possible side effects.

Thanks

On Sat, Jun 25, 2016 at 10:30 AM, Dan Yasny  wrote:

> Two things off the top of my head after skimming the given details:
> 1. iSCSI will work better without the bond. It already uses multipath, so
> all you need is to separate the portal IPs/subnets and provide separate
> IPs/subnets to the iSCSI dedicated NICs, as is the recommended way here:
> https://access.redhat.com/solutions/131153 and also be sure to follow
> this:
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing
> 2. You haven't mentioned anything about jumbo frames, are you using those?
> If not, it is a very good idea to start.
>
> And 3: since this is RHEV, you might get much more help from the official
> support than from this list.
>
> Hope this helps
> Dan
>
> On Fri, Jun 24, 2016 at 9:12 PM, Colin Coe  wrote:
>
>> Hi all
>>
>> We run four RHEV datacenters, two PROD, one DEV and one TEST/Training.
>> They are all  working OK but I'd like a definitive answer on how I should
>> be configuring the networking side as I'm pretty sure we're getting
>> sub-optimal networking performance.
>>
>> All datacenters are housed in HP C7000 Blade enclosures.  The PROD
>> datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster
>> of two 4730s. These are configured RAID5 internally with NRAID1. The DEV
>> and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a
>> cluster of three P4500s configured with RAID10 internally and NRAID5.
>>
>> The HP C7000 each have two Flex10/10D interconnect modules configured in
>> a redundant ring so that we can upgrade the interconnects without dropping
>> network connectivity to the infrastructure. We use fat RHEL-H 7.2
>> hypervisors (HP BL460) and these are all configured with six network
>> interfaces:
>> - eno1 and eno2 are bond0 which is the rhevm interface
>> - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond
>> using 802.1q
>> - eno5 and eno6 are bond2 and dedicated to iSCSI traffic
>>
>> Is this the "correct" way to do this?  If not, what should I be doing
>> instead?
>>
>> Thanks
>>
>> CC
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt/RHEV and HP Blades and HP iSCSI SAN

2016-06-24 Thread Dan Yasny
Two things off the top of my head after skimming the given details:
1. iSCSI will work better without the bond. It already uses multipath, so
all you need is to separate the portal IPs/subnets and provide separate
IPs/subnets to the iSCSI dedicated NICs, as is the recommended way here:
https://access.redhat.com/solutions/131153 and also be sure to follow this:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/sect-Preparing_and_Adding_Block_Storage.html#Configuring_iSCSI_Multipathing
2. You haven't mentioned anything about jumbo frames, are you using those?
If not, it is a very good idea to start.

And 3: since this is RHEV, you might get much more help from the official
support than from this list.

Hope this helps
Dan

On Fri, Jun 24, 2016 at 9:12 PM, Colin Coe  wrote:

> Hi all
>
> We run four RHEV datacenters, two PROD, one DEV and one TEST/Training.
> They are all  working OK but I'd like a definitive answer on how I should
> be configuring the networking side as I'm pretty sure we're getting
> sub-optimal networking performance.
>
> All datacenters are housed in HP C7000 Blade enclosures.  The PROD
> datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster
> of two 4730s. These are configured RAID5 internally with NRAID1. The DEV
> and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a
> cluster of three P4500s configured with RAID10 internally and NRAID5.
>
> The HP C7000 each have two Flex10/10D interconnect modules configured in a
> redundant ring so that we can upgrade the interconnects without dropping
> network connectivity to the infrastructure. We use fat RHEL-H 7.2
> hypervisors (HP BL460) and these are all configured with six network
> interfaces:
> - eno1 and eno2 are bond0 which is the rhevm interface
> - eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond
> using 802.1q
> - eno5 and eno6 are bond2 and dedicated to iSCSI traffic
>
> Is this the "correct" way to do this?  If not, what should I be doing
> instead?
>
> Thanks
>
> CC
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt/RHEV and HP Blades and HP iSCSI SAN

2016-06-24 Thread Colin Coe
Hi all

We run four RHEV datacenters, two PROD, one DEV and one TEST/Training.
They are all  working OK but I'd like a definitive answer on how I should
be configuring the networking side as I'm pretty sure we're getting
sub-optimal networking performance.

All datacenters are housed in HP C7000 Blade enclosures.  The PROD
datacenters use HP 4730 iSCSI SAN clusters, each datacenter has a cluster
of two 4730s. These are configured RAID5 internally with NRAID1. The DEV
and TEST datacenters are using P4500 iSCSI SANs and each datacenter has a
cluster of three P4500s configured with RAID10 internally and NRAID5.

The HP C7000 each have two Flex10/10D interconnect modules configured in a
redundant ring so that we can upgrade the interconnects without dropping
network connectivity to the infrastructure. We use fat RHEL-H 7.2
hypervisors (HP BL460) and these are all configured with six network
interfaces:
- eno1 and eno2 are bond0 which is the rhevm interface
- eno3 and eno4 are bond1 and all the VM VLANs are trunked over this bond
using 802.1q
- eno5 and eno6 are bond2 and dedicated to iSCSI traffic

Is this the "correct" way to do this?  If not, what should I be doing
instead?

Thanks

CC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.0 Engine Setup Failure

2016-06-24 Thread Melissa Mesler
Okay I finally got past the install.. Everything went fine. I then
bring it up in the web browser, add the certs, confirm security
exception and then nothing. There is nothing in the ovirt-
engine/engine.log to help guide me.
 
 
On Fri, Jun 24, 2016, at 01:59 PM, Martin Perina wrote:
> Could you please share installation log from /var/log/ovirt-
> engine/setup ?
> Thanks
> Martin Perina
>
> On Fri, Jun 24, 2016 at 6:31 PM, Melissa Mesler
>  wrote:
>> Also, this is in the engine-setup logs:
>>  2016-06-24 11:04:08 ERROR
>>  otopi.plugins.ovirt_engine_common.base.core.misc misc._terminate:148
>>  Execution of setup failed
>>
>>  On Fri, Jun 24, 2016, at 11:08 AM, Melissa Mesler wrote:
>>  > I am doing a clean install of Ovirt 4.0. Upon executing engine-
>>  > setup
>>  > with all default values, it fails. This is the error during setup:
>>  >
>>  > [ ERROR ] Failed to execute stage 'Misc configuration': Command
>>  > '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute
>>  >
>>  >
>>  > Any ideas?
>>  ___
>>  Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt and Ceph

2016-06-24 Thread Charles Gomes
Hello

I've been reading lots of material about implementing oVirt with Ceph, however 
all talk about using Cinder.
Is there a way to get oVirt with Ceph without having to implement entire 
Openstack ?
I'm already currently using Foreman to deploy Ceph and KVM nodes, trying to 
minimize the amount of moving parts. I heard something about oVirt providing a 
managed Cinder appliance, have any seen this ?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.0 Engine Setup Failure

2016-06-24 Thread Martin Perina
Could you please share installation log from /var/log/ovirt-engine/setup ?

Thanks

Martin Perina

On Fri, Jun 24, 2016 at 6:31 PM, Melissa Mesler 
wrote:

> Also, this is in the engine-setup logs:
> 2016-06-24 11:04:08 ERROR
> otopi.plugins.ovirt_engine_common.base.core.misc misc._terminate:148
> Execution of setup failed
>
> On Fri, Jun 24, 2016, at 11:08 AM, Melissa Mesler wrote:
> > I am doing a clean install of Ovirt 4.0. Upon executing engine-setup
> > with all default values, it fails. This is the error during setup:
> >
> > [ ERROR ] Failed to execute stage 'Misc configuration': Command
> > '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute
> >
> >
> > Any ideas?
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6 to 4.0 upgrade cert issue

2016-06-24 Thread Martin Perina
On Fri, Jun 24, 2016 at 7:43 PM, Scott  wrote:

> You need to import your intermediate certificate and possibly your CA
> certificate into the ovirt-engine keystore.  This is the command I used:
>
> sudo keytool -importcert -trustcacerts -keystore
> /etc/pki/ovirt-engine/.truststore -storepass mypass -file
> /etc/pki/tls/certs/startcom.class1.server.ca.pem
>
> The password is actually "mypass".
>

​This is not a correct solution although it's working for now. Correct
steps are described at [1].

Thanks

Martin Perina

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1336838
​


>
> Scott
>
> On Fri, Jun 24, 2016 at 11:33 AM Matt Haught  wrote:
>
>> So I switched back to the original self-signed certs that I had luckily
>> saved and was able to get in without error. Is there a new process for
>> using non-self-signed certs with ovirt 4.0?
>>
>> Thanks,
>> --
>> Matt Haught
>>
>> On Fri, Jun 24, 2016 at 11:19 AM, Matt Haught  wrote:
>>
>>> I just attempted an upgrade from 3.6 to 4.0 hosted engine and ran into
>>> an problem. The hosted engine vm updated without issue, but when I go to
>>> login to the web interface to continue the process I get:
>>>
>>> sun.security.validator.ValidatorException: PKIX path building failed:
>>> sun.security.provider.certpath.SunCertPathBuilderException: unable to find
>>> valid certification path to requested target
>>>
>>> at every page load and I can't login. I have a feeling that the issue
>>> comes from where I replaced the self-signed certs with trusted ca signed
>>> certs a year ago. Is there a work around?
>>>
>>> CentOS 7.2
>>>
>>> Thanks,
>>>
>>> --
>>> Matt Haught
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6 to 4.0 upgrade cert issue

2016-06-24 Thread Martin Perina
Hi,

if you are using HTTPS certificate signed by custom CA, manual action is
required after upgrade to 4.0 due to introduction of oVirt engine SSO
feature. More info about the manual steps can be found at [1].

Thanks

Martin Perina


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1336838

On Fri, Jun 24, 2016 at 6:32 PM, Matt Haught  wrote:

> So I switched back to the original self-signed certs that I had luckily
> saved and was able to get in without error. Is there a new process for
> using non-self-signed certs with ovirt 4.0?
>
> Thanks,
> --
> Matt Haught
>
> On Fri, Jun 24, 2016 at 11:19 AM, Matt Haught  wrote:
>
>> I just attempted an upgrade from 3.6 to 4.0 hosted engine and ran into an
>> problem. The hosted engine vm updated without issue, but when I go to login
>> to the web interface to continue the process I get:
>>
>> sun.security.validator.ValidatorException: PKIX path building failed:
>> sun.security.provider.certpath.SunCertPathBuilderException: unable to find
>> valid certification path to requested target
>>
>> at every page load and I can't login. I have a feeling that the issue
>> comes from where I replaced the self-signed certs with trusted ca signed
>> certs a year ago. Is there a work around?
>>
>> CentOS 7.2
>>
>> Thanks,
>>
>> --
>> Matt Haught
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6 to 4.0 upgrade cert issue

2016-06-24 Thread Scott
You need to import your intermediate certificate and possibly your CA
certificate into the ovirt-engine keystore.  This is the command I used:

sudo keytool -importcert -trustcacerts -keystore
/etc/pki/ovirt-engine/.truststore -storepass mypass -file
/etc/pki/tls/certs/startcom.class1.server.ca.pem

The password is actually "mypass".

Scott

On Fri, Jun 24, 2016 at 11:33 AM Matt Haught  wrote:

> So I switched back to the original self-signed certs that I had luckily
> saved and was able to get in without error. Is there a new process for
> using non-self-signed certs with ovirt 4.0?
>
> Thanks,
> --
> Matt Haught
>
> On Fri, Jun 24, 2016 at 11:19 AM, Matt Haught  wrote:
>
>> I just attempted an upgrade from 3.6 to 4.0 hosted engine and ran into an
>> problem. The hosted engine vm updated without issue, but when I go to login
>> to the web interface to continue the process I get:
>>
>> sun.security.validator.ValidatorException: PKIX path building failed:
>> sun.security.provider.certpath.SunCertPathBuilderException: unable to find
>> valid certification path to requested target
>>
>> at every page load and I can't login. I have a feeling that the issue
>> comes from where I replaced the self-signed certs with trusted ca signed
>> certs a year ago. Is there a work around?
>>
>> CentOS 7.2
>>
>> Thanks,
>>
>> --
>> Matt Haught
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-24 Thread Stefano Danzi

HI!!

I found a workaround

the brocker process try to connect to vdsm to IPV4 host address using an 
IPV6 connection

(I noticed that doing a strace to the process),
but ipv6 is not intialized at boot. (why connect to IPV4 address using 
IPV6?)


I added the following lines to crontab:

@reboot echo 'echo 0 > /proc/sys/net/ipv6/conf/lo/disable_ipv6' | 
/usr/bin/at now+1 minutes
@reboot echo 'echo 0 > /proc/sys/net/ipv6/conf/ovirtmgmt/disable_ipv6' | 
/usr/bin/at now+1 minutes
@reboot echo '/usr/sbin/route add default gw 192.168.1.254'  | 
/usr/bin/at now+1 minutes




Il 24/06/2016 12.36, Stefano Danzi ha scritto:
How I can change self hosted engine configuration to mount directly 
gluster storage without pass through gluster NFS?


Maybe this solve

Il 24/06/2016 10.16, Stefano Danzi ha scritto:
After an additional yum clean all && yum update was updated some 
other rpms.


Something changed.
My setup has engine storage on gluster, but mounted with NFS.
Now gluster daemon don't automatically start at boot. After starting 
manually gluster the error is the same:


==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-19::ERROR::2016-06-24 
10:10:36,758::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) 
Error while serving connection

Traceback (most recent call last):
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
line 166, in handle

data)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
line 299, in _dispatch

.set_storage_domain(client, sd_type, **options)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", 
line 66, in set_storage_domain

self._backends[client].connect()
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", 
line 400, in connect

volUUID=volume.volume_uuid
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", 
line 245, in _get_volume_path

volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, 
in connect

sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 101] Network is unreachable


VDSM.log

jsonrpc.Executor/5::DEBUG::2016-06-24 
10:10:21,694::task::995::Storage.TaskManager.Task::(_decref) 
Task=`5c3b6f30-d3a8-431e-9dd0-8df79b171709`::ref 0

aborting False
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Following parameters ['type'] were not recogn

ized
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Provided value "2" not defined in DiskType en

um for Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter capacity is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Required property allocType is not provided w

hen calling Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter mtime is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter ctime is not int type
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter truesize is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter apparentsize is not uint type
jsonrpc.Executor/5::DEBUG::2016-06-24 
10:10:21,695::__init__::550::jsonrpc.JsonRpcServer::(_serveRequest) 
Return 'Volume.getInfo' in bridge with {'sta
tus': 'OK', 'domain': '46f55a31-f35f-465c-b3e2-df45c05e06a7', 
'voltype': 'LEAF', 'description': 'hosted-engine.lockspace', 
'parent': '--00
00--', 'format': 'RAW', 'image': 
'6838c974-7656-4b40-87cc-f562ff0b2a4c', 

Re: [ovirt-users] InClusterUpgrade Scheduling Policy

2016-06-24 Thread Scott
Actually, I figured out a work around. I changed the HostedEngine VM's
vds_group_id in the database to the vds_group_id of my temporary cluster
(found from the vds_groups table). This worked and I could put my main
cluster in upgrade mode. Now to continue the process...

Thanks,
Scott

On Fri, Jun 24, 2016, 9:29 AM Scott  wrote:

> Hi Roman,
>
> I made it through step 6 however it does look like the problem you
> mentioned has occurred.  My engine VM is running on my host in the
> temporary cluster.  The stats under Hosts show this.  But in the Virtual
> Machines tab this VM still thinks its on my main cluster and I can't change
> that setting.  Did you have a suggestion on how to work around this?
> Thankfully only one of my RHEV instances has this upgrade path.
>
> Thanks for your help,
> Scott
>
> On Fri, Jun 24, 2016 at 2:15 AM Roman Mohr  wrote:
>
>> On Thu, Jun 23, 2016 at 10:26 PM, Scott  wrote:
>> > Hi Roman,
>> >
>> > Thanks for the detailed steps.  I follow the idea you have outlined and
>> I
>> > think its easier than what I thought of (moving my self hosted engine
>> back
>> > to physical hardware, upgrading and moving it back to self hosted).  I
>> will
>> > give it a spin in my build RHEV cluster tomorrow and let you know how I
>> get
>> > on.
>> >
>>
>> Thanks.
>>
>> The bug is here: https://bugzilla.redhat.com/show_bug.cgi?id=1349745.
>>
>> I thought about the solution and I see one possible problem with this
>> approach. It might be that the engine still thinks that the VM is on
>> the old cluster.
>> Let me know if this happens, we can work around that too.
>>
>> Roman
>>
>> > Thanks again,
>> > Scott
>> >
>> > On Thu, Jun 23, 2016 at 2:41 PM Roman Mohr  wrote:
>> >>
>> >> Hi Scott,
>> >>
>> >> On Thu, Jun 23, 2016 at 8:54 PM, Scott  wrote:
>> >> > Hello list,
>> >> >
>> >> > I'm trying to upgrade a self-hosted engine RHEV environment running
>> >> > 3.5/el6
>> >> > to 3.6/el7.  I'm following the process outlined in these two
>> documents:
>> >> >
>> >> >
>> >> >
>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/Upgrading_the_Self-Hosted_Engine_from_6_to_7.html
>> >> > https://access.redhat.com/solutions/2300331
>> >> >
>> >> > The problem I'm having is I don't seem to be able to apply the
>> >> > "InClusterUpgrade" policy (procedure 5.5, step 4).  I get the
>> following
>> >> > error:
>> >> >
>> >> > Can not start cluster upgrade mode, see below for details:
>> >> > VM HostedEngine with id 5ca9cb38-82e5-4eea-8ff6-e2bc33598211 is
>> >> > configured
>> >> > to be not migratable.
>> >> >
>> >> That is correct, only the he-agents on each host decide where the
>> >> hosted engine VM can start
>> >>
>> >> > But the HostedEngine VM is not one I can edit due to being
>> mid-upgrade.
>> >> > And
>> >> > even if I could, the setting its complaining about can't be managed
>> by
>> >> > the
>> >> > engine (I tried in another RHEV instance).
>> >> >
>> >> Also true, it is very limited what you can currently do with the
>> >> hosted engine VM.
>> >>
>> >>
>> >> > Is this a bug?  What am I missing to be able to move on?  As it seems
>> >> > now,
>> >> > the InClusterUpgrade scheduling policy is useless and can't actually
>> be
>> >> > used.
>> >>
>> >> That is indeed something the InClusterUpgrade does not take into
>> >> consideration. I will file a bug report.
>> >>
>> >>  But what you can do is the following:
>> >>
>> >> You can create a temporary cluster, move one host and the hosted
>> >> engine VM there, upgrade all hosts and then start the hosted-engine VM
>> >> in the original cluster again.
>> >>
>> >> The detailed steps are:
>> >>
>> >> 1) Enter the global maintenance mode
>> >> 2) Create a temporary cluster
>> >> 3) Put one of the hosted engine hosts which does not currently host
>> >> the engine into maintenance
>> >> 4) Move this host to the temporary cluster
>> >> 5) Stop the hosted-engine-vm with `hosted-engine --destroy-vm` (it
>> >> should not come up again since you are in maintenance mode)
>> >> 6) Start the hosted-egine-vm with `hosted-engine --start-vm` on the
>> >> host in the temporary cluster
>> >> 7) Now you can enable the InClusterUpgrade policy on your main cluster
>> >> 7) Proceed with your main cluster like described in
>> >>
>> >>
>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/Upgrading_the_Self-Hosted_Engine_from_6_to_7.html
>> >> 8) When all hosts are upgraded and InClusterUpgrade policy is disabled
>> >> again, move the hosted-engine-vm back to the original cluster
>> >> 9) Upgrade the last host
>> >> 10) Migrate the last host back
>> >> 11) Delete the temporary cluster
>> >> 12) Deactivate maintenance mode
>> >>
>> >> Adding Sandro and Roy to keep me honest.
>> >>
>> >> Roman
>> >>
>> >> >
>> >> > Thanks for any suggestions/help,
>> >> > 

Re: [ovirt-users] Ovirt 4.0 Engine Setup Failure

2016-06-24 Thread Melissa Mesler
Also, this is in the engine-setup logs:
2016-06-24 11:04:08 ERROR
otopi.plugins.ovirt_engine_common.base.core.misc misc._terminate:148
Execution of setup failed

On Fri, Jun 24, 2016, at 11:08 AM, Melissa Mesler wrote:
> I am doing a clean install of Ovirt 4.0. Upon executing engine-setup
> with all default values, it fails. This is the error during setup:
> 
> [ ERROR ] Failed to execute stage 'Misc configuration': Command
> '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute
> 
> 
> Any ideas?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt 4.0 Engine Setup Failure

2016-06-24 Thread Melissa Mesler
I am doing a clean install of Ovirt 4.0. Upon executing engine-setup
with all default values, it fails. This is the error during setup:

[ ERROR ] Failed to execute stage 'Misc configuration': Command
'/usr/bin/ovirt-aaa-jdbc-tool' failed to execute


Any ideas?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot deletion failure

2016-06-24 Thread Nicolás
Done, you can find it in [1]. Feel free to add/modify anything you 
consider inaccurate.


Thank you.

 [1]: https://bugzilla.redhat.com/show_bug.cgi?id=1349950

El 24/06/16 a las 12:39, Nir Soffer escribió:

On Fri, Jun 24, 2016 at 11:39 AM, Vinzenz Feenstra  wrote:

On Jun 24, 2016, at 9:10 AM, nico...@devels.es wrote:

Hi,

We're trying to delete an auto-generated live snapshot that has been created 
after migrating an online VM's storage to a different domain. oVirt version is 
3.6.6 and VDSM version is 4.17.28. Most relevant log lines are:

2016-06-24 07:50:36,252 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-10) 
[799a22e3] Failed in 'MergeVDS' method
2016-06-24 07:50:36,256 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(pool-7-thread-10) [799a22e3] Correlation ID: null, Call Stack: null, Custom 
Event ID: -1, Message: VDSM host2.domain.com command failed: Merge failed
2016-06-24 07:50:36,256 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (pool-7-thread-10) 
[799a22e3] Command 'MergeVDSCommand(HostName = host2.domain.com, 
MergeVDSCommandParameters:{runAsync='true', 
hostId='c31dca1a-e5bc-43f6-940f-6397e3ddbee4', 
vmId='7083832a-a1a2-42b7-961f-2e9c0dcd7e18', 
storagePoolId='fa155d43-4e68-486f-9f9d-ae3e3916cc4f', 
storageDomainId='9339780c-3667-4fef-aa13-9bec08957c5f', 
imageGroupId='65a0b0d4-5c96-4dd9-a31b-4d08e40a46a5', 
imageId='9eec9e8f-38db-4abf-b1c4-92fa9383f8b1', 
baseImageId='568b2f77-0ddf-4349-a45c-36fcb0edecb6', 
topImageId='9eec9e8f-38db-4abf-b1c4-92fa9383f8b1', bandwidth='0'})' execution 
failed: VDSGenericException: VDSErrorException: Failed to MergeVDS, error = 
Merge failed, code = 52
2016-06-24 07:50:36,256 ERROR [org.ovirt.engine.core.bll.MergeCommand] 
(pool-7-thread-10) [799a22e3] Engine exception thrown while sending merge 
command: org.ovirt.engine.core.common.errors.EngineException: EngineException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge 
failed, code = 52 (Failed with error mergeErr and code 52)

I'm attaching relevant logs (both for ovirt-engine and SPM's vdsm).

What could be the reason for this error?

@Nir,

that looks like some issue in VDSM please have a look

jsonrpc.Executor/3::ERROR::2016-06-24 07:50:36,216::vm::4955::virt.vm::(merge) 
vmId=`7083832a-a1a2-42b7-961f-2e9c0dcd7e18`::Live merge failed (job: 
3ea68e36-6d99-4af9-a54e-4c5b06df6a0f)
Traceback (most recent call last):
   File "/usr/share/vdsm/virt/vm.py", line 4951, in merge
 flags)
   File "/usr/share/vdsm/virt/virdomain.py", line 68, in f
 ret = attr(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, 
in wrapper
 ret = f(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
 return func(inst, *args, **kwargs)
   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 668, in 
blockCommit
 if ret == -1: raise libvirtError ('virDomainBlockCommit() failed', 
dom=self)
libvirtError: block copy still active: disk 'vda' already in active block job

This means there was a previous attempt to merge, and the block job
did not complete yet.

Engine should detect that there is an active block job and wait until
it completes.

This is probably engine bug, please file a bug and attach complete vdsm and
engine logs.

Ala, can you look at this?

Nir


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] 3.6 to 4.0 upgrade cert issue

2016-06-24 Thread Matt Haught
I just attempted an upgrade from 3.6 to 4.0 hosted engine and ran into an
problem. The hosted engine vm updated without issue, but when I go to login
to the web interface to continue the process I get:

sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to find
valid certification path to requested target

at every page load and I can't login. I have a feeling that the issue comes
from where I replaced the self-signed certs with trusted ca signed certs a
year ago. Is there a work around?

CentOS 7.2

Thanks,

--
Matt Haught
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] InClusterUpgrade Scheduling Policy

2016-06-24 Thread Scott
Hi Roman,

I made it through step 6 however it does look like the problem you
mentioned has occurred.  My engine VM is running on my host in the
temporary cluster.  The stats under Hosts show this.  But in the Virtual
Machines tab this VM still thinks its on my main cluster and I can't change
that setting.  Did you have a suggestion on how to work around this?
Thankfully only one of my RHEV instances has this upgrade path.

Thanks for your help,
Scott

On Fri, Jun 24, 2016 at 2:15 AM Roman Mohr  wrote:

> On Thu, Jun 23, 2016 at 10:26 PM, Scott  wrote:
> > Hi Roman,
> >
> > Thanks for the detailed steps.  I follow the idea you have outlined and I
> > think its easier than what I thought of (moving my self hosted engine
> back
> > to physical hardware, upgrading and moving it back to self hosted).  I
> will
> > give it a spin in my build RHEV cluster tomorrow and let you know how I
> get
> > on.
> >
>
> Thanks.
>
> The bug is here: https://bugzilla.redhat.com/show_bug.cgi?id=1349745.
>
> I thought about the solution and I see one possible problem with this
> approach. It might be that the engine still thinks that the VM is on
> the old cluster.
> Let me know if this happens, we can work around that too.
>
> Roman
>
> > Thanks again,
> > Scott
> >
> > On Thu, Jun 23, 2016 at 2:41 PM Roman Mohr  wrote:
> >>
> >> Hi Scott,
> >>
> >> On Thu, Jun 23, 2016 at 8:54 PM, Scott  wrote:
> >> > Hello list,
> >> >
> >> > I'm trying to upgrade a self-hosted engine RHEV environment running
> >> > 3.5/el6
> >> > to 3.6/el7.  I'm following the process outlined in these two
> documents:
> >> >
> >> >
> >> >
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/Upgrading_the_Self-Hosted_Engine_from_6_to_7.html
> >> > https://access.redhat.com/solutions/2300331
> >> >
> >> > The problem I'm having is I don't seem to be able to apply the
> >> > "InClusterUpgrade" policy (procedure 5.5, step 4).  I get the
> following
> >> > error:
> >> >
> >> > Can not start cluster upgrade mode, see below for details:
> >> > VM HostedEngine with id 5ca9cb38-82e5-4eea-8ff6-e2bc33598211 is
> >> > configured
> >> > to be not migratable.
> >> >
> >> That is correct, only the he-agents on each host decide where the
> >> hosted engine VM can start
> >>
> >> > But the HostedEngine VM is not one I can edit due to being
> mid-upgrade.
> >> > And
> >> > even if I could, the setting its complaining about can't be managed by
> >> > the
> >> > engine (I tried in another RHEV instance).
> >> >
> >> Also true, it is very limited what you can currently do with the
> >> hosted engine VM.
> >>
> >>
> >> > Is this a bug?  What am I missing to be able to move on?  As it seems
> >> > now,
> >> > the InClusterUpgrade scheduling policy is useless and can't actually
> be
> >> > used.
> >>
> >> That is indeed something the InClusterUpgrade does not take into
> >> consideration. I will file a bug report.
> >>
> >>  But what you can do is the following:
> >>
> >> You can create a temporary cluster, move one host and the hosted
> >> engine VM there, upgrade all hosts and then start the hosted-engine VM
> >> in the original cluster again.
> >>
> >> The detailed steps are:
> >>
> >> 1) Enter the global maintenance mode
> >> 2) Create a temporary cluster
> >> 3) Put one of the hosted engine hosts which does not currently host
> >> the engine into maintenance
> >> 4) Move this host to the temporary cluster
> >> 5) Stop the hosted-engine-vm with `hosted-engine --destroy-vm` (it
> >> should not come up again since you are in maintenance mode)
> >> 6) Start the hosted-egine-vm with `hosted-engine --start-vm` on the
> >> host in the temporary cluster
> >> 7) Now you can enable the InClusterUpgrade policy on your main cluster
> >> 7) Proceed with your main cluster like described in
> >>
> >>
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/Upgrading_the_Self-Hosted_Engine_from_6_to_7.html
> >> 8) When all hosts are upgraded and InClusterUpgrade policy is disabled
> >> again, move the hosted-engine-vm back to the original cluster
> >> 9) Upgrade the last host
> >> 10) Migrate the last host back
> >> 11) Delete the temporary cluster
> >> 12) Deactivate maintenance mode
> >>
> >> Adding Sandro and Roy to keep me honest.
> >>
> >> Roman
> >>
> >> >
> >> > Thanks for any suggestions/help,
> >> > Scott
> >> >
> >> > ___
> >> > Users mailing list
> >> > Users@ovirt.org
> >> > http://lists.ovirt.org/mailman/listinfo/users
> >> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot deletion failure

2016-06-24 Thread Nir Soffer
On Fri, Jun 24, 2016 at 11:39 AM, Vinzenz Feenstra  wrote:
>
>> On Jun 24, 2016, at 9:10 AM, nico...@devels.es wrote:
>>
>> Hi,
>>
>> We're trying to delete an auto-generated live snapshot that has been created 
>> after migrating an online VM's storage to a different domain. oVirt version 
>> is 3.6.6 and VDSM version is 4.17.28. Most relevant log lines are:
>>
>>2016-06-24 07:50:36,252 ERROR 
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] 
>> (pool-7-thread-10) [799a22e3] Failed in 'MergeVDS' method
>>2016-06-24 07:50:36,256 ERROR 
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>> (pool-7-thread-10) [799a22e3] Correlation ID: null, Call Stack: null, Custom 
>> Event ID: -1, Message: VDSM host2.domain.com command failed: Merge failed
>>2016-06-24 07:50:36,256 ERROR 
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] 
>> (pool-7-thread-10) [799a22e3] Command 'MergeVDSCommand(HostName = 
>> host2.domain.com, MergeVDSCommandParameters:{runAsync='true', 
>> hostId='c31dca1a-e5bc-43f6-940f-6397e3ddbee4', 
>> vmId='7083832a-a1a2-42b7-961f-2e9c0dcd7e18', 
>> storagePoolId='fa155d43-4e68-486f-9f9d-ae3e3916cc4f', 
>> storageDomainId='9339780c-3667-4fef-aa13-9bec08957c5f', 
>> imageGroupId='65a0b0d4-5c96-4dd9-a31b-4d08e40a46a5', 
>> imageId='9eec9e8f-38db-4abf-b1c4-92fa9383f8b1', 
>> baseImageId='568b2f77-0ddf-4349-a45c-36fcb0edecb6', 
>> topImageId='9eec9e8f-38db-4abf-b1c4-92fa9383f8b1', bandwidth='0'})' 
>> execution failed: VDSGenericException: VDSErrorException: Failed to 
>> MergeVDS, error = Merge failed, code = 52
>>2016-06-24 07:50:36,256 ERROR [org.ovirt.engine.core.bll.MergeCommand] 
>> (pool-7-thread-10) [799a22e3] Engine exception thrown while sending merge 
>> command: org.ovirt.engine.core.common.errors.EngineException: 
>> EngineException: 
>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
>> VDSGenericException: VDSErrorException: Failed to MergeVDS, error = Merge 
>> failed, code = 52 (Failed with error mergeErr and code 52)
>>
>> I'm attaching relevant logs (both for ovirt-engine and SPM's vdsm).
>>
>> What could be the reason for this error?
>
> @Nir,
>
> that looks like some issue in VDSM please have a look
>
> jsonrpc.Executor/3::ERROR::2016-06-24 
> 07:50:36,216::vm::4955::virt.vm::(merge) 
> vmId=`7083832a-a1a2-42b7-961f-2e9c0dcd7e18`::Live merge failed (job: 
> 3ea68e36-6d99-4af9-a54e-4c5b06df6a0f)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/vm.py", line 4951, in merge
> flags)
>   File "/usr/share/vdsm/virt/virdomain.py", line 68, in f
> ret = attr(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 
> 124, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 668, in 
> blockCommit
> if ret == -1: raise libvirtError ('virDomainBlockCommit() failed', 
> dom=self)
> libvirtError: block copy still active: disk 'vda' already in active block job

This means there was a previous attempt to merge, and the block job
did not complete yet.

Engine should detect that there is an active block job and wait until
it completes.

This is probably engine bug, please file a bug and attach complete vdsm and
engine logs.

Ala, can you look at this?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-24 Thread Stefano Danzi
How I can change self hosted engine configuration to mount directly 
gluster storage without pass through gluster NFS?


Maybe this solve

Il 24/06/2016 10.16, Stefano Danzi ha scritto:
After an additional yum clean all && yum update was updated some other 
rpms.


Something changed.
My setup has engine storage on gluster, but mounted with NFS.
Now gluster daemon don't automatically start at boot. After starting 
manually gluster the error is the same:


==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-19::ERROR::2016-06-24 
10:10:36,758::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) 
Error while serving connection

Traceback (most recent call last):
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
line 166, in handle

data)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
line 299, in _dispatch

.set_storage_domain(client, sd_type, **options)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", 
line 66, in set_storage_domain

self._backends[client].connect()
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", 
line 400, in connect

volUUID=volume.volume_uuid
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", 
line 245, in _get_volume_path

volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, 
in connect

sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 101] Network is unreachable


VDSM.log

jsonrpc.Executor/5::DEBUG::2016-06-24 
10:10:21,694::task::995::Storage.TaskManager.Task::(_decref) 
Task=`5c3b6f30-d3a8-431e-9dd0-8df79b171709`::ref 0

aborting False
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Following parameters ['type'] were not recogn

ized
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Provided value "2" not defined in DiskType en

um for Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter capacity is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Required property allocType is not provided w

hen calling Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter mtime is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter ctime is not int type
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter truesize is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter apparentsize is not uint type
jsonrpc.Executor/5::DEBUG::2016-06-24 
10:10:21,695::__init__::550::jsonrpc.JsonRpcServer::(_serveRequest) 
Return 'Volume.getInfo' in bridge with {'sta
tus': 'OK', 'domain': '46f55a31-f35f-465c-b3e2-df45c05e06a7', 
'voltype': 'LEAF', 'description': 'hosted-engine.lockspace', 'parent': 
'--00
00--', 'format': 'RAW', 'image': 
'6838c974-7656-4b40-87cc-f562ff0b2a4c', 'ctime': '1423074433', 
'disktype': '2', 'legality': 'LEGAL',
'mtime': '0', 'apparentsize': '1048576', 'children': [], 'pool': '', 
'capacity': '1048576', 'uuid': 
u'c66a14d3-112a-4104-9025-76bb2e7ad9f1', 'truesize

': '1048576', 'type': 'PREALLOCATED'}
JsonRpc (StompReactor)::ERROR::2016-06-24 
10:10:36,514::betterAsyncore::113::vds.dispatcher::(recv) SSL error 
during reading data: (104, 'Connection r

eset by peer')
JsonRpc (StompReactor)::WARNING::2016-06-24 
10:10:36,515::betterAsyncore::154::vds.dispatcher::(log_info) 
unhandled close event
JsonRpc (StompReactor)::ERROR::2016-06-24 

Re: [ovirt-users] Is oVirt 3.6 with GlusterFS 3.7 recommended for production usage?

2016-06-24 Thread Sahina Bose



On 06/24/2016 03:16 PM, Dewey Du wrote:

You mean we dont need to add new storage domain for each ovirt-node?
Maybe I choose the "Use Host" option wrongly? I thought this is the 
way to tie the storage of "Host-01" to GlusterFS by selecting 
"Host-01" on the "Use Host" option.
And I should do the same thing for "Host-02", which adding new storage 
domain with choosing "Host-02" on the "Use Host" option.

So do others. Host-03, Host-04 ...


While creating storage domain, the host provided in "Use host" is the 
one used to initially setup the connection to the storage.
In case of gluster storage domain, if like you said, you want to 
separate storage servers from hypervisor hosts - you would set up the 
gluster volume using bricks on Host-04, Host-05, Host-06.
And in oVirt create a cluster with your hypervisor hosts - Host-01, 
Host-02 and Host-03. To create a data storage domain - use any of the 
hosts (01-03) and provide path to gluster volume as host-04:/


In case of hyperconverged setup, the gluster volume can be setup on 
Host-01, Host-02, Host-03 itself. Check 
http://blogs-ramesh.blogspot.in/2016/01/ovirt-and-gluster-hyperconvergence.html






On Fri, Jun 24, 2016 at 3:15 PM, Sahina Bose > wrote:




On 06/24/2016 11:25 AM, Dewey Du wrote:

I prefer deploying as a hyperconverged setup, but it is still
under experiment, right?


Hyperconverged deployment with oVirt and Gluster has been tested
and is currently offered as a preview feature with guidance on
do's/don'ts & recommended gluster volume settings. We're working
on enhancing this further to make it easier to setup, integrate
better in the oVirt UI in 4.1



So I try to separate vm service and storage. I added new storage
domain( Domain Type "Data", Storage Type "GlusterFS", Use Host
"host-01". But then I can't add another new storage domain(
Domain Type "Data", Storage Type "GlusterFS", Use Host
"host-02"). The input field "path" is unwritable (gray) on the
Popup New Domain Window.

My question is, should we add a new storage domain for each
ovirt-node?


No, you dont need to.
This seems like a bug in the Create storage domain UI. Does
refreshing your browser fix the greyed-out input field? Any errors
seen in engine logs?






On Tue, Jun 21, 2016 at 7:15 PM, Sahina Bose > wrote:

Make sure that you use replica 3 gluster volumes for storing
VM images. Are you planning to deploy as a hyperconverged setup?
Either way, try and use the latest ovirt 3.6 and glusterfs
3.7 (3.7.12 that addresses bugs related to sharding and
o-direct is due to be released soon)

On 06/21/2016 07:08 AM, Dewey Du wrote:

I want to deploy oVirt 3.6 with GlusterFS 3.7 to my online
servers. Is it recommended for production usage?

Thx.


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is oVirt 3.6 with GlusterFS 3.7 recommended for production usage?

2016-06-24 Thread Dewey Du
You mean we dont need to add new storage domain for each ovirt-node?
Maybe I choose the "Use Host" option wrongly? I thought this is the way to
tie the storage of "Host-01" to GlusterFS by selecting "Host-01" on the
"Use Host" option.
And I should do the same thing for "Host-02", which adding new storage
domain with choosing "Host-02" on the "Use Host" option.
So do others. Host-03, Host-04 ...



On Fri, Jun 24, 2016 at 3:15 PM, Sahina Bose  wrote:

>
>
> On 06/24/2016 11:25 AM, Dewey Du wrote:
>
> I prefer deploying as a hyperconverged setup, but it is still under 
> experiment,
> right?
>
>
> Hyperconverged deployment with oVirt and Gluster has been tested and is
> currently offered as a preview feature with guidance on do's/don'ts &
> recommended gluster volume settings. We're working on enhancing this
> further to make it easier to setup, integrate better in the oVirt UI in 4.1
>
>
> So I try to separate vm service and storage. I added new storage domain(
> Domain Type "Data", Storage Type "GlusterFS", Use Host "host-01". But then I
> can't add another new storage domain( Domain Type "Data", Storage Type
> "GlusterFS", Use Host "host-02"). The input field "path" is unwritable
> (gray) on the Popup New Domain Window.
>
> My question is, should we add a new storage domain for each ovirt-node?
>
>
> No, you dont need to.
> This seems like a bug in the Create storage domain UI. Does refreshing
> your browser fix the greyed-out input field? Any errors seen in engine logs?
>
>
>
>
>
> On Tue, Jun 21, 2016 at 7:15 PM, Sahina Bose  wrote:
>
>> Make sure that you use replica 3 gluster volumes for storing VM images.
>> Are you planning to deploy as a hyperconverged setup?
>> Either way, try and use the latest ovirt 3.6 and glusterfs 3.7 (3.7.12
>> that addresses bugs related to sharding and o-direct is due to be released
>> soon)
>>
>> On 06/21/2016 07:08 AM, Dewey Du wrote:
>>
>> I want to deploy oVirt 3.6 with GlusterFS 3.7 to my online servers. Is it
>> recommended for production usage?
>>
>> Thx.
>>
>>
>> ___
>> Users mailing 
>> listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to start self hosted engine after upgrading oVirt to 4.0

2016-06-24 Thread Stefano Danzi

After an additional yum clean all && yum update was updated some other rpms.

Something changed.
My setup has engine storage on gluster, but mounted with NFS.
Now gluster daemon don't automatically start at boot. After starting 
manually gluster the error is the same:


==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-19::ERROR::2016-06-24 
10:10:36,758::listener::182::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) 
Error while serving connection

Traceback (most recent call last):
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
line 166, in handle

data)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", 
line 299, in _dispatch

.set_storage_domain(client, sd_type, **options)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", 
line 66, in set_storage_domain

self._backends[client].connect()
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", 
line 400, in connect

volUUID=volume.volume_uuid
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", 
line 245, in _get_volume_path

volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, 
in connect

sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 101] Network is unreachable


VDSM.log

jsonrpc.Executor/5::DEBUG::2016-06-24 
10:10:21,694::task::995::Storage.TaskManager.Task::(_decref) 
Task=`5c3b6f30-d3a8-431e-9dd0-8df79b171709`::ref 0

aborting False
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Following parameters ['type'] were not recogn

ized
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Provided value "2" not defined in DiskType en

um for Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter capacity is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Required property allocType is not provided w

hen calling Volume.getInfo
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter mtime is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,694::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter ctime is not int type
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter truesize is not uint type
jsonrpc.Executor/5::WARNING::2016-06-24 
10:10:21,695::vdsmapi::143::SchemaCache::(_report_inconsistency) 
Parameter apparentsize is not uint type
jsonrpc.Executor/5::DEBUG::2016-06-24 
10:10:21,695::__init__::550::jsonrpc.JsonRpcServer::(_serveRequest) 
Return 'Volume.getInfo' in bridge with {'sta
tus': 'OK', 'domain': '46f55a31-f35f-465c-b3e2-df45c05e06a7', 'voltype': 
'LEAF', 'description': 'hosted-engine.lockspace', 'parent': 
'--00
00--', 'format': 'RAW', 'image': 
'6838c974-7656-4b40-87cc-f562ff0b2a4c', 'ctime': '1423074433', 
'disktype': '2', 'legality': 'LEGAL',
'mtime': '0', 'apparentsize': '1048576', 'children': [], 'pool': '', 
'capacity': '1048576', 'uuid': u'c66a14d3-112a-4104-9025-76bb2e7ad9f1', 
'truesize

': '1048576', 'type': 'PREALLOCATED'}
JsonRpc (StompReactor)::ERROR::2016-06-24 
10:10:36,514::betterAsyncore::113::vds.dispatcher::(recv) SSL error 
during reading data: (104, 'Connection r

eset by peer')
JsonRpc (StompReactor)::WARNING::2016-06-24 
10:10:36,515::betterAsyncore::154::vds.dispatcher::(log_info) unhandled 
close event
JsonRpc (StompReactor)::ERROR::2016-06-24 
10:10:43,807::betterAsyncore::132::vds.dispatcher::(send) SSL error 
during sending data: (104, 'Connection r

eset by peer')
JsonRpc (StompReactor)::ERROR::2016-06-24 
10:10:43,959::betterAsyncore::113::vds.dispatcher::(recv) SSL error 

Re: [ovirt-users] Is oVirt 3.6 with GlusterFS 3.7 recommended for production usage?

2016-06-24 Thread Sahina Bose



On 06/24/2016 11:25 AM, Dewey Du wrote:
I prefer deploying as a hyperconverged setup, but it is still under 
experiment, right?


Hyperconverged deployment with oVirt and Gluster has been tested and is 
currently offered as a preview feature with guidance on do's/don'ts & 
recommended gluster volume settings. We're working on enhancing this 
further to make it easier to setup, integrate better in the oVirt UI in 4.1




So I try to separate vm service and storage. I added new storage 
domain( Domain Type "Data", Storage Type "GlusterFS", Use Host 
"host-01". But then I can't add another new storage domain( Domain 
Type "Data", Storage Type "GlusterFS", Use Host "host-02"). The input 
field "path" is unwritable (gray) on the Popup New Domain Window.


My question is, should we add a new storage domain for each ovirt-node?


No, you dont need to.
This seems like a bug in the Create storage domain UI. Does refreshing 
your browser fix the greyed-out input field? Any errors seen in engine logs?







On Tue, Jun 21, 2016 at 7:15 PM, Sahina Bose > wrote:


Make sure that you use replica 3 gluster volumes for storing VM
images. Are you planning to deploy as a hyperconverged setup?
Either way, try and use the latest ovirt 3.6 and glusterfs 3.7
(3.7.12 that addresses bugs related to sharding and o-direct is
due to be released soon)

On 06/21/2016 07:08 AM, Dewey Du wrote:

I want to deploy oVirt 3.6 with GlusterFS 3.7 to my online
servers. Is it recommended for production usage?

Thx.


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] InClusterUpgrade Scheduling Policy

2016-06-24 Thread Roman Mohr
On Thu, Jun 23, 2016 at 10:26 PM, Scott  wrote:
> Hi Roman,
>
> Thanks for the detailed steps.  I follow the idea you have outlined and I
> think its easier than what I thought of (moving my self hosted engine back
> to physical hardware, upgrading and moving it back to self hosted).  I will
> give it a spin in my build RHEV cluster tomorrow and let you know how I get
> on.
>

Thanks.

The bug is here: https://bugzilla.redhat.com/show_bug.cgi?id=1349745.

I thought about the solution and I see one possible problem with this
approach. It might be that the engine still thinks that the VM is on
the old cluster.
Let me know if this happens, we can work around that too.

Roman

> Thanks again,
> Scott
>
> On Thu, Jun 23, 2016 at 2:41 PM Roman Mohr  wrote:
>>
>> Hi Scott,
>>
>> On Thu, Jun 23, 2016 at 8:54 PM, Scott  wrote:
>> > Hello list,
>> >
>> > I'm trying to upgrade a self-hosted engine RHEV environment running
>> > 3.5/el6
>> > to 3.6/el7.  I'm following the process outlined in these two documents:
>> >
>> >
>> > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/Upgrading_the_Self-Hosted_Engine_from_6_to_7.html
>> > https://access.redhat.com/solutions/2300331
>> >
>> > The problem I'm having is I don't seem to be able to apply the
>> > "InClusterUpgrade" policy (procedure 5.5, step 4).  I get the following
>> > error:
>> >
>> > Can not start cluster upgrade mode, see below for details:
>> > VM HostedEngine with id 5ca9cb38-82e5-4eea-8ff6-e2bc33598211 is
>> > configured
>> > to be not migratable.
>> >
>> That is correct, only the he-agents on each host decide where the
>> hosted engine VM can start
>>
>> > But the HostedEngine VM is not one I can edit due to being mid-upgrade.
>> > And
>> > even if I could, the setting its complaining about can't be managed by
>> > the
>> > engine (I tried in another RHEV instance).
>> >
>> Also true, it is very limited what you can currently do with the
>> hosted engine VM.
>>
>>
>> > Is this a bug?  What am I missing to be able to move on?  As it seems
>> > now,
>> > the InClusterUpgrade scheduling policy is useless and can't actually be
>> > used.
>>
>> That is indeed something the InClusterUpgrade does not take into
>> consideration. I will file a bug report.
>>
>>  But what you can do is the following:
>>
>> You can create a temporary cluster, move one host and the hosted
>> engine VM there, upgrade all hosts and then start the hosted-engine VM
>> in the original cluster again.
>>
>> The detailed steps are:
>>
>> 1) Enter the global maintenance mode
>> 2) Create a temporary cluster
>> 3) Put one of the hosted engine hosts which does not currently host
>> the engine into maintenance
>> 4) Move this host to the temporary cluster
>> 5) Stop the hosted-engine-vm with `hosted-engine --destroy-vm` (it
>> should not come up again since you are in maintenance mode)
>> 6) Start the hosted-egine-vm with `hosted-engine --start-vm` on the
>> host in the temporary cluster
>> 7) Now you can enable the InClusterUpgrade policy on your main cluster
>> 7) Proceed with your main cluster like described in
>>
>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Self-Hosted_Engine_Guide/Upgrading_the_Self-Hosted_Engine_from_6_to_7.html
>> 8) When all hosts are upgraded and InClusterUpgrade policy is disabled
>> again, move the hosted-engine-vm back to the original cluster
>> 9) Upgrade the last host
>> 10) Migrate the last host back
>> 11) Delete the temporary cluster
>> 12) Deactivate maintenance mode
>>
>> Adding Sandro and Roy to keep me honest.
>>
>> Roman
>>
>> >
>> > Thanks for any suggestions/help,
>> > Scott
>> >
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Snapshot deletion failure

2016-06-24 Thread nicolas

Hi,

We're trying to delete an auto-generated live snapshot that has been 
created after migrating an online VM's storage to a different domain. 
oVirt version is 3.6.6 and VDSM version is 4.17.28. Most relevant log 
lines are:


2016-06-24 07:50:36,252 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] 
(pool-7-thread-10) [799a22e3] Failed in 'MergeVDS' method
2016-06-24 07:50:36,256 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(pool-7-thread-10) [799a22e3] Correlation ID: null, Call Stack: null, 
Custom Event ID: -1, Message: VDSM host2.domain.com command failed: 
Merge failed
2016-06-24 07:50:36,256 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] 
(pool-7-thread-10) [799a22e3] Command 'MergeVDSCommand(HostName = 
host2.domain.com, MergeVDSCommandParameters:{runAsync='true', 
hostId='c31dca1a-e5bc-43f6-940f-6397e3ddbee4', 
vmId='7083832a-a1a2-42b7-961f-2e9c0dcd7e18', 
storagePoolId='fa155d43-4e68-486f-9f9d-ae3e3916cc4f', 
storageDomainId='9339780c-3667-4fef-aa13-9bec08957c5f', 
imageGroupId='65a0b0d4-5c96-4dd9-a31b-4d08e40a46a5', 
imageId='9eec9e8f-38db-4abf-b1c4-92fa9383f8b1', 
baseImageId='568b2f77-0ddf-4349-a45c-36fcb0edecb6', 
topImageId='9eec9e8f-38db-4abf-b1c4-92fa9383f8b1', bandwidth='0'})' 
execution failed: VDSGenericException: VDSErrorException: Failed to 
MergeVDS, error = Merge failed, code = 52
2016-06-24 07:50:36,256 ERROR 
[org.ovirt.engine.core.bll.MergeCommand] (pool-7-thread-10) [799a22e3] 
Engine exception thrown while sending merge command: 
org.ovirt.engine.core.common.errors.EngineException: EngineException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to MergeVDS, error = 
Merge failed, code = 52 (Failed with error mergeErr and code 52)


I'm attaching relevant logs (both for ovirt-engine and SPM's vdsm).

What could be the reason for this error?

Thanks.

snapshot-deletion.tar.gz
Description: GNU Zip compressed data
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.0.0 - Cluster Compatibility Version

2016-06-24 Thread Michal Skrivanek

> On 24 Jun 2016, at 08:15, Sandro Bonazzola  wrote:
> 
> 
> 
> On Fri, Jun 24, 2016 at 12:08 AM, Scott  > wrote:
> Hello list,
> 
> I've successfully upgraded oVirt to 4.0 from 3.6 on my engine and three 
> hosts.  However, it doesn't look like I can change the Cluster Compatibility 
> Version to 4.0.  It tells me I need to shut down all the VMs in the cluster.  
> Except I use Hosted Engine.  How do I change the cluster compatibility 
> version when the engine is down?
> 
> We're on it, tracked by https://bugzilla.redhat.com/show_bug.cgi?id=1348907 
>  (During cluster level 
> upgrade - warn and mark VMs as pending a configuration change when they are 
> running)

While that’s in progress, I’m still not clear whether that is a real blocker 
for this flow or not as Roy/Didi was saying due to HE having it’s own separate 
configuration the set of hosts it runs on does not necessarily have to follow 
the engine’s view of the cluster. So I don’t really know what’s required for HE 
to change its cluster

Thanks,
michal

> 
> 
>  
> 
> Thanks in advance,
> Scott
> 
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users 
> 
> 
> 
> 
> 
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.0.0 - Cluster Compatibility Version

2016-06-24 Thread Sandro Bonazzola
On Fri, Jun 24, 2016 at 12:08 AM, Scott  wrote:

> Hello list,
>
> I've successfully upgraded oVirt to 4.0 from 3.6 on my engine and three
> hosts.  However, it doesn't look like I can change the Cluster
> Compatibility Version to 4.0.  It tells me I need to shut down all the VMs
> in the cluster.  Except I use Hosted Engine.  How do I change the cluster
> compatibility version when the engine is down?
>

We're on it, tracked by https://bugzilla.redhat.com/show_bug.cgi?id=1348907
(During cluster level upgrade - warn and mark VMs as pending a
configuration change when they are running)




>
> Thanks in advance,
> Scott
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users