Re: [ovirt-users] HA cluster

2016-01-04 Thread Budur Nagaraju
I get the below output:

[root@he ~]# lsmod |grep kvm
kvm_intel  55624  0
kvm   345460  1 kvm_intel
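
A couple of further quick checks that are commonly used to confirm KVM is
actually usable (a sketch; the vmx/svm flag names assume an x86 host):

[root@he ~]# egrep -c '(vmx|svm)' /proc/cpuinfo   # >0 means the CPU exposes virtualization extensions
[root@he ~]# ls -l /dev/kvm                       # the device node qemu/libvirt actually opens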


On Tue, Jan 5, 2016 at 12:06 AM, Budur Nagaraju  wrote:

> Is there any command to check whether KVM is available or not?
>
> Below is the output when I run the rpm command.
>
> [root@he /]# rpm -qa |grep kvm
> qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64
>
>
> On Mon, Jan 4, 2016 at 8:24 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Mon, Jan 4, 2016 at 3:06 PM, Budur Nagaraju  wrote:
>>
>>> Hi Simone
>>>
>>> I have installed a KVM server on the physical machine, installed a
>>> CentOS 6.7 VM on that server, and tried to deploy hosted-engine in the
>>> VM. I am getting the same error; the logs are below.
>>>
>>> http://pastebin.com/pg6k8irV
>>>
>>> Can you please help me?
>>>
>>>
>> The issue is here:
>>
>> Thread-84::ERROR::2016-01-04
>> 19:31:42,304::vm::2358::vm.Vm::(_startUnderlyingVm)
>> vmId=`3d3edc54-ceae-43e5-84a4-50a21c31d9cd`::The vm start process failed
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
>> self._run()
>>   File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
>> self._connection.createXML(domxml, flags),
>>   File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line
>> 119, in wrapper
>> ret = f(*args, **kwargs)
>>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
>> createXML
>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>> conn=self)
>> libvirtError: unsupported configuration: Domain requires KVM, but it is
>> not available. Check that virtualization is enabled in the host BIOS, and
>> host configuration is setup to load the kvm modules.
>>
>> libvirt refuses to start the engine VM because KVM is not available.
>> Can you please check it?
>>
>>
>>> Thanks,
>>> Nagaraju
>>>
>>>
>>> On Wed, Dec 2, 2015 at 5:35 PM, Simone Tiraboschi 
>>> wrote:
>>>


 On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju 
 wrote:

> I have installed KVM in a nested environment on ESXi 6.x; is that
> recommended?
>

 I often use KVM over KVM in nested environments, but honestly I have never
 tried to run KVM over ESXi; I suspect that all of your issues come from
 there.


> Apart from hosted-engine, is there any other way to configure an
> engine HA cluster?
>

 Nothing else from the project. You can use two external VMs in a cluster
 with Pacemaker, but that is completely up to you.


>
>
> -Nagaraju
>
>
> On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi  > wrote:
>
>>
>>
>> On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju 
>> wrote:
>>
>>> Please find the logs at the URL below:
>>>
>>> http://pastebin.com/ZeKyyFbN
>>>
>>
>> OK, the issue is here:
>>
>> Thread-88::ERROR::2015-12-02
>> 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm)
>> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
>> self._run()
>>   File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
>> self._connection.createXML(domxml, flags),
>>   File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py",
>> line 119, in wrapper
>> ret = f(*args, **kwargs)
>>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
>> createXML
>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>> conn=self)
>> libvirtError: unsupported configuration: Domain requires KVM, but it
>> is not available. Check that virtualization is enabled in the host BIOS,
>> and host configuration is setup to load the kvm modules.
>> Thread-88::DEBUG::2015-12-02
>> 15:06:27,751::vm::2813::vm.Vm::(setDownStatus)
>> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down:
>> unsupported configuration: Domain requires KVM, but it is not available.
>> Check that virtualization is enabled in the host BIOS, and host
>> configuration is setup to load the kvm modules. (code=1)
>>
>> But it's pretty strange, because hosted-engine-setup already explicitly
>> checks for virtualization support and exits with a clear error if it is
>> missing. Did you play with the kvm module while hosted-engine-setup was
>> running?
>>
>> Can you please share the hosted-engine-setup logs?
>>
>>
>>>
>>> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi <
>>> stira...@redhat.com> wrote:
>>>


 On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan 
 wrote:

> Maybe it even makes sense to open a Bugzilla ticket already. Better
> safe than sorry.
>

 We still need at least one log file to understand what happened.

Re: [ovirt-users] HA cluster

2016-01-04 Thread Budur Nagaraju
Is there any command to check whether KVM is available or not?

Below is the output when I run the rpm command.

[root@he /]# rpm -qa |grep kvm
qemu-kvm-rhev-0.12.1.2-2.479.el6_7.2.x86_64


On Mon, Jan 4, 2016 at 8:24 PM, Simone Tiraboschi 
wrote:

>
>
> On Mon, Jan 4, 2016 at 3:06 PM, Budur Nagaraju  wrote:
>
>> Hi Simone
>>
>> I have installed a KVM server on the physical machine, installed a
>> CentOS 6.7 VM on that server, and tried to deploy hosted-engine in the
>> VM. I am getting the same error; the logs are below.
>>
>> http://pastebin.com/pg6k8irV
>>
>> Can you please help me?
>>
>>
> The issue is here:
>
> Thread-84::ERROR::2016-01-04
> 19:31:42,304::vm::2358::vm.Vm::(_startUnderlyingVm)
> vmId=`3d3edc54-ceae-43e5-84a4-50a21c31d9cd`::The vm start process failed
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
> self._run()
>   File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
> self._connection.createXML(domxml, flags),
>   File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line
> 119, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
> createXML
> if ret is None:raise libvirtError('virDomainCreateXML() failed',
> conn=self)
> libvirtError: unsupported configuration: Domain requires KVM, but it is
> not available. Check that virtualization is enabled in the host BIOS, and
> host configuration is setup to load the kvm modules.
>
> libvirt refuses to start the engine VM because KVM is not available.
> Can you please check it?
>
>
>> Thanks,
>> Nagaraju
>>
>>
>> On Wed, Dec 2, 2015 at 5:35 PM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju 
>>> wrote:
>>>
 I have installed KVM in a nested environment on ESXi 6.x; is that
 recommended?

>>>
>>> I often use KVM over KVM in nested environments, but honestly I have
>>> never tried to run KVM over ESXi; I suspect that all of your issues come
>>> from there.
>>>
>>>
 Apart from hosted-engine, is there any other way to configure an
 engine HA cluster?

>>>
>>> Nothing else from the project. You can use two external VMs in a cluster
>>> with Pacemaker, but that is completely up to you.
>>>
>>>


 -Nagaraju


 On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi 
 wrote:

>
>
> On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju 
> wrote:
>
>> Please find the logs at the URL below:
>>
>> http://pastebin.com/ZeKyyFbN
>>
>
> OK, the issue is here:
>
> Thread-88::ERROR::2015-12-02
> 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm)
> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
> self._run()
>   File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
> self._connection.createXML(domxml, flags),
>   File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py",
> line 119, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
> createXML
> if ret is None:raise libvirtError('virDomainCreateXML() failed',
> conn=self)
> libvirtError: unsupported configuration: Domain requires KVM, but it
> is not available. Check that virtualization is enabled in the host BIOS,
> and host configuration is setup to load the kvm modules.
> Thread-88::DEBUG::2015-12-02
> 15:06:27,751::vm::2813::vm.Vm::(setDownStatus)
> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down:
> unsupported configuration: Domain requires KVM, but it is not available.
> Check that virtualization is enabled in the host BIOS, and host
> configuration is setup to load the kvm modules. (code=1)
>
> But it's pretty strange, because hosted-engine-setup already explicitly
> checks for virtualization support and exits with a clear error if it is
> missing. Did you play with the kvm module while hosted-engine-setup was
> running?
>
> Can you please share the hosted-engine-setup logs?
>
>
>>
>> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi <
>> stira...@redhat.com> wrote:
>>
>>>
>>>
>>> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan 
>>> wrote:
>>>
 Maybe it even makes sense to open a Bugzilla ticket already. Better
 safe than sorry.

>>>
>>> We still need at least one log file to understand what happened.
>>>
>>>
 On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
 wrote:

>
> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju <
> nbud...@gmail.com> wrote:
>
>> I do not know what logs you are expecting. The logs which I got are
>> pasted in the mail; if you require them on pastebin, let me know and I
>> will upload them there.

Re: [ovirt-users] HA cluster

2016-01-04 Thread Simone Tiraboschi
On Mon, Jan 4, 2016 at 3:06 PM, Budur Nagaraju  wrote:

> Hi Simone
>
> I have installed a KVM server on the physical machine, installed a
> CentOS 6.7 VM on that server, and tried to deploy hosted-engine in the
> VM. I am getting the same error; the logs are below.
>
> http://pastebin.com/pg6k8irV
>
> Can you please help me?
>
>
The issue is here:

Thread-84::ERROR::2016-01-04
19:31:42,304::vm::2358::vm.Vm::(_startUnderlyingVm)
vmId=`3d3edc54-ceae-43e5-84a4-50a21c31d9cd`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line
119, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed',
conn=self)
libvirtError: unsupported configuration: Domain requires KVM, but it is not
available. Check that virtualization is enabled in the host BIOS, and host
configuration is setup to load the kvm modules.

libvirt refuses to start the engine VM because KVM is not available.
Can you please check it?
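
For what it's worth, a minimal sketch of what "check it" usually means here
(assuming an Intel CPU; substitute kvm_amd on AMD): try to load the module
and see whether the device node appears:

[root@he ~]# modprobe kvm_intel && ls -l /dev/kvm

In a nested setup the outer hypervisor must also expose the virtualization
extensions to this VM, otherwise the module will refuse to load inside it.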


> Thanks,
> Nagaraju
>
>
> On Wed, Dec 2, 2015 at 5:35 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju 
>> wrote:
>>
>>> I have installed KVM in a nested environment on ESXi 6.x; is that
>>> recommended?
>>>
>>
>> I often use KVM over KVM in nested environments, but honestly I have never
>> tried to run KVM over ESXi; I suspect that all of your issues come from there.
>>
>>
>>> Apart from hosted-engine, is there any other way to configure an
>>> engine HA cluster?
>>>
>>
>> Nothing else from the project. You can use two external VMs in a cluster
>> with Pacemaker, but that is completely up to you.
>>
>>
>>>
>>>
>>> -Nagaraju
>>>
>>>
>>> On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi 
>>> wrote:
>>>


 On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju 
 wrote:

> Please find the logs at the URL below:
>
> http://pastebin.com/ZeKyyFbN
>

 OK, the issue is here:

 Thread-88::ERROR::2015-12-02
 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm)
 vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
 Traceback (most recent call last):
   File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
 self._run()
   File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
 self._connection.createXML(domxml, flags),
   File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py",
 line 119, in wrapper
 ret = f(*args, **kwargs)
   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
 createXML
 if ret is None:raise libvirtError('virDomainCreateXML() failed',
 conn=self)
 libvirtError: unsupported configuration: Domain requires KVM, but it is
 not available. Check that virtualization is enabled in the host BIOS, and
 host configuration is setup to load the kvm modules.
 Thread-88::DEBUG::2015-12-02
 15:06:27,751::vm::2813::vm.Vm::(setDownStatus)
 vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down:
 unsupported configuration: Domain requires KVM, but it is not available.
 Check that virtualization is enabled in the host BIOS, and host
 configuration is setup to load the kvm modules. (code=1)

 But it's pretty strange, because hosted-engine-setup already explicitly
 checks for virtualization support and exits with a clear error if it is
 missing. Did you play with the kvm module while hosted-engine-setup was
 running?

 Can you please share the hosted-engine-setup logs?


>
> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi <
> stira...@redhat.com> wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan 
>> wrote:
>>
>>> Maybe it even makes sense to open a Bugzilla ticket already. Better
>>> safe than sorry.
>>>
>>
>> We still need at least one log file to understand what happened.
>>
>>
>>> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
>>> wrote:
>>>

 On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju >>> > wrote:

> I do not know what logs you are expecting. The logs which I got are
> pasted in the mail; if you require them on pastebin, let me know and I
> will upload them there.
>


 Please run the sosreport utility and share the resulting archive
 wherever you prefer.
 You can follow this guide:
 http://www.linuxtechi.com/how-to-create-sosreport-in-linux/

>
>
> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola <
> sbona...@redhat.com> wrote:

Re: [ovirt-users] HA cluster

2016-01-04 Thread Budur Nagaraju
Hi Simone

I have installed a KVM server on the physical machine, installed a
CentOS 6.7 VM on that server, and tried to deploy hosted-engine in the
VM. I am getting the same error; the logs are below.

http://pastebin.com/pg6k8irV

Can you please help me?

Thanks,
Nagaraju


On Wed, Dec 2, 2015 at 5:35 PM, Simone Tiraboschi 
wrote:

>
>
> On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju  wrote:
>
>> I have installed KVM in a nested environment on ESXi 6.x; is that
>> recommended?
>>
>
> I often use KVM over KVM in nested environments, but honestly I have never
> tried to run KVM over ESXi; I suspect that all of your issues come from there.
>
>
>> Apart from hosted-engine, is there any other way to configure an
>> engine HA cluster?
>>
>
> Nothing else from the project. You can use two external VMs in a cluster
> with Pacemaker, but that is completely up to you.
>
>
>>
>>
>> -Nagaraju
>>
>>
>> On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju 
>>> wrote:
>>>
 Please find the logs at the URL below:

 http://pastebin.com/ZeKyyFbN

>>>
>>> OK, the issue is here:
>>>
>>> Thread-88::ERROR::2015-12-02
>>> 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm)
>>> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
>>> self._run()
>>>   File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
>>> self._connection.createXML(domxml, flags),
>>>   File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py",
>>> line 119, in wrapper
>>> ret = f(*args, **kwargs)
>>>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
>>> createXML
>>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>>> conn=self)
>>> libvirtError: unsupported configuration: Domain requires KVM, but it is
>>> not available. Check that virtualization is enabled in the host BIOS, and
>>> host configuration is setup to load the kvm modules.
>>> Thread-88::DEBUG::2015-12-02
>>> 15:06:27,751::vm::2813::vm.Vm::(setDownStatus)
>>> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down:
>>> unsupported configuration: Domain requires KVM, but it is not available.
>>> Check that virtualization is enabled in the host BIOS, and host
>>> configuration is setup to load the kvm modules. (code=1)
>>>
>>> But it's pretty strange, because hosted-engine-setup already explicitly
>>> checks for virtualization support and exits with a clear error if it is
>>> missing. Did you play with the kvm module while hosted-engine-setup was
>>> running?
>>>
>>> Can you please share the hosted-engine-setup logs?
>>>
>>>

 On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi >>> > wrote:

>
>
> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan 
> wrote:
>
>> Maybe it even makes sense to open a Bugzilla ticket already. Better safe
>> than sorry.
>>
>
> We still need at least one log file to understand what happened.
>
>
>> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
>> wrote:
>>
>>>
>>> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
>>> wrote:
>>>
 I do not know what logs you are expecting. The logs which I got are
 pasted in the mail; if you require them on pastebin, let me know and I
 will upload them there.

>>>
>>>
>>> Please run the sosreport utility and share the resulting archive
>>> wherever you prefer.
>>> You can follow this guide:
>>> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
>>>


 On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola <
 sbona...@redhat.com> wrote:

>
>
> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju  > wrote:
>
>> I got only 10 lines in the vdsm logs; they are below.
>>
>>
> Can you please provide a full sos report?
>
>
>
>>
>> [root@he /]# tail -f /var/log/vdsm/vdsm.log
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
>> Trying to release resource 'Storage.HsmDomainMonitorLock'
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
>> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
>> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if 
>> anyone is
>> waiting for it.
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
>> No one is waiting for resource 'Storage.HsmDomain

Re: [ovirt-users] HA cluster

2015-12-02 Thread Ryan Barry
On Wed, Dec 2, 2015 at 5:05 AM,  wrote:

> On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju  wrote:
>
> > I have installed KVM in a nested environment on ESXi 6.x; is that
> > recommended?
> >
>
> I often use KVM over KVM in nested environments, but honestly I have never
> tried to run KVM over ESXi; I suspect that all of your issues come from
> there.
>
It should be fine in ESXi as well, as long as the VMX for that VM has
vhv.enabled=true, and the config for the VM is reloaded.
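
(For reference, on ESXi 5.1 and later this is usually the per-VM .vmx entry
below, or the "Expose hardware assisted virtualization to the guest OS"
checkbox in the vSphere client; the exact key name has varied across
releases, so treat this as a sketch:

vhv.enable = "TRUE"

The VM has to be powered off when the entry is added so that its config is
reloaded.)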

>
> > Apart from hosted-engine, is there any other way to configure an
> > engine HA cluster?
> >
>
> Nothing else from the project. You can use two external VMs in a cluster
> with Pacemaker, but that is completely up to you.
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA cluster

2015-12-02 Thread Simone Tiraboschi
On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju  wrote:

> I have installed KVM in a nested environment on ESXi 6.x; is that
> recommended?
>

I often use KVM over KVM in nested environments, but honestly I have never
tried to run KVM over ESXi; I suspect that all of your issues come from there.


> Apart from hosted-engine, is there any other way to configure an
> engine HA cluster?
>

Nothing else from the project. You can use two external VMs in a cluster
with Pacemaker, but that is completely up to you.


>
>
> -Nagaraju
>
>
> On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju 
>> wrote:
>>
>>> Please find the logs at the URL below:
>>>
>>> http://pastebin.com/ZeKyyFbN
>>>
>>
>> OK, the issue is here:
>>
>> Thread-88::ERROR::2015-12-02
>> 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm)
>> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
>> self._run()
>>   File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
>> self._connection.createXML(domxml, flags),
>>   File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line
>> 119, in wrapper
>> ret = f(*args, **kwargs)
>>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
>> createXML
>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>> conn=self)
>> libvirtError: unsupported configuration: Domain requires KVM, but it is
>> not available. Check that virtualization is enabled in the host BIOS, and
>> host configuration is setup to load the kvm modules.
>> Thread-88::DEBUG::2015-12-02
>> 15:06:27,751::vm::2813::vm.Vm::(setDownStatus)
>> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down:
>> unsupported configuration: Domain requires KVM, but it is not available.
>> Check that virtualization is enabled in the host BIOS, and host
>> configuration is setup to load the kvm modules. (code=1)
>>
>> But it's pretty strange, because hosted-engine-setup already explicitly
>> checks for virtualization support and exits with a clear error if it is
>> missing. Did you play with the kvm module while hosted-engine-setup was
>> running?
>>
>> Can you please share the hosted-engine-setup logs?
>>
>>
>>>
>>> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi 
>>> wrote:
>>>


 On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan 
 wrote:

> Maybe it even makes sense to open a Bugzilla ticket already. Better safe
> than sorry.
>

 We still need at least one log file to understand what happened.


> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
> wrote:
>
>>
>> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
>> wrote:
>>
>> I do not know what logs you are expecting. The logs which I got are
>> pasted in the mail; if you require them on pastebin, let me know and I
>> will upload them there.
>>>
>>
>>
>> Please run the sosreport utility and share the resulting archive
>> wherever you prefer.
>> You can follow this guide:
>> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
>>
>>>
>>>
>>> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola <
>>> sbona...@redhat.com> wrote:
>>>


 On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
 wrote:

> I got only 10 lines in the vdsm logs; they are below.
>
>
 Can you please provide a full sos report?



>
> [root@he /]# tail -f /var/log/vdsm/vdsm.log
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
> Trying to release resource 'Storage.HsmDomainMonitorLock'
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if 
> anyone is
> waiting for it.
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
> No one is waiting for resource 'Storage.HsmDomainMonitorLock', 
> Clearing
> records.
> Thread-100::INFO::2015-11-27
> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
> stopMonitoringDomain, Return response: None
> Thread-100::DEBUG::2015-11-27
> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
> Thread-100::DEBUG::2015-11-27
> 12:58:57,361::t

Re: [ovirt-users] HA cluster

2015-12-02 Thread Budur Nagaraju
I have installed KVM in a nested environment on ESXi 6.x; is that
recommended? Apart from hosted-engine, is there any other way to
configure an engine HA cluster?

-Nagaraju


On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi 
wrote:

>
>
> On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju  wrote:
>
>> Please find the logs at the URL below:
>>
>> http://pastebin.com/ZeKyyFbN
>>
>
> OK, the issue is here:
>
> Thread-88::ERROR::2015-12-02
> 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm)
> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
> self._run()
>   File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
> self._connection.createXML(domxml, flags),
>   File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line
> 119, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
> createXML
> if ret is None:raise libvirtError('virDomainCreateXML() failed',
> conn=self)
> libvirtError: unsupported configuration: Domain requires KVM, but it is
> not available. Check that virtualization is enabled in the host BIOS, and
> host configuration is setup to load the kvm modules.
> Thread-88::DEBUG::2015-12-02
> 15:06:27,751::vm::2813::vm.Vm::(setDownStatus)
> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down:
> unsupported configuration: Domain requires KVM, but it is not available.
> Check that virtualization is enabled in the host BIOS, and host
> configuration is setup to load the kvm modules. (code=1)
>
> But it's pretty strange, because hosted-engine-setup already explicitly
> checks for virtualization support and exits with a clear error if it is
> missing. Did you play with the kvm module while hosted-engine-setup was
> running?
>
> Can you please share the hosted-engine-setup logs?
>
>
>>
>> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan 
>>> wrote:
>>>
 Maybe it even makes sense to open a Bugzilla ticket already. Better safe
 than sorry.

>>>
>>> We still need at least one log file to understand what happened.
>>>
>>>
 On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
 wrote:

>
> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
> wrote:
>
>> I do not know what logs you are expecting. The logs which I got are
>> pasted in the mail; if you require them on pastebin, let me know and I
>> will upload them there.
>>
>
>
> Please run the sosreport utility and share the resulting archive wherever
> you prefer.
> You can follow this guide:
> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
>
>>
>>
>> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola <
>> sbona...@redhat.com> wrote:
>>
>>>
>>>
>>> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
>>> wrote:
>>>
 I got only 10 lines in the vdsm logs; they are below.


>>> Can you please provide a full sos report?
>>>
>>>
>>>

 [root@he /]# tail -f /var/log/vdsm/vdsm.log
 Thread-100::DEBUG::2015-11-27
 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
 Trying to release resource 'Storage.HsmDomainMonitorLock'
 Thread-100::DEBUG::2015-11-27
 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
 Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
 Thread-100::DEBUG::2015-11-27
 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
 Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone 
 is
 waiting for it.
 Thread-100::DEBUG::2015-11-27
 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
 No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
 records.
 Thread-100::INFO::2015-11-27
 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
 stopMonitoringDomain, Return response: None
 Thread-100::DEBUG::2015-11-27
 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
 Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
 Thread-100::DEBUG::2015-11-27
 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
 Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state 
 preparing ->
 state finished
 Thread-100::DEBUG::2015-11-27
 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
 Owner.releaseAll requests {} resources {}
 Thread-100::DEBUG::2015-11-27
 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
 Own

Re: [ovirt-users] HA cluster

2015-12-02 Thread Simone Tiraboschi
On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju  wrote:

> Please find the logs at the URL below:
>
> http://pastebin.com/ZeKyyFbN
>

OK, the issue is here:

Thread-88::ERROR::2015-12-02
15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm)
vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line
119, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed',
conn=self)
libvirtError: unsupported configuration: Domain requires KVM, but it is not
available. Check that virtualization is enabled in the host BIOS, and host
configuration is setup to load the kvm modules.
Thread-88::DEBUG::2015-12-02 15:06:27,751::vm::2813::vm.Vm::(setDownStatus)
vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down:
unsupported configuration: Domain requires KVM, but it is not available.
Check that virtualization is enabled in the host BIOS, and host
configuration is setup to load the kvm modules. (code=1)

But it's pretty strange, because hosted-engine-setup already explicitly
checks for virtualization support and exits with a clear error if it is
missing. Did you play with the kvm module while hosted-engine-setup was
running?

Can you please share the hosted-engine-setup logs?
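
(On a default install those logs normally end up under
/var/log/ovirt-hosted-engine-setup/; the exact file naming is assumed here,
e.g. ovirt-hosted-engine-setup-<timestamp>.log.)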


>
> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan  wrote:
>>
>>> Maybe it even makes sense to open a Bugzilla ticket already. Better safe
>>> than sorry.
>>>
>>
>> We still need at least one log file to understand what happened.
>>
>>
>>> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
>>> wrote:
>>>

 On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
 wrote:

> I do not know what logs you are expecting. The logs which I got are
> pasted in the mail; if you require them on pastebin, let me know and I
> will upload them there.
>


 Please run the sosreport utility and share the resulting archive wherever
 you prefer.
 You can follow this guide:
 http://www.linuxtechi.com/how-to-create-sosreport-in-linux/

>
>
> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola  > wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
>> wrote:
>>
>>> I got only 10 lines in the vdsm logs; they are below.
>>>
>>>
>> Can you please provide a full sos report?
>>
>>
>>
>>>
>>> [root@he /]# tail -f /var/log/vdsm/vdsm.log
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
>>> Trying to release resource 'Storage.HsmDomainMonitorLock'
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
>>> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
>>> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone 
>>> is
>>> waiting for it.
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
>>> No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
>>> records.
>>> Thread-100::INFO::2015-11-27
>>> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
>>> stopMonitoringDomain, Return response: None
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state 
>>> preparing ->
>>> state finished
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
>>>
>>>
>>>
>>> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <
>>> stira...@redhat.com> wrote:
>>>


 On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju >>> > wrote:


Re: [ovirt-users] HA cluster

2015-12-02 Thread Sandro Bonazzola
On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju  wrote:

> Please find the logs at the URL below:
>
> http://pastebin.com/ZeKyyFbN
>

I'm sorry but without a full sos report we can't help you.



>
>
> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan  wrote:
>>
>>> Maybe it even makes sense to open a Bugzilla ticket already. Better safe
>>> than sorry.
>>>
>>
>> We still need at least one log file to understand what happened.
>>
>>
>>> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
>>> wrote:
>>>

 On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
 wrote:

> I do not know what logs you are expecting. The logs which I got are
> pasted in the mail; if you require them on pastebin, let me know and I
> will upload them there.
>


 Please run the sosreport utility and share the resulting archive wherever
 you prefer.
 You can follow this guide:
 http://www.linuxtechi.com/how-to-create-sosreport-in-linux/

>
>
> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola  > wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
>> wrote:
>>
>>> I got only 10 lines in the vdsm logs; they are below.
>>>
>>>
>> Can you please provide a full sos report?
>>
>>
>>
>>>
>>> [root@he /]# tail -f /var/log/vdsm/vdsm.log
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
>>> Trying to release resource 'Storage.HsmDomainMonitorLock'
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
>>> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
>>> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone 
>>> is
>>> waiting for it.
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
>>> No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
>>> records.
>>> Thread-100::INFO::2015-11-27
>>> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
>>> stopMonitoringDomain, Return response: None
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state 
>>> preparing ->
>>> state finished
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
>>>
>>>
>>>
>>> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <
>>> stira...@redhat.com> wrote:
>>>


 On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju >>> > wrote:

>
>
>
> *Below are the entire logs*
>
>
 Sorry, by "the entire log" I mean: can you attach or share somewhere
 the whole /var/log/vdsm/vdsm.log, because the latest ten lines are
 not enough to point out the issue.


>
>
>
>
> *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
>
> Detector thread::DEBUG::2015-11-26
> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:50944
> Detector thread::DEBUG::2015-11-26
> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml 
> over
> http detected from ('127.0.0.1', 50944)
> Detector thread::DEBUG::2015-11-26
> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> Adding connection from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
> Connection removed from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:50945
> Detector thread::DEBUG::2

Re: [ovirt-users] HA cluster

2015-12-02 Thread Budur Nagaraju
Please find the logs at the URL below:

http://pastebin.com/ZeKyyFbN

On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi 
wrote:

>
>
> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan  wrote:
>
>> Maybe it even makes sense to open a Bugzilla ticket already. Better safe
>> than sorry.
>>
>
> We still need at least one log file to understand what happened.
>
>
>> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
>> wrote:
>>
>>>
>>> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
>>> wrote:
>>>
 I do not know what logs you are expecting. The logs which I got are
 pasted in the mail; if you require them on pastebin, let me know and I
 will upload them there.

>>>
>>>
>>> Please run the sosreport utility and share the resulting archive wherever
>>> you prefer.
>>> You can follow this guide:
>>> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
>>>


 On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola 
 wrote:

>
>
> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
> wrote:
>
>> I got only 10 lines in the vdsm logs; they are below.
>>
>>
> Can you please provide a full sos report?
>
>
>
>>
>> [root@he /]# tail -f /var/log/vdsm/vdsm.log
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
>> Trying to release resource 'Storage.HsmDomainMonitorLock'
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
>> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
>> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is
>> waiting for it.
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
>> No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
>> records.
>> Thread-100::INFO::2015-11-27
>> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
>> stopMonitoringDomain, Return response: None
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing 
>> ->
>> state finished
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>> Owner.releaseAll requests {} resources {}
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
>>
>>
>>
>> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <
>> stira...@redhat.com> wrote:
>>
>>>
>>>
>>> On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju 
>>> wrote:
>>>



 *Below are the entire logs*


>>> Sorry, by "the entire log" I mean: can you attach or share somewhere
>>> the whole /var/log/vdsm/vdsm.log, because the latest ten lines are
>>> not enough to point out the issue.
>>>
>>>




 *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *

 Detector thread::DEBUG::2015-11-26
 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
 Detected protocol xml from 127.0.0.1:50944
 Detector thread::DEBUG::2015-11-26
 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
 http detected from ('127.0.0.1', 50944)
 Detector thread::DEBUG::2015-11-26
 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
 Adding connection from 127.0.0.1:50945
 Detector thread::DEBUG::2015-11-26
 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
 Connection removed from 127.0.0.1:50945
 Detector thread::DEBUG::2015-11-26
 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
 Detected protocol xml from 127.0.0.1:50945
 Detector thread::DEBUG::2015-11-26
 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
 http detected from ('127.0.0.1', 50945)
 Detector thread::DEBUG::2015-11-26
 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>>>

Re: [ovirt-users] HA cluster

2015-11-27 Thread Simone Tiraboschi
On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan  wrote:

> Maybe it even makes sense to open a Bugzilla ticket already. Better safe
> than sorry.
>

We still need at least one log file to understand what happened.


> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi"  wrote:
>
>>
>> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
>> wrote:
>>
>>> I do not know what logs you are expecting. The logs which I got are
>>> pasted in the mail; if you require them on pastebin, let me know and I
>>> will upload them there.
>>>
>>
>>
>> Please run the sosreport utility and share the resulting archive wherever
>> you prefer.
>> You can follow this guide:
>> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
>>
>>>
>>>
>>> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola 
>>> wrote:
>>>


 On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
 wrote:

> I got only 10 lines in the vdsm logs; they are below.
>
>
 Can you please provide a full sos report?



>
> [root@he /]# tail -f /var/log/vdsm/vdsm.log
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
> Trying to release resource 'Storage.HsmDomainMonitorLock'
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is
> waiting for it.
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
> No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
> records.
> Thread-100::INFO::2015-11-27
> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
> stopMonitoringDomain, Return response: None
> Thread-100::DEBUG::2015-11-27
> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
> Thread-100::DEBUG::2015-11-27
> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing 
> ->
> state finished
> Thread-100::DEBUG::2015-11-27
> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> Thread-100::DEBUG::2015-11-27
> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-100::DEBUG::2015-11-27
> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
>
>
>
> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <
> stira...@redhat.com> wrote:
>
>>
>>
>> On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju 
>> wrote:
>>
>>>
>>>
>>>
>>> *Below are the entire logs*
>>>
>>>
>> Sorry, by "the entire log" I mean: can you attach or share somewhere
>> the whole /var/log/vdsm/vdsm.log, because the latest ten lines are
>> not enough to point out the issue.
>>
>>
>>>
>>>
>>>
>>>
>>> *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
>>>
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>>> Detected protocol xml from 127.0.0.1:50944
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>>> http detected from ('127.0.0.1', 50944)
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>>> Adding connection from 127.0.0.1:50945
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>>> Connection removed from 127.0.0.1:50945
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>>> Detected protocol xml from 127.0.0.1:50945
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>>> http detected from ('127.0.0.1', 50945)
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>>> Adding connection from 127.0.0.1:50946
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>>> Connection removed from 127.0.0.1:50946
>>> Detector thread::DEBUG::2015-11-26
>>> 1

Re: [ovirt-users] HA cluster

2015-11-27 Thread Maxim Kovgan
Maybe it even makes sense to open a Bugzilla ticket already. Better safe
than sorry.
On Nov 27, 2015 11:35 AM, "Simone Tiraboschi"  wrote:

>
> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
> wrote:
>
>> I do not know what logs you are expecting. The logs which I got are
>> pasted in the mail; if you require them on pastebin, let me know and I
>> will upload them there.
>>
>
>
> Please run the sosreport utility and share the resulting archive wherever
> you prefer.
> You can follow this guide:
> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
>
>>
>>
>> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
>>> wrote:
>>>
 I got only 10 lines in the vdsm logs; they are below.


>>> Can you please provide a full sos report?
>>>
>>>
>>>

 [root@he /]# tail -f /var/log/vdsm/vdsm.log
 Thread-100::DEBUG::2015-11-27
 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
 Trying to release resource 'Storage.HsmDomainMonitorLock'
 Thread-100::DEBUG::2015-11-27
 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
 Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
 Thread-100::DEBUG::2015-11-27
 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
 Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is
 waiting for it.
 Thread-100::DEBUG::2015-11-27
 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
 No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
 records.
 Thread-100::INFO::2015-11-27
 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
 stopMonitoringDomain, Return response: None
 Thread-100::DEBUG::2015-11-27
 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
 Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
 Thread-100::DEBUG::2015-11-27
 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
 Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing ->
 state finished
 Thread-100::DEBUG::2015-11-27
 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
 Owner.releaseAll requests {} resources {}
 Thread-100::DEBUG::2015-11-27
 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
 Owner.cancelAll requests {}
 Thread-100::DEBUG::2015-11-27
 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
 Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False



 On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi >>> > wrote:

>
>
> On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju 
> wrote:
>
>>
>>
>>
>> *Below are the entire logs*
>>
>>
> Sorry, by "the entire log" I mean: can you attach or share somewhere
> the whole /var/log/vdsm/vdsm.log, because the latest ten lines are not
> enough to point out the issue.
>
>
>>
>>
>>
>>
>> *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
>>
>> Detector thread::DEBUG::2015-11-26
>> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:50944
>> Detector thread::DEBUG::2015-11-26
>> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 50944)
>> Detector thread::DEBUG::2015-11-26
>> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>> Adding connection from 127.0.0.1:50945
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>> Connection removed from 127.0.0.1:50945
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:50945
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 50945)
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>> Adding connection from 127.0.0.1:50946
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>> Connection removed from 127.0.0.1:50946
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:50946
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected fro

Re: [ovirt-users] HA cluster

2015-11-27 Thread Simone Tiraboschi
On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju  wrote:

> I do not know what logs you are expecting. The logs which I got are pasted
> in the mail; if you require them on pastebin, let me know and I will upload
> them there.
>


Please run the sosreport utility and share the resulting archive wherever
you prefer.
You can follow this guide:
http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
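
In short (a sketch; package and option names can vary between releases):

[root@he ~]# yum install -y sos
[root@he ~]# sosreport --batch    # writes a tarball under /tmp or /var/tmp that you can then share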

>
>
> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
>> wrote:
>>
>>> I got only 10 lines in the vdsm logs; they are below.
>>>
>>>
>> Can you please provide a full sos report?
>>
>>
>>
>>>
>>> [root@he /]# tail -f /var/log/vdsm/vdsm.log
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
>>> Trying to release resource 'Storage.HsmDomainMonitorLock'
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
>>> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
>>> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is
>>> waiting for it.
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
>>> No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
>>> records.
>>> Thread-100::INFO::2015-11-27
>>> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
>>> stopMonitoringDomain, Return response: None
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing ->
>>> state finished
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
>>>
>>>
>>>
>>> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi 
>>> wrote:
>>>


 On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju 
 wrote:

>
>
>
> *Below are the entire logs*
>
>
 Sorry, by "the entire log" I mean: can you attach or share somewhere
 the whole /var/log/vdsm/vdsm.log, because the latest ten lines are not
 enough to point out the issue.


>
>
>
>
> *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
>
> Detector thread::DEBUG::2015-11-26
> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:50944
> Detector thread::DEBUG::2015-11-26
> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 50944)
> Detector thread::DEBUG::2015-11-26
> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> Adding connection from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
> Connection removed from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 50945)
> Detector thread::DEBUG::2015-11-26
> 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> Adding connection from 127.0.0.1:50946
> Detector thread::DEBUG::2015-11-26
> 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
> Connection removed from 127.0.0.1:50946
> Detector thread::DEBUG::2015-11-26
> 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:50946
> Detector thread::DEBUG::2015-11-26
> 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 50946)
>
>
>
>
> *[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *
>
> MainProcess::DEBUG::2015-11-26
> 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
> call readMultipathConf with () {}
> MainPr

Re: [ovirt-users] HA cluster

2015-11-27 Thread Budur Nagaraju
I do not know what logs you are expecting. The logs which I got are pasted
in the mail; if you require them on pastebin, let me know and I will upload
them there.

On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola 
wrote:

>
>
> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju  wrote:
>
>> I got only 10 lines in the vdsm logs; they are below.
>>
>>
> Can you please provide a full sos report?
>
>
>
>>
>> [root@he /]# tail -f /var/log/vdsm/vdsm.log
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
>> Trying to release resource 'Storage.HsmDomainMonitorLock'
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
>> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
>> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is
>> waiting for it.
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
>> No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
>> records.
>> Thread-100::INFO::2015-11-27
>> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
>> stopMonitoringDomain, Return response: None
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing ->
>> state finished
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>> Owner.releaseAll requests {} resources {}
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
>>
>>
>>
>> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju 
>>> wrote:
>>>



 *Below are the entire logs*


>>> Sorry, by the entire log I meant attaching or sharing somewhere the whole
>>> /var/log/vdsm/vdsm.log, because the latest ten lines are not enough to
>>> point out the issue.
>>>
>>>




 *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *

 Detector thread::DEBUG::2015-11-26
 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
 Detected protocol xml from 127.0.0.1:50944
 Detector thread::DEBUG::2015-11-26
 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
 http detected from ('127.0.0.1', 50944)
 Detector thread::DEBUG::2015-11-26
 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
 Adding connection from 127.0.0.1:50945
 Detector thread::DEBUG::2015-11-26
 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
 Connection removed from 127.0.0.1:50945
 Detector thread::DEBUG::2015-11-26
 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
 Detected protocol xml from 127.0.0.1:50945
 Detector thread::DEBUG::2015-11-26
 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
 http detected from ('127.0.0.1', 50945)
 Detector thread::DEBUG::2015-11-26
 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
 Adding connection from 127.0.0.1:50946
 Detector thread::DEBUG::2015-11-26
 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
 Connection removed from 127.0.0.1:50946
 Detector thread::DEBUG::2015-11-26
 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
 Detected protocol xml from 127.0.0.1:50946
 Detector thread::DEBUG::2015-11-26
 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
 http detected from ('127.0.0.1', 50946)




 *[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *

 MainProcess::DEBUG::2015-11-26
 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
 call readMultipathConf with () {}
 MainProcess::DEBUG::2015-11-26
 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
 return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {',
 'polling_interval5', 'getuid_callout
 "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
 'no_path_retry   fail', '

Re: [ovirt-users] HA cluster

2015-11-27 Thread Sandro Bonazzola
On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju  wrote:

> I got only ten lines in the vdsm logs; they are below:
>
>
Can you please provide a full sos report?
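
A minimal sketch of generating one on the host, assuming the sos package is
installed (the -o plugin list is optional, and plugin names vary by sos
version):

  # run as root; the path of the resulting tarball is printed at the end
  sosreport
  # or, to keep it smaller, limit collection to the relevant plugins:
  sosreport -o vdsm,libvirt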



>
> [root@he /]# tail -f /var/log/vdsm/vdsm.log
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
> Trying to release resource 'Storage.HsmDomainMonitorLock'
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is
> waiting for it.
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
> No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
> records.
> Thread-100::INFO::2015-11-27
> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
> stopMonitoringDomain, Return response: None
> Thread-100::DEBUG::2015-11-27
> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
> Thread-100::DEBUG::2015-11-27
> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing ->
> state finished
> Thread-100::DEBUG::2015-11-27
> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> Thread-100::DEBUG::2015-11-27
> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-100::DEBUG::2015-11-27
> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
>
>
>
> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju 
>> wrote:
>>
>>>
>>>
>>>
>>> *Below are the entire logs*
>>>
>>>
>> Sorry, by the entire log I meant attaching or sharing somewhere the whole
>> /var/log/vdsm/vdsm.log, because the latest ten lines are not enough to
>> point out the issue.
>>
>>
>>>
>>>
>>>
>>>
>>> *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
>>>
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>>> Detected protocol xml from 127.0.0.1:50944
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>>> http detected from ('127.0.0.1', 50944)
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>>> Adding connection from 127.0.0.1:50945
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>>> Connection removed from 127.0.0.1:50945
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>>> Detected protocol xml from 127.0.0.1:50945
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>>> http detected from ('127.0.0.1', 50945)
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>>> Adding connection from 127.0.0.1:50946
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>>> Connection removed from 127.0.0.1:50946
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>>> Detected protocol xml from 127.0.0.1:50946
>>> Detector thread::DEBUG::2015-11-26
>>> 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>>> http detected from ('127.0.0.1', 50946)
>>>
>>>
>>>
>>>
>>> *[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *
>>>
>>> MainProcess::DEBUG::2015-11-26
>>> 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
>>> call readMultipathConf with () {}
>>> MainProcess::DEBUG::2015-11-26
>>> 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
>>> return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {',
>>> 'polling_interval5', 'getuid_callout
>>> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
>>> 'no_path_retry   fail', 'user_friendly_names no', '
>>> flush_on_last_del   yes', 'fast_io_fail_tmo5', '
>>> dev_loss_tmo30', 'max_fds 4096', '}', '',
>>> 'devices {', 'device {', 'vendor  "HITACHI"', '
>>> product "DF.*"', 'getuid_callout
>>> "/lib/udev/scsi_

Re: [ovirt-users] HA cluster

2015-11-26 Thread Budur Nagaraju
I got only ten lines in the vdsm logs; they are below:


[root@he /]# tail -f /var/log/vdsm/vdsm.log
Thread-100::DEBUG::2015-11-27
12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource 'Storage.HsmDomainMonitorLock'
Thread-100::DEBUG::2015-11-27
12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
Thread-100::DEBUG::2015-11-27
12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is
waiting for it.
Thread-100::DEBUG::2015-11-27
12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
records.
Thread-100::INFO::2015-11-27
12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
stopMonitoringDomain, Return response: None
Thread-100::DEBUG::2015-11-27
12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
Thread-100::DEBUG::2015-11-27
12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing ->
state finished
Thread-100::DEBUG::2015-11-27
12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-100::DEBUG::2015-11-27
12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-100::DEBUG::2015-11-27
12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False



On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi 
wrote:

>
>
> On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju 
> wrote:
>
>>
>>
>>
>> *Below are the entire logs*
>>
>>
> Sorry, by the entire log I meant attaching or sharing somewhere the whole
> /var/log/vdsm/vdsm.log, because the latest ten lines are not enough to
> point out the issue.
>
>
>>
>>
>>
>>
>> *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
>>
>> Detector thread::DEBUG::2015-11-26
>> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:50944
>> Detector thread::DEBUG::2015-11-26
>> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 50944)
>> Detector thread::DEBUG::2015-11-26
>> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>> Adding connection from 127.0.0.1:50945
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>> Connection removed from 127.0.0.1:50945
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:50945
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 50945)
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>> Adding connection from 127.0.0.1:50946
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>> Connection removed from 127.0.0.1:50946
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:50946
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 50946)
>>
>>
>>
>>
>> *[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *
>>
>> MainProcess::DEBUG::2015-11-26
>> 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
>> call readMultipathConf with () {}
>> MainProcess::DEBUG::2015-11-26
>> 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
>> return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {',
>> 'polling_interval5', 'getuid_callout
>> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
>> 'no_path_retry   fail', 'user_friendly_names no', '
>> flush_on_last_del   yes', 'fast_io_fail_tmo5', '
>> dev_loss_tmo30', 'max_fds 4096', '}', '',
>> 'devices {', 'device {', 'vendor  "HITACHI"', '
>> product "DF.*"', 'getuid_callout
>> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
>> '}', 'device {', 'vendor  "COMPELNT"', '
>> product "Compellent Vol"', 'no_path_retry
>> fail', '}', 'device {', '# multipath.conf.default', '
>> ven

Re: [ovirt-users] HA cluster

2015-11-26 Thread Simone Tiraboschi
On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju  wrote:

>
>
>
> *Below are the entire logs*
>
>
Sorry, by the entire log I meant attaching or sharing somewhere the whole
/var/log/vdsm/vdsm.log, because the latest ten lines are not enough to
point out the issue.
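
If attaching is awkward, one way to bundle the full logs for upload (a
sketch, using the paths quoted in this thread; the * also picks up any
rotated files):

  # pack the vdsm and supervdsm logs into a single tarball
  tar czf vdsm-logs.tar.gz /var/log/vdsm/vdsm.log* /var/log/vdsm/supervdsm.log*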

>
>
>
>
> *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
>
> Detector thread::DEBUG::2015-11-26
> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:50944
> Detector thread::DEBUG::2015-11-26
> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 50944)
> Detector thread::DEBUG::2015-11-26
> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> Adding connection from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
> Connection removed from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 50945)
> Detector thread::DEBUG::2015-11-26
> 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> Adding connection from 127.0.0.1:50946
> Detector thread::DEBUG::2015-11-26
> 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
> Connection removed from 127.0.0.1:50946
> Detector thread::DEBUG::2015-11-26
> 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:50946
> Detector thread::DEBUG::2015-11-26
> 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 50946)
>
>
>
>
> *[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *
>
> MainProcess::DEBUG::2015-11-26
> 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
> call readMultipathConf with () {}
> MainProcess::DEBUG::2015-11-26
> 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
> return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {',
> 'polling_interval5', 'getuid_callout
> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
> 'no_path_retry   fail', 'user_friendly_names no', '
> flush_on_last_del   yes', 'fast_io_fail_tmo5', '
> dev_loss_tmo30', 'max_fds 4096', '}', '',
> 'devices {', 'device {', 'vendor  "HITACHI"', '
> product "DF.*"', 'getuid_callout
> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
> '}', 'device {', 'vendor  "COMPELNT"', '
> product "Compellent Vol"', 'no_path_retry
> fail', '}', 'device {', '# multipath.conf.default', '
> vendor  "DGC"', 'product ".*"', '
> product_blacklist   "LUNZ"', 'path_grouping_policy
> "group_by_prio"', 'path_checker"emc_clariion"', '
> hardware_handler"1 emc"', 'prio"emc"', '
> failbackimmediate', 'rr_weight
> "uniform"', '# vdsm required configuration', '
> getuid_callout  "/lib/udev/scsi_id --whitelisted
> --replace-whitespace --device=/dev/%n"', 'features"0"',
> 'no_path_retry   fail', '}', '}']
> MainProcess|Thread-13::DEBUG::2015-11-26
> 15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
> call getHardwareInfo with () {}
> MainProcess|Thread-13::DEBUG::2015-11-26
> 15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
> return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID':
> 'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise
> Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)',
> 'systemManufacturer': 'Red Hat'}
> MainProcess|Thread-21::DEBUG::2015-11-26
> 15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
> call validateAccess with ('qemu', ('qemu', 'kvm'),
> '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
> MainProcess|Thread-21::DEBUG::2015-11-26
> 15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
> return validateAccess with None
> MainProcess|Thread-22::DEBUG::2015-11-26
> 15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
> call validateAccess with ('qemu', ('qemu', 'kvm'),
> '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
> MainProcess|Thread-22::DEBUG::2015-11-26
> 15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
> return validateAccess with None
> MainProcess|PolicyEngine::DEBUG::2015-11-26
> 15:13:40,619::su

Re: [ovirt-users] HA cluster

2015-11-26 Thread Budur Nagaraju
*Below are the entire logs*




*[root@he ~]# tail -f /var/log/vdsm/vdsm.log *

Detector thread::DEBUG::2015-11-26
15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:50944
Detector thread::DEBUG::2015-11-26
15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 50944)
Detector thread::DEBUG::2015-11-26
15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
Adding connection from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26
15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
Connection removed from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26
15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26
15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 50945)
Detector thread::DEBUG::2015-11-26
15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
Adding connection from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26
15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
Connection removed from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26
15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26
15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 50946)




*[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *

MainProcess::DEBUG::2015-11-26
15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call readMultipathConf with () {}
MainProcess::DEBUG::2015-11-26
15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {',
'polling_interval5', 'getuid_callout
"/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
'no_path_retry   fail', 'user_friendly_names no', '
flush_on_last_del   yes', 'fast_io_fail_tmo5', '
dev_loss_tmo30', 'max_fds 4096', '}', '',
'devices {', 'device {', 'vendor  "HITACHI"', '
product "DF.*"', 'getuid_callout
"/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
'}', 'device {', 'vendor  "COMPELNT"', '
product "Compellent Vol"', 'no_path_retry
fail', '}', 'device {', '# multipath.conf.default', '
vendor  "DGC"', 'product ".*"', '
product_blacklist   "LUNZ"', 'path_grouping_policy
"group_by_prio"', 'path_checker"emc_clariion"', '
hardware_handler"1 emc"', 'prio"emc"', '
failbackimmediate', 'rr_weight
"uniform"', '# vdsm required configuration', '
getuid_callout  "/lib/udev/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n"', 'features"0"',
'no_path_retry   fail', '}', '}']
MainProcess|Thread-13::DEBUG::2015-11-26
15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call getHardwareInfo with () {}
MainProcess|Thread-13::DEBUG::2015-11-26
15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID':
'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise
Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)',
'systemManufacturer': 'Red Hat'}
MainProcess|Thread-21::DEBUG::2015-11-26
15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call validateAccess with ('qemu', ('qemu', 'kvm'),
'/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
MainProcess|Thread-21::DEBUG::2015-11-26
15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return validateAccess with None
MainProcess|Thread-22::DEBUG::2015-11-26
15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call validateAccess with ('qemu', ('qemu', 'kvm'),
'/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
MainProcess|Thread-22::DEBUG::2015-11-26
15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return validateAccess with None
MainProcess|PolicyEngine::DEBUG::2015-11-26
15:13:40,619::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call ksmTune with ({'run': 0},) {}
MainProcess|PolicyEngine::DEBUG::2015-11-26
15:13:40,619::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return ksmTune with None



*[root@he ~]# tail -f /var/log/vdsm/connectivity.log *


2015-11-26 15:02:02,632:DEBUG:recent_client:False
2015-11-26 15:04:44,975:DEBUG:recent_client:True
2015-11-26 15:05:15,039:DEB

Re: [ovirt-users] HA cluster

2015-11-26 Thread Budur Nagaraju
Below are the logs:


[root@he ~]# tail -f /var/log/vdsm/vdsm.log
Detector thread::DEBUG::2015-11-26
15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:50944
Detector thread::DEBUG::2015-11-26
15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 50944)
Detector thread::DEBUG::2015-11-26
15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
Adding connection from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26
15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
Connection removed from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26
15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26
15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 50945)
Detector thread::DEBUG::2015-11-26
15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
Adding connection from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26
15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
Connection removed from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26
15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26
15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 50946)



On Thu, Nov 26, 2015 at 3:06 PM, Simone Tiraboschi 
wrote:

>
>
> On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju 
> wrote:
>
>> I have done a fresh installation and now I am getting the below error:
>>
>> [ INFO  ] Updating hosted-engine configuration
>> [ INFO  ] Stage: Transaction commit
>> [ INFO  ] Stage: Closing up
>>   The following network ports should be opened:
>>   tcp:5900
>>   tcp:5901
>>   udp:5900
>>   udp:5901
>>   An example of the required configuration for iptables can be
>> found at:
>>   /etc/ovirt-hosted-engine/iptables.example
>>   In order to configure firewalld, copy the files from
>>   /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services
>>   and execute the following commands:
>>   firewall-cmd -service hosted-console
>> [ INFO  ] Creating VM
>> [ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary
>> password for console connection. The VM may not have been created: please
>> check VDSM logs
>> [ INFO  ] Stage: Clean up
>> [ INFO  ] Generating answer file
>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>>
>>
>>
>> [root@he ovirt]# tail -f /var/log/vdsm/
>> backup/   connectivity.log  mom.log   supervdsm.log
>> vdsm.log
>> [root@he ovirt]# tail -f /var/log/vdsm/vdsm.log
>> Detector thread::DEBUG::2015-11-26
>> 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:42741
>> Detector thread::DEBUG::2015-11-26
>> 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 42741)
>> Detector thread::DEBUG::2015-11-26
>> 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>> Adding connection from 127.0.0.1:42742
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>> Connection removed from 127.0.0.1:42742
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:42742
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 42742)
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>> Adding connection from 127.0.0.1:42743
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>> Connection removed from 127.0.0.1:42743
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:42743
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 42743)
>>
>>
>
> It failed earlier than that; can you please attach the whole VDSM log?
>
>
>>
>> On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju 
>>> wrote:
>>>
>

Re: [ovirt-users] HA cluster

2015-11-26 Thread Simone Tiraboschi
On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju  wrote:

> I have done a fresh installation and now I am getting the below error:
>
> [ INFO  ] Updating hosted-engine configuration
> [ INFO  ] Stage: Transaction commit
> [ INFO  ] Stage: Closing up
>   The following network ports should be opened:
>   tcp:5900
>   tcp:5901
>   udp:5900
>   udp:5901
>   An example of the required configuration for iptables can be
> found at:
>   /etc/ovirt-hosted-engine/iptables.example
>   In order to configure firewalld, copy the files from
>   /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services
>   and execute the following commands:
>   firewall-cmd -service hosted-console
> [ INFO  ] Creating VM
> [ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary
> password for console connection. The VM may not have been created: please
> check VDSM logs
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
>
>
>
> [root@he ovirt]# tail -f /var/log/vdsm/
> backup/   connectivity.log  mom.log   supervdsm.log
> vdsm.log
> [root@he ovirt]# tail -f /var/log/vdsm/vdsm.log
> Detector thread::DEBUG::2015-11-26
> 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:42741
> Detector thread::DEBUG::2015-11-26
> 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 42741)
> Detector thread::DEBUG::2015-11-26
> 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> Adding connection from 127.0.0.1:42742
> Detector thread::DEBUG::2015-11-26
> 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
> Connection removed from 127.0.0.1:42742
> Detector thread::DEBUG::2015-11-26
> 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:42742
> Detector thread::DEBUG::2015-11-26
> 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 42742)
> Detector thread::DEBUG::2015-11-26
> 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> Adding connection from 127.0.0.1:42743
> Detector thread::DEBUG::2015-11-26
> 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
> Connection removed from 127.0.0.1:42743
> Detector thread::DEBUG::2015-11-26
> 14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:42743
> Detector thread::DEBUG::2015-11-26
> 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 42743)
>
>

It failed earlier than that; can you please attach the whole VDSM log?


>
> On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju 
>> wrote:
>>
>>> It's a fresh setup; I have deleted all the VMs, but I am still facing the
>>> same issues.
>>>
>>>
>> Can you please paste the output of
>>  vdsClient -s 0 list
>> ?
>> thanks
>>
>>
>>>
>>> On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali 
>>> wrote:
>>>
 Hi

 Seems like you have existing VMs running on the host (you can check
 that by looking for qemu processes on your host).
 Is that a clean deployment, or was the host used before for running VMs?
 Perhaps you already ran the hosted engine setup, and the VM was left
 there?

 CC-ing Sandro, who is more familiar with this than me.

 Thanks,
 Oved

 On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju 
 wrote:

> Hi,
>
> Getting the below error while configuring the hosted engine:
>
> root@he ~]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Continuing will configure this host for serving as
> hypervisor and create a VM where you have to install oVirt Engine
> afterwards.
>   Are you sure you want to continue? (Yes, No)[Yes]: yes
>   Configuration files: []
>   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
>   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>   It has been detected that this program is executed through
> an SSH connection without using screen.
>   Continuing with the installation may lead to broken
> installation if the network connection fails.
>   It is highly recommended to abort the installation and run
> it inside a screen session using command "screen".
>

Re: [ovirt-users] HA cluster

2015-11-26 Thread Budur Nagaraju
I have done a fresh installation and now I am getting the below error:

[ INFO  ] Updating hosted-engine configuration
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up
  The following network ports should be opened:
  tcp:5900
  tcp:5901
  udp:5900
  udp:5901
  An example of the required configuration for iptables can be
found at:
  /etc/ovirt-hosted-engine/iptables.example
  In order to configure firewalld, copy the files from
  /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services
  and execute the following commands:
  firewall-cmd -service hosted-console
[ INFO  ] Creating VM
[ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary
password for console connection. The VM may not have been created: please
check VDSM logs
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
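
The firewalld instructions in the output above amount to roughly the
following (a sketch: the lone "-service" in the output has most likely lost
a dash in the archive, the usual spelling being --add-service, and exact
flags vary by firewalld version):

  # copy the service definitions shipped by hosted-engine-setup
  cp /etc/ovirt-hosted-engine/firewalld/* /etc/firewalld/services/
  # open the hosted-console service permanently and reload the rules
  firewall-cmd --permanent --add-service=hosted-console
  firewall-cmd --reload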



[root@he ovirt]# tail -f /var/log/vdsm/
backup/   connectivity.log  mom.log   supervdsm.log
vdsm.log
[root@he ovirt]# tail -f /var/log/vdsm/vdsm.log
Detector thread::DEBUG::2015-11-26
14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:42741
Detector thread::DEBUG::2015-11-26
14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 42741)
Detector thread::DEBUG::2015-11-26
14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
Adding connection from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26
14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
Connection removed from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26
14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26
14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 42742)
Detector thread::DEBUG::2015-11-26
14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
Adding connection from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26
14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
Connection removed from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26
14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26
14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 42743)


On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi 
wrote:

>
>
> On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju  wrote:
>
>> It's a fresh setup; I have deleted all the VMs, but I am still facing the
>> same issues.
>>
>>
> Can you please paste the output of
>  vdsClient -s 0 list
> ?
> thanks
>
>
>>
>> On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali 
>> wrote:
>>
>>> Hi
>>>
>>> Seems like you have existing VMs running on the host (you can check that
>>> by looking for qemu processes on your host).
>>> Is that a clean deployment, or was the host used before for running VMs?
>>> Perhaps you already ran the hosted engine setup, and the VM was left
>>> there?
>>>
>>> CC-ing Sandro, who is more familiar with this than me.
>>>
>>> Thanks,
>>> Oved
>>>
>>> On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju 
>>> wrote:
>>>
 Hi,

 Getting the below error while configuring the hosted engine:

 root@he ~]# hosted-engine --deploy
 [ INFO  ] Stage: Initializing
 [ INFO  ] Generating a temporary VNC password.
 [ INFO  ] Stage: Environment setup
   Continuing will configure this host for serving as hypervisor
 and create a VM where you have to install oVirt Engine afterwards.
   Are you sure you want to continue? (Yes, No)[Yes]: yes
   Configuration files: []
   Log file:
 /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
   It has been detected that this program is executed through an
 SSH connection without using screen.
   Continuing with the installation may lead to broken
 installation if the network connection fails.
   It is highly recommended to abort the installation and run it
 inside a screen session using command "screen".
   Do you want to continue anyway? (Yes, No)[No]: yes
 [WARNING] Cannot detect if hardware supports virtualization
 [ INFO  ] Bridge ovirtmgmt already created
 [ INFO  ] Stage: Environment packages setup
 [ INFO  ] Stage: Programs detection
 [ INFO  ] Stage: Environment setup

 *[ ERRO

Re: [ovirt-users] HA cluster

2015-11-26 Thread Simone Tiraboschi
On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju  wrote:

> It's a fresh setup; I have deleted all the VMs, but I am still facing the
> same issues.
>
>
Can you please paste the output of
 vdsClient -s 0 list
?
thanks
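
For reference, a slightly friendlier variant of the same check (assuming the
vdsClient tool shipped with vdsm-cli):

  # one VM per line: id, status and name
  vdsClient -s 0 list table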


>
> On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali 
> wrote:
>
>> Hi
>>
>> Seems like you have existing VMs running on the host (you can check that
>> by looking for qemu processes on your host).
>> Is that a clean deployment, or was the host used before for running VMs?
>> Perhaps you already ran the hosted engine setup, and the VM was left
>> there?
>>
>> CC-ing Sandro, who is more familiar with this than me.
>>
>> Thanks,
>> Oved
>>
>> On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju 
>> wrote:
>>
>>> Hi,
>>>
>>> Getting the below error while configuring the hosted engine:
>>>
>>> root@he ~]# hosted-engine --deploy
>>> [ INFO  ] Stage: Initializing
>>> [ INFO  ] Generating a temporary VNC password.
>>> [ INFO  ] Stage: Environment setup
>>>   Continuing will configure this host for serving as hypervisor
>>> and create a VM where you have to install oVirt Engine afterwards.
>>>   Are you sure you want to continue? (Yes, No)[Yes]: yes
>>>   Configuration files: []
>>>   Log file:
>>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
>>>   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>>>   It has been detected that this program is executed through an
>>> SSH connection without using screen.
>>>   Continuing with the installation may lead to broken
>>> installation if the network connection fails.
>>>   It is highly recommended to abort the installation and run it
>>> inside a screen session using command "screen".
>>>   Do you want to continue anyway? (Yes, No)[No]: yes
>>> [WARNING] Cannot detect if hardware supports virtualization
>>> [ INFO  ] Bridge ovirtmgmt already created
>>> [ INFO  ] Stage: Environment packages setup
>>> [ INFO  ] Stage: Programs detection
>>> [ INFO  ] Stage: Environment setup
>>>
>>> *[ ERROR ] The following VMs has been found:
>>> 2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to execute stage
>>> 'Environment setup': Cannot setup Hosted Engine with other VMs running*
>>> [ INFO  ] Stage: Clean up
>>> [ INFO  ] Generating answer file
>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf'
>>> [ INFO  ] Stage: Pre-termination
>>> [ INFO  ] Stage: Termination
>>> [root@he ~]#
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA cluster

2015-11-25 Thread Budur Nagaraju
It's a fresh setup; I have deleted all the VMs, but I am still facing the
same issues.


On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali  wrote:

> Hi
>
> Seems like you have existing VMs running on the host (you can check that
> by looking for qemu processes on your host).
> Is that a clean deployment, or was the host used before for running VMs?
> Perhaps you already ran the hosted engine setup, and the VM was left there?
>
> CC-ing Sandro, who is more familiar with this than me.
>
> Thanks,
> Oved
>
> On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju  wrote:
>
>> Hi,
>>
>> Getting the below error while configuring the hosted engine:
>>
>> root@he ~]# hosted-engine --deploy
>> [ INFO  ] Stage: Initializing
>> [ INFO  ] Generating a temporary VNC password.
>> [ INFO  ] Stage: Environment setup
>>   Continuing will configure this host for serving as hypervisor
>> and create a VM where you have to install oVirt Engine afterwards.
>>   Are you sure you want to continue? (Yes, No)[Yes]: yes
>>   Configuration files: []
>>   Log file:
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
>>   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>>   It has been detected that this program is executed through an
>> SSH connection without using screen.
>>   Continuing with the installation may lead to broken
>> installation if the network connection fails.
>>   It is highly recommended to abort the installation and run it
>> inside a screen session using command "screen".
>>   Do you want to continue anyway? (Yes, No)[No]: yes
>> [WARNING] Cannot detect if hardware supports virtualization
>> [ INFO  ] Bridge ovirtmgmt already created
>> [ INFO  ] Stage: Environment packages setup
>> [ INFO  ] Stage: Programs detection
>> [ INFO  ] Stage: Environment setup
>>
>> *[ ERROR ] The following VMs has been found:
>> 2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to execute stage
>> 'Environment setup': Cannot setup Hosted Engine with other VMs running*
>> [ INFO  ] Stage: Clean up
>> [ INFO  ] Generating answer file
>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>> [root@he ~]#
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA cluster

2015-11-25 Thread Oved Ourfali
Hi

Seems like you have existing VMs running on the host (you can check that by
looking for qemu processes on your host).
Is that a clean deployment, or was the host used before for running VMs?
Perhaps you already ran the hosted engine setup, and the VM was left there?
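
A quick way to run that check (the brackets keep grep from matching its own
process):

  # any output here means a qemu VM is still running on the host
  ps -ef | grep '[q]emu'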

CC-ing Sandro, who is more familiar with this than me.

Thanks,
Oved

On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju  wrote:

> Hi,
>
> Getting the below error while configuring the hosted engine:
>
> root@he ~]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Continuing will configure this host for serving as hypervisor
> and create a VM where you have to install oVirt Engine afterwards.
>   Are you sure you want to continue? (Yes, No)[Yes]: yes
>   Configuration files: []
>   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
>   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>   It has been detected that this program is executed through an
> SSH connection without using screen.
>   Continuing with the installation may lead to broken installation
> if the network connection fails.
>   It is highly recommended to abort the installation and run it
> inside a screen session using command "screen".
>   Do you want to continue anyway? (Yes, No)[No]: yes
> [WARNING] Cannot detect if hardware supports virtualization
> [ INFO  ] Bridge ovirtmgmt already created
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
>
> *[ ERROR ] The following VMs has been found:
> 2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to execute stage
> 'Environment setup': Cannot setup Hosted Engine with other VMs running*
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [root@he ~]#
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] HA cluster

2015-11-25 Thread Budur Nagaraju
Hi,

Getting the below error while configuring the hosted engine:

root@he ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
  Continuing will configure this host for serving as hypervisor and
create a VM where you have to install oVirt Engine afterwards.
  Are you sure you want to continue? (Yes, No)[Yes]: yes
  Configuration files: []
  Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
  Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
  It has been detected that this program is executed through an SSH
connection without using screen.
  Continuing with the installation may lead to broken installation
if the network connection fails.
  It is highly recommended to abort the installation and run it
inside a screen session using command "screen".
  Do you want to continue anyway? (Yes, No)[No]: yes
[WARNING] Cannot detect if hardware supports virtualization
[ INFO  ] Bridge ovirtmgmt already created
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup

*[ ERROR ] The following VMs has been found:
2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to execute stage
'Environment setup': Cannot setup Hosted Engine with other VMs running*
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[root@he ~]#
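
If the UUID in the error above turns out to be a stale VM left over from a
previous setup attempt, one possible cleanup before re-deploying is (a
sketch; verify the id first and be sure the VM really is disposable):

  # confirm what vdsm thinks is running, then remove the leftover VM
  vdsClient -s 0 list table
  vdsClient -s 0 destroy 2b8d6d91-d838-44f6-ae3b-c92cda014280
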
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ha cluster 3 nodes but 1 slow

2015-10-18 Thread Ravishankar N



On 10/18/2015 07:27 PM, Nicolas LIENARD wrote:

Hey Nir

What about 
https://gluster.readthedocs.org/en/release-3.7.0/Features/afr-arbiter-volumes/ 
?


Regards
Nico


On 18 October 2015 at 15:12:23 GMT+02:00, Nir Soffer
wrote:


On Sat, Oct 17, 2015 at 12:45 PM, Nicolas LIENARD  
wrote:

Hi. Currently, I've 3 nodes, 2 in the same DC and a third in
another DC. They are all bridged together through a VPN. I
know a cluster needs at least 3 nodes to satisfy quorum.




Just adding a 3rd node (without actually using it for 3-way replication) 
might not help in preventing split-brains. gluster has client-quorum and 
server-quorum. Have a look at 
http://comments.gmane.org/gmane.comp.file-systems.gluster.user/22609 for 
some information.


If you are indeed using it as a replica-3, then it is better to have all 
3 nodes in the same DC. gluster clients send every write() to all 
bricks of the replica (and wait for their responses too), so if one of 
them is in another DC, it can slow writes due to network latency.


My question is whether I can have my VMs balancing on the 2
fast nodes with HA and GlusterFS replica 2.




replica 2 definitely provides HA, but you have more chances of files 
ending up in split-brain if you have frequent network disconnects, which 
is why replica 3 with client-quorum set to 'auto' is better for 
preventing split-brains.
Arbiter volumes are a kind of sweet spot between replica-2 and 
replica-3 that can prevent split-brains. The link that Nir shared 
describes them and how to create one.
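
Creating one is a single command with gluster >= 3.7 (a sketch; the volume
name, host names and brick paths are placeholders):

  # replica 3 where the third brick stores only metadata (the arbiter)
  gluster volume create enginevol replica 3 arbiter 1 \
      fast1:/bricks/enginevol fast2:/bricks/enginevol slow3:/bricks/enginevol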


Regards,
Ravi


gluster replica 2 is not supported.

Nir



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ha cluster 3 nodes but 1 slow

2015-10-18 Thread Nicolas LIENARD
Hey Nir 

What about 
https://gluster.readthedocs.org/en/release-3.7.0/Features/afr-arbiter-volumes/ ?

Regards
Nico 


On 18 October 2015 at 15:12:23 GMT+02:00, Nir Soffer wrote:
>On Sat, Oct 17, 2015 at 12:45 PM, Nicolas LIENARD 
>wrote:
>> Hi
>>
>> Currently, I've 3 nodes, 2 in the same DC and a third in another DC.
>>
>> They are all bridged together through a VPN.
>>
>> I know a cluster needs at least 3 nodes to satisfy quorum.
>>
>> My question is whether I can have my VMs balancing on the 2 fast nodes
>> with HA and GlusterFS replica 2.
>
>gluster replica 2 is not supported.
>
>Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ha cluster 3 nodes but 1 slow

2015-10-18 Thread Nir Soffer
On Sat, Oct 17, 2015 at 12:45 PM, Nicolas LIENARD  wrote:
> Hi
>
> Currently, I've 3 nodes, 2 in the same DC and a third in another DC.
>
> They are all bridged together through a VPN.
>
> I know a cluster needs at least 3 nodes to satisfy quorum.
>
> My question is whether I can have my VMs balancing on the 2 fast nodes
> with HA and GlusterFS replica 2.

gluster replica 2 is not supported.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ha cluster 3 nodes but 1 slow

2015-10-18 Thread Yaniv Kaul
Perhaps using
https://gluster.readthedocs.org/en/release-3.7.0/Features/afr-arbiter-volumes/
?
This has not been tested, AFAIK.
Y.

On Sat, Oct 17, 2015 at 12:45 PM, Nicolas LIENARD 
wrote:

> Hi
>
> Currently, I've 3 nodes, 2 in the same DC and a third in another DC.
>
> They are all bridged together through a VPN.
>
> I know a cluster needs at least 3 nodes to satisfy quorum.
>
> My question is whether I can have my VMs balancing on the 2 fast nodes
> with HA and GlusterFS replica 2.
>
> And use the slow third node to satisfy quorum and gluster
> geo-replication to act as a backup.
>
> Let me know if this is technically suitable with oVirt.
>
> Thanks a lot
> Regards
> Nico
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ha cluster 3 nodes but 1 slow

2015-10-17 Thread Nicolas LIENARD
Hi

Currently, I've 3 nodes, 2 in the same DC and a third in another DC.

They are all bridged together through a VPN.

I know a cluster needs at least 3 nodes to satisfy quorum.

My question is whether I can have my VMs balancing on the 2 fast nodes with
HA and GlusterFS replica 2.

And use the slow third node to satisfy quorum and gluster geo-replication to
act as a backup.

Let me know if this is technically suitable with oVirt.

Thanks a lot 
Regards 
Nico
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users