Re: [ovirt-users] Importing existing KVM hosts to Ovirt

2017-04-18 Thread Konstantin Raskoshnyi
I don't have that user at all.
The old hosts only have libvirtd installed; they are plain KVM hosts.

Also, those instructions are for Xen; we don't run Xen VMs.

So it
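
For what it's worth, the "vdsm" user Michal refers to exists on the oVirt hosts
themselves (the proxy used for the import, tank5 in the log below), not on the
old libvirt-only hosts. A rough check from that proxy host, assuming the default
/var/lib/vdsm home and that vdsm's public key has already been added to
test@tank4's authorized_keys, would look something like:

  # vdsm has no login shell, hence runuser
  runuser -u vdsm -- ssh test@tank4 true
  runuser -u vdsm -- virsh -c qemu+ssh://test@tank4/system list --all

The first command is also the chance to accept tank4's host key, which is what
the "Host key verification failed" error below is complaining about.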

On Tue, Apr 18, 2017 at 2:20 AM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> On 18 Apr 2017, at 09:25, Konstantin Raskoshnyi <konra...@gmail.com>
> wrote:
>
> Hi Shahar,
> Thanks for the info.
> I'm getting these errors when trying the SSH method, even though I
> can run virsh -c qemu+ssh://user@tank4/system list without any problems
>
>
> can you run that as a “vdsm” user? You need to exchange keys for “vdsm”
> (note it has disabled login, so you need to use "runuser" or something)
>
> Thanks,
> michal
>
>
> Here's the error,
>
> 2017-04-18 07:24:06,152Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.
> GetVmsNamesFromExternalProviderVDSCommand] (default task-20)
> [69b6998b-e451-4a9b-8112-d57c8b9e88ec] Command '
> GetVmsNamesFromExternalProviderVDSCommand(HostName = tank5,
> GetVmsFromExternalProviderParameters:{runAsync='true',
> hostId='aa9ef44e-04e4-4a63-b982-e0ead9a8d497', url='qemu+ssh://test@tank4/
> system', username='test', originType='KVM', namesOfVms='null'})'
> execution failed: VDSGenericException: VDSErrorException: Failed to
> GetVmsNamesFromExternalProviderVDS, error = Cannot recv data: Host key
> verification failed.: Connection reset by peer, code = 65
>
> 2017-04-18 07:18:05,381Z ERROR [org.ovirt.engine.core.bll.
> GetVmsFromExternalProviderQuery] (default task-52)
> [bb2db0e0-d0f8-4619-b6a0-8a7efdd06a6c] Exception:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to
> GetVmsNamesFromExternalProviderVDS, error = Cannot recv data: Host key
> verification failed.: Connection reset by peer, code = 65 (Failed with
> error unexpected and code 16)
> at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:118)
> [bll.jar:]
> at 
> org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
> [bll.jar:]
> at 
> org.ovirt.engine.core.bll.QueriesCommandBase.runVdsCommand(QueriesCommandBase.java:242)
> [bll.jar:]
> at org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuer
> y.getVmsFromExternalProvider(GetVmsFromExternalProviderQuery.java:46)
> [bll.jar:]
> at org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuer
> y.executeQueryCommand(GetVmsFromExternalProviderQuery.java:40) [bll.jar:]
> at 
> org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:110)
> [bll.jar:]
> at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
> [dal.jar:]
> at org.ovirt.engine.core.bll.executor.DefaultBackendQueryExecutor.execute(
> DefaultBackendQueryExecutor.java:14) [bll.jar:]
> at org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:579)
> [bll.jar:]
> at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:547) [bll.jar:]
> at sun.reflect.GeneratedMethodAccessor80.invoke(Unknown Source)
> [:1.8.0_121]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_121]
> at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_121]
>
> On Mon, Apr 17, 2017 at 11:12 PM, Shahar Havivi <shav...@redhat.com>
>  wrote:
>
>> Hi,
>> There is a wiki page for importing VMs from other hypervisors here:
>> https://www.ovirt.org/develop/release-management/features/vi
>> rt/virt-v2v-integration/
>>
>> and specific for KVM from Libvirt the wiki page is in progress but you
>> can read about it here:
>> https://github.com/oVirt/ovirt-site/pull/876/files
>>
>> On Fri, Apr 14, 2017 at 9:05 PM, Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>>
>>>
>>> > On 14 Apr 2017, at 19:48, Konstantin Raskoshnyi <konra...@gmail.com>
>>> wrote:
>>> >
>>> > Hi guys, I just installed oVirt 4.1 - works great!
>>> >
>>> > But the questing is, we have around 50 existing  kvm hosts, is it
>>> really possible during adding them to oVirt add all VMs from them to oVirt?
>>>
>>> you can try GUI Import VM via libvirt, unless you use some exotic
>>> options/storage it should work just fine
>>>
>>> >
>>> > Second options I see - import disks to oVirts and re-create machines
>>>
>>> that’s possible as well, disk by disk.
>>>
>>> Thanks,
>>> michal
>>>
>>> >
>>> > Thanks for the help.
>>> > 

Re: [ovirt-users] Importing existing KVM hosts to Ovirt

2017-04-18 Thread Konstantin Raskoshnyi
Hi Shahar,
Thanks for the info.
I'm getting these errors when trying the SSH method, even though I
can run virsh -c qemu+ssh://user@tank4/system list without any problems

Here's the error,

2017-04-18 07:24:06,152Z ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand]
(default task-20) [69b6998b-e451-4a9b-8112-d57c8b9e88ec] Command
'GetVmsNamesFromExternalProviderVDSCommand(HostName = tank5,
GetVmsFromExternalProviderParameters:{runAsync='true',
hostId='aa9ef44e-04e4-4a63-b982-e0ead9a8d497',
url='qemu+ssh://test@tank4/system',
username='test', originType='KVM', namesOfVms='null'})' execution failed:
VDSGenericException: VDSErrorException: Failed to
GetVmsNamesFromExternalProviderVDS, error = Cannot recv data: Host key
verification failed.: Connection reset by peer, code = 65

2017-04-18 07:18:05,381Z ERROR
[org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default
task-52) [bb2db0e0-d0f8-4619-b6a0-8a7efdd06a6c] Exception:
org.ovirt.engine.core.common.errors.EngineException: EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GetVmsNamesFromExternalProviderVDS, error = Cannot recv data: Host key
verification failed.: Connection reset by peer, code = 65 (Failed with
error unexpected and code 16)
at
org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:118)
[bll.jar:]
at
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
[bll.jar:]
at
org.ovirt.engine.core.bll.QueriesCommandBase.runVdsCommand(QueriesCommandBase.java:242)
[bll.jar:]
at
org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery.getVmsFromExternalProvider(GetVmsFromExternalProviderQuery.java:46)
[bll.jar:]
at
org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery.executeQueryCommand(GetVmsFromExternalProviderQuery.java:40)
[bll.jar:]
at
org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:110)
[bll.jar:]
at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at
org.ovirt.engine.core.bll.executor.DefaultBackendQueryExecutor.execute(DefaultBackendQueryExecutor.java:14)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:579)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:547) [bll.jar:]
at sun.reflect.GeneratedMethodAccessor80.invoke(Unknown Source) [:1.8.0_121]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_121]
at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_121]
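
If it really is the missing host key, one way to seed it for the vdsm user on
the proxy host (tank5) is roughly the following, assuming the default
/var/lib/vdsm home and an existing .ssh directory there:

  ssh-keyscan tank4 >> /var/lib/vdsm/.ssh/known_hosts
  chown vdsm:kvm /var/lib/vdsm/.ssh/known_hosts

after which the same virsh URL should also work when run as vdsm rather than
your own user.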

On Mon, Apr 17, 2017 at 11:12 PM, Shahar Havivi <shav...@redhat.com> wrote:

> Hi,
> There is a wiki page for importing VMs from other hypervisors here:
> https://www.ovirt.org/develop/release-management/features/
> virt/virt-v2v-integration/
>
> and specific for KVM from Libvirt the wiki page is in progress but you can
> read about it here:
> https://github.com/oVirt/ovirt-site/pull/876/files
>
> On Fri, Apr 14, 2017 at 9:05 PM, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>> > On 14 Apr 2017, at 19:48, Konstantin Raskoshnyi <konra...@gmail.com>
>> wrote:
>> >
>> > Hi guys, I just installed oVirt 4.1 - works great!
>> >
>> > But the questing is, we have around 50 existing  kvm hosts, is it
>> really possible during adding them to oVirt add all VMs from them to oVirt?
>>
>> you can try GUI Import VM via libvirt, unless you use some exotic
>> options/storage it should work just fine
>>
>> >
>> > Second options I see - import disks to oVirts and re-create machines
>>
>> that’s possible as well, disk by disk.
>>
>> Thanks,
>> michal
>>
>> >
>> > Thanks for the help.
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] JUST CANT UNDERSTAND WHY OVIRT DOESNT HAVE A WEB-GUI FOR EDITING , REMOVING UNWANTED STORAGE DOMAINS

2017-04-17 Thread Konstantin Raskoshnyi
Use fqdn instead of IP address.
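
If the domain was already created against the IP, the underlying connection can
also be edited through the REST API once the domain is in maintenance; a hedged
sketch (the names and IDs are placeholders and the exact payload may differ
between versions):

  curl -k -u admin@internal:PASSWORD -X PUT \
       -H 'Content-Type: application/xml' \
       -d '<storage_connection><address>storage.example.com</address></storage_connection>' \
       https://ENGINE_FQDN/ovirt-engine/api/storageconnections/CONNECTION_ID

That avoids deleting the whole data center just to repoint the storage.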

On Mon, Apr 17, 2017 at 6:38 AM martin chamambo  wrote:

> JUST CANT UNDERSTAND WHY OVIRT DOESNT HAVE A WEB-GUI FOR EDITING ,REMOVING
> UNWANTED STORAGE DOMAINS
>
> I configured OpenFiler as my iSCSI storage, and while I am able to connect
> to it, I wanted to change the connection IP address to one of
> the logical networks which I designated with the STORAGE role.
>
> Changing this is a nightmare: I have to set the hosts to maintenance
> mode and delete the datacenter.
>
> is there any other way ?
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage redundancy in Ovirt

2017-04-17 Thread Konstantin Raskoshnyi
Hi Nir,
BMC = baseboard management controller; in my case it's iLO.
Yes, I set up power management for all hosts, and oVirt sees the iLO status as OK.
I use a remote PDU to shut down the power port; after that happens I get the
picture I attached.
After I switch the power port back on, oVirt is able to read the iLO status
again, sees that Linux is down, and immediately switches the SPM to another host.
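
As a sanity check outside of oVirt, the same fence agents the engine uses can be
run by hand from any host with the fence-agents packages installed; a rough
example for iLO 4 (address and credentials are placeholders):

  fence_ilo4 -a ILO_ADDRESS -l ILO_USER -p ILO_PASSWORD -o status

If that stops answering once the PDU port is cut, the engine has no way to
confirm the host is really down, so the SPM won't move until the BMC is
reachable again or someone manually confirms the host was rebooted.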

On Mon, Apr 17, 2017 at 6:07 AM Nir Soffer <nsof...@redhat.com> wrote:

> On Mon, Apr 17, 2017 at 8:24 AM Konstantin Raskoshnyi <konra...@gmail.com>
> wrote:
>
>> But actually, it didn't work well. After main SPM host went down I see
>> this
>>
> [image: Screen Shot 2017-04-16 at 10.22.00 PM.png]
>>
>
>> 2017-04-17 05:23:15,554Z ERROR
>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
>> (DefaultQuartzScheduler5) [4dcc033d-26bf-49bb-bfaa-03a970dbbec1] SPM Init:
>> could not find reported vds or not up - pool: 'STG' vds_spm_id: '1'
>> 2017-04-17 05:23:15,567Z INFO
>>  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
>> (DefaultQuartzScheduler5) [4dcc033d-26bf-49bb-bfaa-03a970dbbec1] SPM
>> selection - vds seems as spm 'tank5'
>> 2017-04-17 05:23:15,567Z WARN
>>  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
>> (DefaultQuartzScheduler5) [4dcc033d-26bf-49bb-bfaa-03a970dbbec1] spm vds is
>> non responsive, stopping spm selection.
>>
>> So that means only if BMC is up it's possible to automatically switch
>>  SPM host?
>>
>
> BMC?
>
> If your SPM is no responsive, the system will try to fence it. Did you
> configure power management for all hosts? did you check that it
> work? How did you simulate non-responsive host?
>
> If power management is not configured or fail, the system cannot
> move the spm to another host, unless you manually confirm that the
> SPM host was rebooted.
>
> Nir
>
>
>>
>> Thanks
>>
>> On Sun, Apr 16, 2017 at 8:29 PM, Konstantin Raskoshnyi <
>> konra...@gmail.com> wrote:
>>
>>> Oh, fence agent works fine if I select ilo4,
>>> Thank you for your help!
>>>
>>> On Sun, Apr 16, 2017 at 8:22 PM Dan Yasny <dya...@gmail.com> wrote:
>>>
>>>> On Sun, Apr 16, 2017 at 11:19 PM, Konstantin Raskoshnyi <
>>>> konra...@gmail.com> wrote:
>>>>
>>>>> Makes sense.
>>>>> I was trying to set it up, but doesn't work with our staging hardware.
>>>>> We have old ilo100, I'll try again.
>>>>> Thanks!
>>>>>
>>>>>
>>>> It is absolutely necessary for any HA to work properly. There's of
>>>> course the "confirm host has been shutdown" option, which serves as an
>>>> override for the fence command, but it's manual
>>>>
>>>>
>>>>> On Sun, Apr 16, 2017 at 8:18 PM Dan Yasny <dya...@gmail.com> wrote:
>>>>>
>>>>>> On Sun, Apr 16, 2017 at 11:15 PM, Konstantin Raskoshnyi <
>>>>>> konra...@gmail.com> wrote:
>>>>>>
>>>>>>> Fence agent under each node?
>>>>>>>
>>>>>>
>>>>>> When you configure a host, there's the power management tab, where
>>>>>> you need to enter the bmc details for the host. If you don't have fencing
>>>>>> enabled, how do you expect the system to make sure a host running a 
>>>>>> service
>>>>>> is actually down (and it is safe to start HA services elsewhere), and 
>>>>>> not,
>>>>>> for example, just unreachable by the engine? How do you avoid a 
>>>>>> splitbraid
>>>>>> -> SBA ?
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> On Sun, Apr 16, 2017 at 8:14 PM Dan Yasny <dya...@gmail.com> wrote:
>>>>>>>
>>>>>>>> On Sun, Apr 16, 2017 at 11:13 PM, Konstantin Raskoshnyi <
>>>>>>>> konra...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> "Corner cases"?
>>>>>>>>> I tried to simulate crash of SPM server and ovirt kept trying to
>>>>>>>>> reistablished connection to the failed node.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Did you configure fencing?
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Apr 16, 2017 at 8:10 PM Dan

Re: [ovirt-users] storage redundancy in Ovirt

2017-04-16 Thread Konstantin Raskoshnyi
But actually, it didn't work well. After the main SPM host went down I see this:

2017-04-17 05:23:15,554Z ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
(DefaultQuartzScheduler5) [4dcc033d-26bf-49bb-bfaa-03a970dbbec1] SPM Init:
could not find reported vds or not up - pool: 'STG' vds_spm_id: '1'
2017-04-17 05:23:15,567Z INFO
 [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
(DefaultQuartzScheduler5) [4dcc033d-26bf-49bb-bfaa-03a970dbbec1] SPM
selection - vds seems as spm 'tank5'
2017-04-17 05:23:15,567Z WARN
 [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
(DefaultQuartzScheduler5) [4dcc033d-26bf-49bb-bfaa-03a970dbbec1] spm vds is
non responsive, stopping spm selection.

So does that mean the SPM can only be switched automatically if the BMC is
up?

Thanks

On Sun, Apr 16, 2017 at 8:29 PM, Konstantin Raskoshnyi <konra...@gmail.com>
wrote:

> Oh, fence agent works fine if I select ilo4,
> Thank you for your help!
>
> On Sun, Apr 16, 2017 at 8:22 PM Dan Yasny <dya...@gmail.com> wrote:
>
>> On Sun, Apr 16, 2017 at 11:19 PM, Konstantin Raskoshnyi <
>> konra...@gmail.com> wrote:
>>
>>> Makes sense.
>>> I was trying to set it up, but doesn't work with our staging hardware.
>>> We have old ilo100, I'll try again.
>>> Thanks!
>>>
>>>
>> It is absolutely necessary for any HA to work properly. There's of course
>> the "confirm host has been shutdown" option, which serves as an override
>> for the fence command, but it's manual
>>
>>
>>> On Sun, Apr 16, 2017 at 8:18 PM Dan Yasny <dya...@gmail.com> wrote:
>>>
>>>> On Sun, Apr 16, 2017 at 11:15 PM, Konstantin Raskoshnyi <
>>>> konra...@gmail.com> wrote:
>>>>
>>>>> Fence agent under each node?
>>>>>
>>>>
>>>> When you configure a host, there's the power management tab, where you
>>>> need to enter the bmc details for the host. If you don't have fencing
>>>> enabled, how do you expect the system to make sure a host running a service
>>>> is actually down (and it is safe to start HA services elsewhere), and not,
>>>> for example, just unreachable by the engine? How do you avoid a splitbraid
>>>> -> SBA ?
>>>>
>>>>
>>>>>
>>>>> On Sun, Apr 16, 2017 at 8:14 PM Dan Yasny <dya...@gmail.com> wrote:
>>>>>
>>>>>> On Sun, Apr 16, 2017 at 11:13 PM, Konstantin Raskoshnyi <
>>>>>> konra...@gmail.com> wrote:
>>>>>>
>>>>>>> "Corner cases"?
>>>>>>> I tried to simulate crash of SPM server and ovirt kept trying to
>>>>>>> reistablished connection to the failed node.
>>>>>>>
>>>>>>
>>>>>> Did you configure fencing?
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Apr 16, 2017 at 8:10 PM Dan Yasny <dya...@gmail.com> wrote:
>>>>>>>
>>>>>>>> On Sun, Apr 16, 2017 at 7:29 AM, Nir Soffer <nsof...@redhat.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> On Sun, Apr 16, 2017 at 2:05 PM Dan Yasny <dya...@redhat.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Apr 16, 2017 7:01 AM, "Nir Soffer" <nsof...@redhat.com> wrote:
>>>>>>>>>>
>>>>>>>>>> On Sun, Apr 16, 2017 at 4:17 AM Dan Yasny <dya...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> When you set up a storage domain, you need to specify a host to
>>>>>>>>>>> perform the initial storage operations, but once the SD is defined, 
>>>>>>>>>>> it's
>>>>>>>>>>> details are in the engine database, and all the hosts get connected 
>>>>>>>>>>> to it
>>>>>>>>>>> directly. If the first host you used to define the SD goes down, 
>>>>>>>>>>> all other
>>>>>>>>>>> hosts will still remain connected and work. SPM is an HA service, 
>>>>>>>>>>> and if
>>>>>>>>>>> the current SPM host goes down, SPM gets star

Re: [ovirt-users] storage redundancy in Ovirt

2017-04-16 Thread Konstantin Raskoshnyi
Oh, the fence agent works fine if I select ilo4.
Thank you for your help!

On Sun, Apr 16, 2017 at 8:22 PM Dan Yasny <dya...@gmail.com> wrote:

> On Sun, Apr 16, 2017 at 11:19 PM, Konstantin Raskoshnyi <
> konra...@gmail.com> wrote:
>
>> Makes sense.
>> I was trying to set it up, but doesn't work with our staging hardware.
>> We have old ilo100, I'll try again.
>> Thanks!
>>
>>
> It is absolutely necessary for any HA to work properly. There's of course
> the "confirm host has been shutdown" option, which serves as an override
> for the fence command, but it's manual
>
>
>> On Sun, Apr 16, 2017 at 8:18 PM Dan Yasny <dya...@gmail.com> wrote:
>>
>>> On Sun, Apr 16, 2017 at 11:15 PM, Konstantin Raskoshnyi <
>>> konra...@gmail.com> wrote:
>>>
>>>> Fence agent under each node?
>>>>
>>>
>>> When you configure a host, there's the power management tab, where you
>>> need to enter the bmc details for the host. If you don't have fencing
>>> enabled, how do you expect the system to make sure a host running a service
>>> is actually down (and it is safe to start HA services elsewhere), and not,
>>> for example, just unreachable by the engine? How do you avoid a splitbraid
>>> -> SBA ?
>>>
>>>
>>>>
>>>> On Sun, Apr 16, 2017 at 8:14 PM Dan Yasny <dya...@gmail.com> wrote:
>>>>
>>>>> On Sun, Apr 16, 2017 at 11:13 PM, Konstantin Raskoshnyi <
>>>>> konra...@gmail.com> wrote:
>>>>>
>>>>>> "Corner cases"?
>>>>>> I tried to simulate crash of SPM server and ovirt kept trying to
>>>>>> reistablished connection to the failed node.
>>>>>>
>>>>>
>>>>> Did you configure fencing?
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Apr 16, 2017 at 8:10 PM Dan Yasny <dya...@gmail.com> wrote:
>>>>>>
>>>>>>> On Sun, Apr 16, 2017 at 7:29 AM, Nir Soffer <nsof...@redhat.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> On Sun, Apr 16, 2017 at 2:05 PM Dan Yasny <dya...@redhat.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Apr 16, 2017 7:01 AM, "Nir Soffer" <nsof...@redhat.com> wrote:
>>>>>>>>>
>>>>>>>>> On Sun, Apr 16, 2017 at 4:17 AM Dan Yasny <dya...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> When you set up a storage domain, you need to specify a host to
>>>>>>>>>> perform the initial storage operations, but once the SD is defined, 
>>>>>>>>>> it's
>>>>>>>>>> details are in the engine database, and all the hosts get connected 
>>>>>>>>>> to it
>>>>>>>>>> directly. If the first host you used to define the SD goes down, all 
>>>>>>>>>> other
>>>>>>>>>> hosts will still remain connected and work. SPM is an HA service, 
>>>>>>>>>> and if
>>>>>>>>>> the current SPM host goes down, SPM gets started on another host in 
>>>>>>>>>> the DC.
>>>>>>>>>> In short, unless your actual NFS exporting host goes down, there is 
>>>>>>>>>> no
>>>>>>>>>> outage.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> There is no storage outage, but if you shutdown the spm host, the
>>>>>>>>> spm host
>>>>>>>>> will not move to a new host until the spm host is online again, or
>>>>>>>>> you confirm
>>>>>>>>> manually that the spm host was rebooted.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> In a properly configured setup the SBA should take care of that.
>>>>>>>>> That's the whole point of HA services
>>>>>>>>>
>>>>>>>>
>>>>>>>> In some cases like power loss or hardware failure, there is no way
>>>>>>>>

Re: [ovirt-users] storage redundancy in Ovirt

2017-04-16 Thread Konstantin Raskoshnyi
Makes sense.
I was trying to set it up, but it doesn't work with our staging hardware.
We have old iLO 100 boards; I'll try again.
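
For the old Lights-Out 100 boards, which are basically plain IPMI, the generic
IPMI agent is usually the one to try instead of the ilo types; a hedged example
(placeholders again, and the -P/lanplus flag may or may not be needed for that
generation):

  fence_ipmilan -a BMC_ADDRESS -l BMC_USER -p BMC_PASSWORD -P -o status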
Thanks!
On Sun, Apr 16, 2017 at 8:18 PM Dan Yasny <dya...@gmail.com> wrote:

> On Sun, Apr 16, 2017 at 11:15 PM, Konstantin Raskoshnyi <
> konra...@gmail.com> wrote:
>
>> Fence agent under each node?
>>
>
> When you configure a host, there's the power management tab, where you
> need to enter the bmc details for the host. If you don't have fencing
> enabled, how do you expect the system to make sure a host running a service
> is actually down (and it is safe to start HA services elsewhere), and not,
> for example, just unreachable by the engine? How do you avoid a splitbraid
> -> SBA ?
>
>
>>
>> On Sun, Apr 16, 2017 at 8:14 PM Dan Yasny <dya...@gmail.com> wrote:
>>
>>> On Sun, Apr 16, 2017 at 11:13 PM, Konstantin Raskoshnyi <
>>> konra...@gmail.com> wrote:
>>>
>>>> "Corner cases"?
>>>> I tried to simulate crash of SPM server and ovirt kept trying to
>>>> reistablished connection to the failed node.
>>>>
>>>
>>> Did you configure fencing?
>>>
>>>
>>>>
>>>>
>>>> On Sun, Apr 16, 2017 at 8:10 PM Dan Yasny <dya...@gmail.com> wrote:
>>>>
>>>>> On Sun, Apr 16, 2017 at 7:29 AM, Nir Soffer <nsof...@redhat.com>
>>>>> wrote:
>>>>>
>>>>>> On Sun, Apr 16, 2017 at 2:05 PM Dan Yasny <dya...@redhat.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Apr 16, 2017 7:01 AM, "Nir Soffer" <nsof...@redhat.com> wrote:
>>>>>>>
>>>>>>> On Sun, Apr 16, 2017 at 4:17 AM Dan Yasny <dya...@gmail.com> wrote:
>>>>>>>
>>>>>>>> When you set up a storage domain, you need to specify a host to
>>>>>>>> perform the initial storage operations, but once the SD is defined, 
>>>>>>>> it's
>>>>>>>> details are in the engine database, and all the hosts get connected to 
>>>>>>>> it
>>>>>>>> directly. If the first host you used to define the SD goes down, all 
>>>>>>>> other
>>>>>>>> hosts will still remain connected and work. SPM is an HA service, and 
>>>>>>>> if
>>>>>>>> the current SPM host goes down, SPM gets started on another host in 
>>>>>>>> the DC.
>>>>>>>> In short, unless your actual NFS exporting host goes down, there is no
>>>>>>>> outage.
>>>>>>>>
>>>>>>>
>>>>>>> There is no storage outage, but if you shutdown the spm host, the
>>>>>>> spm host
>>>>>>> will not move to a new host until the spm host is online again, or
>>>>>>> you confirm
>>>>>>> manually that the spm host was rebooted.
>>>>>>>
>>>>>>>
>>>>>>> In a properly configured setup the SBA should take care of that.
>>>>>>> That's the whole point of HA services
>>>>>>>
>>>>>>
>>>>>> In some cases like power loss or hardware failure, there is no way to
>>>>>> start
>>>>>> the spm host, and the system cannot recover automatically.
>>>>>>
>>>>>
>>>>> There are always corner cases, no doubt. But in a normal situation.
>>>>> where an SPM host goes down because of a hardware failure, it gets fenced,
>>>>> other hosts contend for SPM and start it. No surprises there.
>>>>>
>>>>>
>>>>>>
>>>>>> Nir
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Nir
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> On Sat, Apr 15, 2017 at 1:53 PM, Konstantin Raskoshnyi <
>>>>>>>> konra...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Fernando,
>>>>>>>>> I see each host has direct connection nfs mount, but yes, if main
>>>>>>>>> host to which I connected nfs storage going down the storage becomes
>>>>>>>>>

Re: [ovirt-users] storage redundancy in Ovirt

2017-04-16 Thread Konstantin Raskoshnyi
Fence agent under each node?

On Sun, Apr 16, 2017 at 8:14 PM Dan Yasny <dya...@gmail.com> wrote:

> On Sun, Apr 16, 2017 at 11:13 PM, Konstantin Raskoshnyi <
> konra...@gmail.com> wrote:
>
>> "Corner cases"?
>> I tried to simulate crash of SPM server and ovirt kept trying to
>> reistablished connection to the failed node.
>>
>
> Did you configure fencing?
>
>
>>
>>
>> On Sun, Apr 16, 2017 at 8:10 PM Dan Yasny <dya...@gmail.com> wrote:
>>
>>> On Sun, Apr 16, 2017 at 7:29 AM, Nir Soffer <nsof...@redhat.com> wrote:
>>>
>>>> On Sun, Apr 16, 2017 at 2:05 PM Dan Yasny <dya...@redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Apr 16, 2017 7:01 AM, "Nir Soffer" <nsof...@redhat.com> wrote:
>>>>>
>>>>> On Sun, Apr 16, 2017 at 4:17 AM Dan Yasny <dya...@gmail.com> wrote:
>>>>>
>>>>>> When you set up a storage domain, you need to specify a host to
>>>>>> perform the initial storage operations, but once the SD is defined, it's
>>>>>> details are in the engine database, and all the hosts get connected to it
>>>>>> directly. If the first host you used to define the SD goes down, all 
>>>>>> other
>>>>>> hosts will still remain connected and work. SPM is an HA service, and if
>>>>>> the current SPM host goes down, SPM gets started on another host in the 
>>>>>> DC.
>>>>>> In short, unless your actual NFS exporting host goes down, there is no
>>>>>> outage.
>>>>>>
>>>>>
>>>>> There is no storage outage, but if you shutdown the spm host, the spm
>>>>> host
>>>>> will not move to a new host until the spm host is online again, or you
>>>>> confirm
>>>>> manually that the spm host was rebooted.
>>>>>
>>>>>
>>>>> In a properly configured setup the SBA should take care of that.
>>>>> That's the whole point of HA services
>>>>>
>>>>
>>>> In some cases like power loss or hardware failure, there is no way to
>>>> start
>>>> the spm host, and the system cannot recover automatically.
>>>>
>>>
>>> There are always corner cases, no doubt. But in a normal situation.
>>> where an SPM host goes down because of a hardware failure, it gets fenced,
>>> other hosts contend for SPM and start it. No surprises there.
>>>
>>>
>>>>
>>>> Nir
>>>>
>>>>
>>>>>
>>>>>
>>>>> Nir
>>>>>
>>>>>
>>>>>>
>>>>>> On Sat, Apr 15, 2017 at 1:53 PM, Konstantin Raskoshnyi <
>>>>>> konra...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi Fernando,
>>>>>>> I see each host has direct connection nfs mount, but yes, if main
>>>>>>> host to which I connected nfs storage going down the storage becomes
>>>>>>> unavailable and all vms are down
>>>>>>>
>>>>>>>
>>>>>>> On Sat, Apr 15, 2017 at 10:37 AM FERNANDO FREDIANI <
>>>>>>> fernando.fredi...@upx.com> wrote:
>>>>>>>
>>>>>>>> Hello Konstantin.
>>>>>>>>
>>>>>>>> That doesn`t make much sense make a whole cluster depend on a
>>>>>>>> single host. From what I know any host talk directly to NFS Storage 
>>>>>>>> Array
>>>>>>>> or whatever other Shared Storage you have.
>>>>>>>> Have you tested that host going down if that affects the other with
>>>>>>>> the NFS mounted directlly in a NFS Storage array ?
>>>>>>>>
>>>>>>>> Fernando
>>>>>>>>
>>>>>>>> 2017-04-15 12:42 GMT-03:00 Konstantin Raskoshnyi <
>>>>>>>> konra...@gmail.com>:
>>>>>>>>
>>>>>>>>> In ovirt you have to attach storage through specific host.
>>>>>>>>> If host goes down storage is not available.
>>>>>>>>>
>>>>>>>>> On Sat, Apr 15, 2017 at 7:31 AM FERNANDO FREDIANI <
>>>>>>>>> fernando.fre

Re: [ovirt-users] storage redundancy in Ovirt

2017-04-16 Thread Konstantin Raskoshnyi
"Corner cases"?
I tried to simulate a crash of the SPM server, and oVirt kept trying to
re-establish the connection to the failed node.


On Sun, Apr 16, 2017 at 8:10 PM Dan Yasny <dya...@gmail.com> wrote:

> On Sun, Apr 16, 2017 at 7:29 AM, Nir Soffer <nsof...@redhat.com> wrote:
>
>> On Sun, Apr 16, 2017 at 2:05 PM Dan Yasny <dya...@redhat.com> wrote:
>>
>>>
>>>
>>> On Apr 16, 2017 7:01 AM, "Nir Soffer" <nsof...@redhat.com> wrote:
>>>
>>> On Sun, Apr 16, 2017 at 4:17 AM Dan Yasny <dya...@gmail.com> wrote:
>>>
>>>> When you set up a storage domain, you need to specify a host to perform
>>>> the initial storage operations, but once the SD is defined, it's details
>>>> are in the engine database, and all the hosts get connected to it directly.
>>>> If the first host you used to define the SD goes down, all other hosts will
>>>> still remain connected and work. SPM is an HA service, and if the current
>>>> SPM host goes down, SPM gets started on another host in the DC. In short,
>>>> unless your actual NFS exporting host goes down, there is no outage.
>>>>
>>>
>>> There is no storage outage, but if you shutdown the spm host, the spm
>>> host
>>> will not move to a new host until the spm host is online again, or you
>>> confirm
>>> manually that the spm host was rebooted.
>>>
>>>
>>> In a properly configured setup the SBA should take care of that. That's
>>> the whole point of HA services
>>>
>>
>> In some cases like power loss or hardware failure, there is no way to
>> start
>> the spm host, and the system cannot recover automatically.
>>
>
> There are always corner cases, no doubt. But in a normal situation. where
> an SPM host goes down because of a hardware failure, it gets fenced, other
> hosts contend for SPM and start it. No surprises there.
>
>
>>
>> Nir
>>
>>
>>>
>>>
>>> Nir
>>>
>>>
>>>>
>>>> On Sat, Apr 15, 2017 at 1:53 PM, Konstantin Raskoshnyi <
>>>> konra...@gmail.com> wrote:
>>>>
>>>>> Hi Fernando,
>>>>> I see each host has direct connection nfs mount, but yes, if main host
>>>>> to which I connected nfs storage going down the storage becomes 
>>>>> unavailable
>>>>> and all vms are down
>>>>>
>>>>>
>>>>> On Sat, Apr 15, 2017 at 10:37 AM FERNANDO FREDIANI <
>>>>> fernando.fredi...@upx.com> wrote:
>>>>>
>>>>>> Hello Konstantin.
>>>>>>
>>>>>> That doesn`t make much sense make a whole cluster depend on a single
>>>>>> host. From what I know any host talk directly to NFS Storage Array or
>>>>>> whatever other Shared Storage you have.
>>>>>> Have you tested that host going down if that affects the other with
>>>>>> the NFS mounted directlly in a NFS Storage array ?
>>>>>>
>>>>>> Fernando
>>>>>>
>>>>>> 2017-04-15 12:42 GMT-03:00 Konstantin Raskoshnyi <konra...@gmail.com>
>>>>>> :
>>>>>>
>>>>>>> In ovirt you have to attach storage through specific host.
>>>>>>> If host goes down storage is not available.
>>>>>>>
>>>>>>> On Sat, Apr 15, 2017 at 7:31 AM FERNANDO FREDIANI <
>>>>>>> fernando.fredi...@upx.com> wrote:
>>>>>>>
>>>>>>>> Well, make it not go through host1 and dedicate a storage server
>>>>>>>> for running NFS and make both hosts connect to it.
>>>>>>>> In my view NFS is much easier to manage than any other type of
>>>>>>>> storage, specially FC and iSCSI and performance is pretty much the 
>>>>>>>> same, so
>>>>>>>> you won`t get better results other than management going to other type.
>>>>>>>>
>>>>>>>> Fernando
>>>>>>>>
>>>>>>>> 2017-04-15 5:25 GMT-03:00 Konstantin Raskoshnyi <konra...@gmail.com
>>>>>>>> >:
>>>>>>>>
>>>>>>>>> Hi guys,
>>>>>>>>> I have one nfs storage,
>>>>>>>>> it's connected through host1.
>>>>>>>>> host2 also has access to it, I can easily migrate vms between them.
>>>>>>>>>
>>>>>>>>> The question is - if host1 is down - all infrastructure is down,
>>>>>>>>> since all traffic goes through host1,
>>>>>>>>> is there any way in oVirt to use redundant storage?
>>>>>>>>>
>>>>>>>>> Only glusterfs?
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> ___
>>>>>>>>> Users mailing list
>>>>>>>>> Users@ovirt.org
>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>>
>>>>>>>>>
>>>>>>
>>>>> ___
>>>>> Users mailing list
>>>>> Users@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage redundancy in Ovirt

2017-04-16 Thread Konstantin Raskoshnyi
So what's the whole HA point then?

On Sun, Apr 16, 2017 at 4:29 AM Nir Soffer <nsof...@redhat.com> wrote:

> On Sun, Apr 16, 2017 at 2:05 PM Dan Yasny <dya...@redhat.com> wrote:
>
>>
>>
>> On Apr 16, 2017 7:01 AM, "Nir Soffer" <nsof...@redhat.com> wrote:
>>
>> On Sun, Apr 16, 2017 at 4:17 AM Dan Yasny <dya...@gmail.com> wrote:
>>
>>> When you set up a storage domain, you need to specify a host to perform
>>> the initial storage operations, but once the SD is defined, it's details
>>> are in the engine database, and all the hosts get connected to it directly.
>>> If the first host you used to define the SD goes down, all other hosts will
>>> still remain connected and work. SPM is an HA service, and if the current
>>> SPM host goes down, SPM gets started on another host in the DC. In short,
>>> unless your actual NFS exporting host goes down, there is no outage.
>>>
>>
>> There is no storage outage, but if you shutdown the spm host, the spm host
>> will not move to a new host until the spm host is online again, or you
>> confirm
>> manually that the spm host was rebooted.
>>
>>
>> In a properly configured setup the SBA should take care of that. That's
>> the whole point of HA services
>>
>
> In some cases like power loss or hardware failure, there is no way to start
> the spm host, and the system cannot recover automatically.
>
> Nir
>
>
>>
>>
>> Nir
>>
>>
>>>
>>> On Sat, Apr 15, 2017 at 1:53 PM, Konstantin Raskoshnyi <
>>> konra...@gmail.com> wrote:
>>>
>>>> Hi Fernando,
>>>> I see each host has direct connection nfs mount, but yes, if main host
>>>> to which I connected nfs storage going down the storage becomes unavailable
>>>> and all vms are down
>>>>
>>>>
>>>> On Sat, Apr 15, 2017 at 10:37 AM FERNANDO FREDIANI <
>>>> fernando.fredi...@upx.com> wrote:
>>>>
>>>>> Hello Konstantin.
>>>>>
>>>>> That doesn`t make much sense make a whole cluster depend on a single
>>>>> host. From what I know any host talk directly to NFS Storage Array or
>>>>> whatever other Shared Storage you have.
>>>>> Have you tested that host going down if that affects the other with
>>>>> the NFS mounted directlly in a NFS Storage array ?
>>>>>
>>>>> Fernando
>>>>>
>>>>> 2017-04-15 12:42 GMT-03:00 Konstantin Raskoshnyi <konra...@gmail.com>:
>>>>>
>>>>>> In ovirt you have to attach storage through specific host.
>>>>>> If host goes down storage is not available.
>>>>>>
>>>>>> On Sat, Apr 15, 2017 at 7:31 AM FERNANDO FREDIANI <
>>>>>> fernando.fredi...@upx.com> wrote:
>>>>>>
>>>>>>> Well, make it not go through host1 and dedicate a storage server for
>>>>>>> running NFS and make both hosts connect to it.
>>>>>>> In my view NFS is much easier to manage than any other type of
>>>>>>> storage, specially FC and iSCSI and performance is pretty much the 
>>>>>>> same, so
>>>>>>> you won`t get better results other than management going to other type.
>>>>>>>
>>>>>>> Fernando
>>>>>>>
>>>>>>> 2017-04-15 5:25 GMT-03:00 Konstantin Raskoshnyi <konra...@gmail.com>
>>>>>>> :
>>>>>>>
>>>>>>>> Hi guys,
>>>>>>>> I have one nfs storage,
>>>>>>>> it's connected through host1.
>>>>>>>> host2 also has access to it, I can easily migrate vms between them.
>>>>>>>>
>>>>>>>> The question is - if host1 is down - all infrastructure is down,
>>>>>>>> since all traffic goes through host1,
>>>>>>>> is there any way in oVirt to use redundant storage?
>>>>>>>>
>>>>>>>> Only glusterfs?
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>>
>>>>>>>> ___
>>>>>>>> Users mailing list
>>>>>>>> Users@ovirt.org
>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>
>>>>>>>>
>>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage redundancy in Ovirt

2017-04-15 Thread Konstantin Raskoshnyi
Hi Fernando,
I see each host has a direct NFS mount, but yes, if the main host through
which I connected the NFS storage goes down, the storage becomes unavailable
and all VMs go down.
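
One quick way to confirm that every host really does mount the export directly
(and not via host1) is to look at the vdsm mount point on each host; a sketch,
assuming the default oVirt mount location:

  mount | grep /rhev/data-center/mnt

Each host in the cluster should show its own NFS mount of the same export there.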


On Sat, Apr 15, 2017 at 10:37 AM FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Hello Konstantin.
>
> That doesn`t make much sense make a whole cluster depend on a single host.
> From what I know any host talk directly to NFS Storage Array or whatever
> other Shared Storage you have.
> Have you tested that host going down if that affects the other with the
> NFS mounted directlly in a NFS Storage array ?
>
> Fernando
>
> 2017-04-15 12:42 GMT-03:00 Konstantin Raskoshnyi <konra...@gmail.com>:
>
>> In ovirt you have to attach storage through specific host.
>> If host goes down storage is not available.
>>
>> On Sat, Apr 15, 2017 at 7:31 AM FERNANDO FREDIANI <
>> fernando.fredi...@upx.com> wrote:
>>
>>> Well, make it not go through host1 and dedicate a storage server for
>>> running NFS and make both hosts connect to it.
>>> In my view NFS is much easier to manage than any other type of storage,
>>> specially FC and iSCSI and performance is pretty much the same, so you
>>> won`t get better results other than management going to other type.
>>>
>>> Fernando
>>>
>>> 2017-04-15 5:25 GMT-03:00 Konstantin Raskoshnyi <konra...@gmail.com>:
>>>
>>>> Hi guys,
>>>> I have one nfs storage,
>>>> it's connected through host1.
>>>> host2 also has access to it, I can easily migrate vms between them.
>>>>
>>>> The question is - if host1 is down - all infrastructure is down, since
>>>> all traffic goes through host1,
>>>> is there any way in oVirt to use redundant storage?
>>>>
>>>> Only glusterfs?
>>>>
>>>> Thanks
>>>>
>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] storage redundancy in Ovirt

2017-04-15 Thread Konstantin Raskoshnyi
In oVirt you have to attach storage through a specific host.
If that host goes down, the storage is not available.

On Sat, Apr 15, 2017 at 7:31 AM FERNANDO FREDIANI <fernando.fredi...@upx.com>
wrote:

> Well, make it not go through host1 and dedicate a storage server for
> running NFS and make both hosts connect to it.
> In my view NFS is much easier to manage than any other type of storage,
> specially FC and iSCSI and performance is pretty much the same, so you
> won`t get better results other than management going to other type.
>
> Fernando
>
> 2017-04-15 5:25 GMT-03:00 Konstantin Raskoshnyi <konra...@gmail.com>:
>
>> Hi guys,
>> I have one nfs storage,
>> it's connected through host1.
>> host2 also has access to it, I can easily migrate vms between them.
>>
>> The question is - if host1 is down - all infrastructure is down, since
>> all traffic goes through host1,
>> is there any way in oVirt to use redundant storage?
>>
>> Only glusterfs?
>>
>> Thanks
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] storage redundancy in Ovirt

2017-04-15 Thread Konstantin Raskoshnyi
Hi guys,
I have one NFS storage domain,
and it's connected through host1.
host2 also has access to it; I can easily migrate VMs between them.

The question is: if host1 is down, the whole infrastructure is down, since all
traffic goes through host1.
Is there any way in oVirt to use redundant storage?

Only glusterfs?

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Importing existing KVM hosts to Ovirt

2017-04-14 Thread Konstantin Raskoshnyi
Hi guys, I just installed oVirt 4.1 - works great!

But the question is: we have around 50 existing KVM hosts. Is it really
possible, while adding them to oVirt, to also add all their VMs to oVirt?

The second option I see is to import the disks to oVirt and re-create the machines.
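
For the disk-by-disk route it may help to check the source images first, so you
know what to expect on the oVirt side; for example (the path is only an
illustration):

  qemu-img info /var/lib/libvirt/images/SOME_VM.qcow2

which reports the format, virtual size and any backing file of the disk.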

Thanks for the help.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Adding existing kvm hosts

2017-04-12 Thread Konstantin Raskoshnyi
+Users

We're using Scientific Linux 6.7; the latest Python available in its updates is 2.6.6.

So I'm going to fix this

Thanks

On Wed, Apr 12, 2017 at 9:42 AM, Yaniv Kaul <yk...@redhat.com> wrote:

> Right. How did you end up with such an ancient version?
>
> Also, please email the users mailing list, not just me (so, for example,
> others will know what the issue is).
> Thanks,
> Y.
>
>
> On Apr 12, 2017 6:52 PM, "Konstantin Raskoshnyi" <konra...@gmail.com>
> wrote:
>
>> I just found this error on oVirt engine: Python version 2.6 is too old,
>> expecting at least 2.7.
>>
>> So going to upgrade python first
>>
>> On Wed, Apr 12, 2017 at 4:41 AM, Yaniv Kaul <yk...@redhat.com> wrote:
>>
>>> Can you share the vdsm log? The host deploy log (from the engine) ?
>>> Y.
>>>
>>>
>>> On Wed, Apr 12, 2017 at 8:13 AM, Konstantin Raskoshnyi <
>>> konra...@gmail.com> wrote:
>>>
>>>> Hi guys, We're never had mgmt for our kvm machines
>>>>
>>>> I installed oVirt 4.1 on CentOS73 and trying to add existing kvm hosts
>>>> but oVirt fails with this error
>>>>
>>>> 2017-04-12 05:08:46,430Z ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
>>>> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Exception:
>>>> java.io.IOException: Command returned failure code 1 during SSH session
>>>> 'root@tank3'
>>>>
>>>> I don't experience any problems connecting to virtank3 under root.
>>>>
>>>> 2017-04-12 05:08:46,445Z ERROR [org.ovirt.engine.core.dal.dbb
>>>> roker.auditloghandling.AuditLogDirector] 
>>>> (org.ovirt.thread.pool-7-thread-21)
>>>> [4a1d5f35] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), Correlation
>>>> ID: 4a1d5f35, Call Stack: null, Custom Event ID: -1, Message: Failed to
>>>> install Host tank3. Command returned failure code 1 during SSH session
>>>> 'root@tank3'.
>>>> 2017-04-12 05:08:46,445Z ERROR 
>>>> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
>>>> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Error during host tank3
>>>> install, prefering first exception: Unexpected connection termination
>>>> 2017-04-12 05:08:46,445Z ERROR [org.ovirt.engine.core.bll.hos
>>>> tdeploy.InstallVdsInternalCommand] (org.ovirt.thread.pool-7-thread-21)
>>>> [4a1d5f35] Host installation failed for host 
>>>> 'cec720ed-460a-48aa-a9fc-2262b6da5a83',
>>>> 'tank3': Unexpected connection termination
>>>> 2017-04-12 05:08:46,446Z INFO  
>>>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
>>>> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] START,
>>>> SetVdsStatusVDSCommand(HostName = tank3, 
>>>> SetVdsStatusVDSCommandParameters:{runAsync='true',
>>>> hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83', status='InstallFailed',
>>>> nonOperationalReason='NONE', stopSpmFailureLogged='false',
>>>> maintenanceReason='null'}), log id: 4bbc52f9
>>>> 2017-04-12 05:08:46,449Z INFO  
>>>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
>>>> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] FINISH,
>>>> SetVdsStatusVDSCommand, log id: 4bbc52f9
>>>> 2017-04-12 05:08:46,457Z ERROR [org.ovirt.engine.core.dal.dbb
>>>> roker.auditloghandling.AuditLogDirector] 
>>>> (org.ovirt.thread.pool-7-thread-21)
>>>> [4a1d5f35] EVENT_ID: VDS_INSTALL_FAILED(505), Correlation ID: 4a1d5f35, Job
>>>> ID: 8af22af5-72a5-4ec4-b216-4e26ceaa48d6, Call Stack: null, Custom
>>>> Event ID: -1, Message: Host tank3 installation failed. Unexpected
>>>> connection termination.
>>>> 2017-04-12 05:08:46,496Z INFO  [org.ovirt.engine.core.bll.ho
>>>> stdeploy.InstallVdsInternalCommand] (org.ovirt.thread.pool-7-thread-21)
>>>> [4a1d5f35] Lock freed to object 'EngineLock:{exclusiveLocks='[
>>>> cec720ed-460a-48aa-a9fc-2262b6da5a83=<VDS,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>>>> 2017-04-12 05:09:02,742Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
>>>> (default task-48) [13050988-bf00-4391-9862-a8ed8ade34dd] Lock Acquired
>>>> to object 
>>>> 'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=<VDS,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>>>> 2017-04-12 05:09:02,750Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
>>>> (org.ovirt.thread

[ovirt-users] Adding existing kvm hosts

2017-04-12 Thread Konstantin Raskoshnyi
Hi guys, we've never had management for our KVM machines.

I installed oVirt 4.1 on CentOS 7.3 and am trying to add existing KVM hosts,
but oVirt fails with this error:

2017-04-12 05:08:46,430Z ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Exception:
java.io.IOException: Command returned failure code 1 during SSH session
'root@tank3'

I don't experience any problems connecting to virtank3 as root.

2017-04-12 05:08:46,445Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] EVENT_ID:
VDS_INSTALL_IN_PROGRESS_ERROR(511), Correlation ID: 4a1d5f35, Call Stack:
null, Custom Event ID: -1, Message: Failed to install Host tank3. Command
returned failure code 1 during SSH session 'root@tank3'.
2017-04-12 05:08:46,445Z ERROR
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Error during host tank3
install, prefering first exception: Unexpected connection termination
2017-04-12 05:08:46,445Z ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Host installation failed for
host 'cec720ed-460a-48aa-a9fc-2262b6da5a83', 'tank3': Unexpected connection
termination
2017-04-12 05:08:46,446Z INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] START,
SetVdsStatusVDSCommand(HostName = tank3,
SetVdsStatusVDSCommandParameters:{runAsync='true',
hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83', status='InstallFailed',
nonOperationalReason='NONE', stopSpmFailureLogged='false',
maintenanceReason='null'}), log id: 4bbc52f9
2017-04-12 05:08:46,449Z INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] FINISH,
SetVdsStatusVDSCommand, log id: 4bbc52f9
2017-04-12 05:08:46,457Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] EVENT_ID:
VDS_INSTALL_FAILED(505), Correlation ID: 4a1d5f35, Job ID:
8af22af5-72a5-4ec4-b216-4e26ceaa48d6, Call Stack: null, Custom Event ID:
-1, Message: Host tank3 installation failed. Unexpected connection
termination.
2017-04-12 05:08:46,496Z INFO
 [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Lock freed to object
'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=<VDS, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-04-12 05:09:02,742Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
(default task-48) [13050988-bf00-4391-9862-a8ed8ade34dd] Lock Acquired to
object
'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=<VDS, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-04-12 05:09:02,750Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
Running command: RemoveVdsCommand internal: false. Entities affected :  ID:
cec720ed-460a-48aa-a9fc-2262b6da5a83 Type: VDSAction group DELETE_HOST with
role type ADMIN
2017-04-12 05:09:02,822Z INFO
 [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
START, RemoveVdsVDSCommand( RemoveVdsVDSCommandParameters:{runAsync='true',
hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83'}), log id: 26e68c12
2017-04-12 05:09:02,822Z INFO  [org.ovirt.engine.core.vdsbroker.VdsManager]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
vdsManager::disposing
2017-04-12 05:09:02,822Z INFO
 [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
FINISH, RemoveVdsVDSCommand, log id: 26e68c12
2017-04-12 05:09:02,824Z WARN
 [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker)
[] Exception thrown during message processing
2017-04-12 05:09:02,848Z INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
EVENT_ID: USER_REMOVE_VDS(44), Correlation ID:
13050988-bf00-4391-9862-a8ed8ade34dd, Call Stack: null, Custom Event ID:
-1, Message: Host tank3 was removed by admin@internal-authz.
2017-04-12 05:09:02,848Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
Lock freed to object
'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=<VDS, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-04-12 05:10:56,139Z INFO
 [org.ovirt.engine.core.bll.storage.ovfstore.OvfDataUpdater]
(DefaultQuartzScheduler8) [] Attempting to update VMs/Templates Ovf.


Package vdsm-tool installed.
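
Besides the engine.log excerpt above, the per-attempt details usually land in
the host-deploy log on the engine and in vdsm's own log on the host being added;
the default locations are roughly:

  # on the engine
  ls -t /var/log/ovirt-engine/host-deploy/ | head
  # on tank3
  tail -n 200 /var/log/vdsm/vdsm.log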

Any thoughts?

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users