Re: [ovirt-users] HA cluster

2015-11-26 Thread Simone Tiraboschi
On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju  wrote:

> Its a fresh setup ,I have deleted all the vms ,still am facing same issues
> .
>
>
Can you please paste the output of
 vdsClient -s 0 list
?
thanks


>
> On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali 
> wrote:
>
>> Hi
>>
>> Seems like you have existing VMs running on the host (you can check that
>> by looking for qemu processes on your host).
>> Is that a clean deployment, or was the host used before for running VMs?
>> Perhaps you already ran the hosted engine setup, and the VM was left
>> there?
>>
>> CC-ing Sandro who is more familiar in that than me.
>>
>> Thanks,
>> Oved
>>
>> On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju 
>> wrote:
>>
>>> HI
>>>
>>> Getting below error while configuring Hosted engine,
>>>
>>> root@he ~]# hosted-engine --deploy
>>> [ INFO  ] Stage: Initializing
>>> [ INFO  ] Generating a temporary VNC password.
>>> [ INFO  ] Stage: Environment setup
>>>   Continuing will configure this host for serving as hypervisor
>>> and create a VM where you have to install oVirt Engine afterwards.
>>>   Are you sure you want to continue? (Yes, No)[Yes]: yes
>>>   Configuration files: []
>>>   Log file:
>>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
>>>   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>>>   It has been detected that this program is executed through an
>>> SSH connection without using screen.
>>>   Continuing with the installation may lead to broken
>>> installation if the network connection fails.
>>>   It is highly recommended to abort the installation and run it
>>> inside a screen session using command "screen".
>>>   Do you want to continue anyway? (Yes, No)[No]: yes
>>> [WARNING] Cannot detect if hardware supports virtualization
>>> [ INFO  ] Bridge ovirtmgmt already created
>>> [ INFO  ] Stage: Environment packages setup
>>> [ INFO  ] Stage: Programs detection
>>> [ INFO  ] Stage: Environment setup
>>>
>>> *[ ERROR ] The following VMs has been found:
>>> 2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to execute stage
>>> 'Environment setup': Cannot setup Hosted Engine with other VMs running*
>>> [ INFO  ] Stage: Clean up
>>> [ INFO  ] Generating answer file
>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf'
>>> [ INFO  ] Stage: Pre-termination
>>> [ INFO  ] Stage: Termination
>>> [root@he ~]#
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Attach Export Domain (NFS) to multiple datacenter

2015-11-26 Thread Simone Tiraboschi
On Thu, Nov 26, 2015 at 7:06 AM, Punit Dambiwal  wrote:

> Hi Simone,
>
> Yes.. i can but i want to use the same NFS storage with OS template
> inside..to use all the local storage server to provision the guest VM's..
>
> Thanks,
> punit
>
>

Did you check the Glance integration?
http://www.ovirt.org/Features/Glance_Integration

Now on 3.6 you can also deploy and configure Glance via Docker from
engine-setup:
http://www.ovirt.org/CinderGlance_Docker_Integration
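
To script the detach/attach cycle Simone describes below in this thread, the
Python SDK can be used as well. A rough sketch, assuming the 3.x SDK brokers;
the engine URL, credentials, data center and domain names are placeholders,
and the domain may need a moment to reach maintenance before the detach:

  from ovirtsdk.api import API

  api = API(url='https://engine.example.com/ovirt-engine/api',
            username='admin@internal', password='secret',
            ca_file='/etc/pki/ovirt-engine/ca.pem')

  src_dc = api.datacenters.get(name='DC1')
  dst_dc = api.datacenters.get(name='DC2')

  # deactivate (maintenance) and detach the export domain from the source DC
  attached = src_dc.storagedomains.get(name='nfs-export')
  attached.deactivate()
  attached.delete()

  # attach the now-unattached domain to the destination DC and activate it
  dst_dc.storagedomains.add(api.storagedomains.get(name='nfs-export'))
  dst_dc.storagedomains.get(name='nfs-export').activate()

  api.disconnect()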



> On Wed, Nov 25, 2015 at 6:24 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Wed, Nov 25, 2015 at 5:50 AM, Punit Dambiwal 
>> wrote:
>>
>>> Hi,
>>>
>>> I want to attach the same nfs (export) volume to multiple datacenter in
>>> the ovirt..is it possible to do so..or any workaround for the same..
>>>
>>
>>
>> As far as I know not at the same time.
>> You have to detach and then attach do the new datacenter.
>>
>>
>>>
>>> Thanks.
>>> Punit
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA cluster

2015-11-26 Thread Budur Nagaraju
I have done a fresh installation and now I am getting the error below:

[ INFO  ] Updating hosted-engine configuration
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up
  The following network ports should be opened:
  tcp:5900
  tcp:5901
  udp:5900
  udp:5901
  An example of the required configuration for iptables can be
found at:
  /etc/ovirt-hosted-engine/iptables.example
  In order to configure firewalld, copy the files from
  /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services
  and execute the following commands:
  firewall-cmd -service hosted-console
[ INFO  ] Creating VM
[ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary
password for console connection. The VM may not have been created: please
check VDSM logs
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination



[root@he ovirt]# tail -f /var/log/vdsm/
backup/   connectivity.log  mom.log   supervdsm.log
vdsm.log
[root@he ovirt]# tail -f /var/log/vdsm/vdsm.log
Detector thread::DEBUG::2015-11-26
14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:42741
Detector thread::DEBUG::2015-11-26
14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 42741)
Detector thread::DEBUG::2015-11-26
14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
Adding connection from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26
14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
Connection removed from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26
14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:42742
Detector thread::DEBUG::2015-11-26
14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 42742)
Detector thread::DEBUG::2015-11-26
14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
Adding connection from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26
14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
Connection removed from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26
14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:42743
Detector thread::DEBUG::2015-11-26
14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 42743)


On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi 
wrote:

>
>
> On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju  wrote:
>
>> Its a fresh setup ,I have deleted all the vms ,still am facing same
>> issues .
>>
>>
> Can you please paste the output of
>  vdsClient -s 0 list
> ?
> thanks
>
>
>>
>> On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali 
>> wrote:
>>
>>> Hi
>>>
>>> Seems like you have existing VMs running on the host (you can check that
>>> by looking for qemu processes on your host).
>>> Is that a clean deployment, or was the host used before for running VMs?
>>> Perhaps you already ran the hosted engine setup, and the VM was left
>>> there?
>>>
>>> CC-ing Sandro who is more familiar in that than me.
>>>
>>> Thanks,
>>> Oved
>>>
>>> On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju 
>>> wrote:
>>>
 HI

 Getting below error while configuring Hosted engine,

 root@he ~]# hosted-engine --deploy
 [ INFO  ] Stage: Initializing
 [ INFO  ] Generating a temporary VNC password.
 [ INFO  ] Stage: Environment setup
   Continuing will configure this host for serving as hypervisor
 and create a VM where you have to install oVirt Engine afterwards.
   Are you sure you want to continue? (Yes, No)[Yes]: yes
   Configuration files: []
   Log file:
 /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
   It has been detected that this program is executed through an
 SSH connection without using screen.
   Continuing with the installation may lead to broken
 installation if the network connection fails.
   It is highly recommended to abort the installation and run it
 inside a screen session using command "screen".
   Do you want to continue anyway? (Yes, No)[No]: yes
 [WARNING] Cannot detect if hardware supports virtualization
 [ INFO  ] Bridge ovirtmgmt already created
 [ INFO  ] Stage: Environment packages setup
 [ INFO  ] Stage: Programs detection
 [ INFO  ] Stage: Environment setup

 *[ ERRO

Re: [ovirt-users] HA cluster

2015-11-26 Thread Simone Tiraboschi
On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju  wrote:

> I have done a fresh installation and now am getting the below error,
>
> [ INFO  ] Updating hosted-engine configuration
> [ INFO  ] Stage: Transaction commit
> [ INFO  ] Stage: Closing up
>   The following network ports should be opened:
>   tcp:5900
>   tcp:5901
>   udp:5900
>   udp:5901
>   An example of the required configuration for iptables can be
> found at:
>   /etc/ovirt-hosted-engine/iptables.example
>   In order to configure firewalld, copy the files from
>   /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services
>   and execute the following commands:
>   firewall-cmd -service hosted-console
> [ INFO  ] Creating VM
> [ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary
> password for console connection. The VM may not have been created: please
> check VDSM logs
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
>
>
>
> [root@he ovirt]# tail -f /var/log/vdsm/
> backup/   connectivity.log  mom.log   supervdsm.log
> vdsm.log
> [root@he ovirt]# tail -f /var/log/vdsm/vdsm.log
> Detector thread::DEBUG::2015-11-26
> 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:42741
> Detector thread::DEBUG::2015-11-26
> 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 42741)
> Detector thread::DEBUG::2015-11-26
> 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> Adding connection from 127.0.0.1:42742
> Detector thread::DEBUG::2015-11-26
> 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
> Connection removed from 127.0.0.1:42742
> Detector thread::DEBUG::2015-11-26
> 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:42742
> Detector thread::DEBUG::2015-11-26
> 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 42742)
> Detector thread::DEBUG::2015-11-26
> 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> Adding connection from 127.0.0.1:42743
> Detector thread::DEBUG::2015-11-26
> 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
> Connection removed from 127.0.0.1:42743
> Detector thread::DEBUG::2015-11-26
> 14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:42743
> Detector thread::DEBUG::2015-11-26
> 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 42743)
>
>

The failure happened earlier in the log; can you please attach the whole VDSM log?


>
> On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju 
>> wrote:
>>
>>> Its a fresh setup ,I have deleted all the vms ,still am facing same
>>> issues .
>>>
>>>
>> Can you please paste the output of
>>  vdsClient -s 0 list
>> ?
>> thanks
>>
>>
>>>
>>> On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali 
>>> wrote:
>>>
 Hi

 Seems like you have existing VMs running on the host (you can check
 that by looking for qemu processes on your host).
 Is that a clean deployment, or was the host used before for running VMs?
 Perhaps you already ran the hosted engine setup, and the VM was left
 there?

 CC-ing Sandro who is more familiar in that than me.

 Thanks,
 Oved

 On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju 
 wrote:

> HI
>
> Getting below error while configuring Hosted engine,
>
> root@he ~]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Continuing will configure this host for serving as
> hypervisor and create a VM where you have to install oVirt Engine
> afterwards.
>   Are you sure you want to continue? (Yes, No)[Yes]: yes
>   Configuration files: []
>   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
>   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>   It has been detected that this program is executed through
> an SSH connection without using screen.
>   Continuing with the installation may lead to broken
> installation if the network connection fails.
>   It is highly recommended to abort the installation and run
> it inside a screen session using command "screen".
>

Re: [ovirt-users] HA cluster

2015-11-26 Thread Budur Nagaraju
Below are the logs,


[root@he ~]# tail -f /var/log/vdsm/vdsm.log
Detector thread::DEBUG::2015-11-26
15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:50944
Detector thread::DEBUG::2015-11-26
15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 50944)
Detector thread::DEBUG::2015-11-26
15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
Adding connection from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26
15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
Connection removed from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26
15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26
15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 50945)
Detector thread::DEBUG::2015-11-26
15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
Adding connection from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26
15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
Connection removed from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26
15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26
15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 50946)



On Thu, Nov 26, 2015 at 3:06 PM, Simone Tiraboschi 
wrote:

>
>
> On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju 
> wrote:
>
>> I have done a fresh installation and now am getting the below error,
>>
>> [ INFO  ] Updating hosted-engine configuration
>> [ INFO  ] Stage: Transaction commit
>> [ INFO  ] Stage: Closing up
>>   The following network ports should be opened:
>>   tcp:5900
>>   tcp:5901
>>   udp:5900
>>   udp:5901
>>   An example of the required configuration for iptables can be
>> found at:
>>   /etc/ovirt-hosted-engine/iptables.example
>>   In order to configure firewalld, copy the files from
>>   /etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services
>>   and execute the following commands:
>>   firewall-cmd -service hosted-console
>> [ INFO  ] Creating VM
>> [ ERROR ] Failed to execute stage 'Closing up': Cannot set temporary
>> password for console connection. The VM may not have been created: please
>> check VDSM logs
>> [ INFO  ] Stage: Clean up
>> [ INFO  ] Generating answer file
>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>>
>>
>>
>> [root@he ovirt]# tail -f /var/log/vdsm/
>> backup/   connectivity.log  mom.log   supervdsm.log
>> vdsm.log
>> [root@he ovirt]# tail -f /var/log/vdsm/vdsm.log
>> Detector thread::DEBUG::2015-11-26
>> 14:57:07,564::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:42741
>> Detector thread::DEBUG::2015-11-26
>> 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 42741)
>> Detector thread::DEBUG::2015-11-26
>> 14:57:07,644::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>> Adding connection from 127.0.0.1:42742
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,088::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>> Connection removed from 127.0.0.1:42742
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,088::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:42742
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 42742)
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,171::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>> Adding connection from 127.0.0.1:42743
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,572::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>> Connection removed from 127.0.0.1:42743
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,573::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:42743
>> Detector thread::DEBUG::2015-11-26
>> 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 42743)
>>
>>
>
> It failed before, can you please attach the whole VDSM logs?
>
>
>>
>> On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju 
>>> wrote:
>>>
>

Re: [ovirt-users] HA cluster

2015-11-26 Thread Budur Nagaraju
*Below are the entire logs*




*[root@he ~]# tail -f /var/log/vdsm/vdsm.log *

Detector thread::DEBUG::2015-11-26
15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:50944
Detector thread::DEBUG::2015-11-26
15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 50944)
Detector thread::DEBUG::2015-11-26
15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
Adding connection from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26
15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
Connection removed from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26
15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:50945
Detector thread::DEBUG::2015-11-26
15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 50945)
Detector thread::DEBUG::2015-11-26
15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
Adding connection from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26
15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
Connection removed from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26
15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
Detected protocol xml from 127.0.0.1:50946
Detector thread::DEBUG::2015-11-26
15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
http detected from ('127.0.0.1', 50946)




*[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *

MainProcess::DEBUG::2015-11-26
15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call readMultipathConf with () {}
MainProcess::DEBUG::2015-11-26
15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {',
'polling_interval5', 'getuid_callout
"/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
'no_path_retry   fail', 'user_friendly_names no', '
flush_on_last_del   yes', 'fast_io_fail_tmo5', '
dev_loss_tmo30', 'max_fds 4096', '}', '',
'devices {', 'device {', 'vendor  "HITACHI"', '
product "DF.*"', 'getuid_callout
"/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
'}', 'device {', 'vendor  "COMPELNT"', '
product "Compellent Vol"', 'no_path_retry
fail', '}', 'device {', '# multipath.conf.default', '
vendor  "DGC"', 'product ".*"', '
product_blacklist   "LUNZ"', 'path_grouping_policy
"group_by_prio"', 'path_checker"emc_clariion"', '
hardware_handler"1 emc"', 'prio"emc"', '
failbackimmediate', 'rr_weight
"uniform"', '# vdsm required configuration', '
getuid_callout  "/lib/udev/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n"', 'features"0"',
'no_path_retry   fail', '}', '}']
MainProcess|Thread-13::DEBUG::2015-11-26
15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call getHardwareInfo with () {}
MainProcess|Thread-13::DEBUG::2015-11-26
15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID':
'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise
Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)',
'systemManufacturer': 'Red Hat'}
MainProcess|Thread-21::DEBUG::2015-11-26
15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call validateAccess with ('qemu', ('qemu', 'kvm'),
'/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
MainProcess|Thread-21::DEBUG::2015-11-26
15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return validateAccess with None
MainProcess|Thread-22::DEBUG::2015-11-26
15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call validateAccess with ('qemu', ('qemu', 'kvm'),
'/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
MainProcess|Thread-22::DEBUG::2015-11-26
15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return validateAccess with None
MainProcess|PolicyEngine::DEBUG::2015-11-26
15:13:40,619::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call ksmTune with ({'run': 0},) {}
MainProcess|PolicyEngine::DEBUG::2015-11-26
15:13:40,619::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return ksmTune with None



*[root@he ~]# tail -f /var/log/vdsm/connectivity.log *


2015-11-26 15:02:02,632:DEBUG:recent_client:False
2015-11-26 15:04:44,975:DEBUG:recent_client:True
2015-11-26 15:05:15,039:DEB

Re: [ovirt-users] HA cluster

2015-11-26 Thread Simone Tiraboschi
On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju  wrote:

>
>
>
> *Below are the entire logs*
>
>
Sorry, by the entire log I mean attaching or sharing somewhere the whole
/var/log/vdsm/vdsm.log, because the latest ten lines are not enough to
point out the issue.

>
>
>
>
> *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
>
> Detector thread::DEBUG::2015-11-26
> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:50944
> Detector thread::DEBUG::2015-11-26
> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 50944)
> Detector thread::DEBUG::2015-11-26
> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> Adding connection from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
> Connection removed from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 50945)
> Detector thread::DEBUG::2015-11-26
> 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> Adding connection from 127.0.0.1:50946
> Detector thread::DEBUG::2015-11-26
> 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
> Connection removed from 127.0.0.1:50946
> Detector thread::DEBUG::2015-11-26
> 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:50946
> Detector thread::DEBUG::2015-11-26
> 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
> http detected from ('127.0.0.1', 50946)
>
>
>
>
> *[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *
>
> MainProcess::DEBUG::2015-11-26
> 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
> call readMultipathConf with () {}
> MainProcess::DEBUG::2015-11-26
> 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
> return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {',
> 'polling_interval5', 'getuid_callout
> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
> 'no_path_retry   fail', 'user_friendly_names no', '
> flush_on_last_del   yes', 'fast_io_fail_tmo5', '
> dev_loss_tmo30', 'max_fds 4096', '}', '',
> 'devices {', 'device {', 'vendor  "HITACHI"', '
> product "DF.*"', 'getuid_callout
> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
> '}', 'device {', 'vendor  "COMPELNT"', '
> product "Compellent Vol"', 'no_path_retry
> fail', '}', 'device {', '# multipath.conf.default', '
> vendor  "DGC"', 'product ".*"', '
> product_blacklist   "LUNZ"', 'path_grouping_policy
> "group_by_prio"', 'path_checker"emc_clariion"', '
> hardware_handler"1 emc"', 'prio"emc"', '
> failbackimmediate', 'rr_weight
> "uniform"', '# vdsm required configuration', '
> getuid_callout  "/lib/udev/scsi_id --whitelisted
> --replace-whitespace --device=/dev/%n"', 'features"0"',
> 'no_path_retry   fail', '}', '}']
> MainProcess|Thread-13::DEBUG::2015-11-26
> 15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
> call getHardwareInfo with () {}
> MainProcess|Thread-13::DEBUG::2015-11-26
> 15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
> return getHardwareInfo with {'systemProductName': 'KVM', 'systemUUID':
> 'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red Hat Enterprise
> Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX + PIIX, 1996)',
> 'systemManufacturer': 'Red Hat'}
> MainProcess|Thread-21::DEBUG::2015-11-26
> 15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
> call validateAccess with ('qemu', ('qemu', 'kvm'),
> '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
> MainProcess|Thread-21::DEBUG::2015-11-26
> 15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
> return validateAccess with None
> MainProcess|Thread-22::DEBUG::2015-11-26
> 15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
> call validateAccess with ('qemu', ('qemu', 'kvm'),
> '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
> MainProcess|Thread-22::DEBUG::2015-11-26
> 15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
> return validateAccess with None
> MainProcess|PolicyEngine::DEBUG::2015-11-26
> 15:13:40,619::su

[ovirt-users] Node not talking NFS to Node.

2015-11-26 Thread admin
Hello 

I have been trying to resolve a problem with my second node, which does not
want to connect:

One of my two nodes is not connecting; the problem is that the node can't talk
NFS to the other node. The node used to function fine; after performing a
hardware upgrade and bringing the system back up, it came up with this issue.
We also performed a yum update, which updated around 447 packages.

Can someone please help me with this?

Thanks
Sol


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Node not talking NFS to Node.

2015-11-26 Thread Amador Pahim
Please provide us with some logs from the affected hypervisor; vdsm.log and
the result of "ip address show" should be a good starting point.


--
apahim


On 11/26/2015 08:04 AM, admin wrote:

Hello

Been trying to resolve a problem with my second node which does not 
want to connect :


one of my 2 nodes are not connecting, the problem is that the Node 
cant talk NFS to Node. the node used to function fine, after 
performing a hardware upgrade and bringing the system back up it seems 
to have come up with this issue. we also performed a yum update, 
updated around 447 packages.


Can someone please help me with this.

Thanks
Sol


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] ovirt-engine-sdk-python too slow

2015-11-26 Thread Nir Soffer
Thanks John, very interesting results.
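
The profile listings quoted below can be produced and summarized with the
standard-library cProfile and pstats modules; a minimal sketch, where the
profiled statement and output file name are only placeholders:

  import cProfile
  import pstats

  # profile just the SDK import and write the raw stats to a file
  cProfile.run("from ovirtsdk.api import API",
               "myscript_contains_only_import.prof")

  # print the 20 entries with the highest internal time, as in the listings below
  stats = pstats.Stats("myscript_contains_only_import.prof")
  stats.sort_stats("time").print_stats(20)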

On Thu, Nov 26, 2015 at 3:17 AM, John Hunter  wrote:

> Hi Juan,
>
> On Thu, Nov 26, 2015 at 2:15 AM, Juan Hernández 
> wrote:
>
>> On 11/25/2015 06:45 PM, Nir Soffer wrote:
>> > $ ./profile-stats -c myscript.prof
>> >
>> > Wed Nov 25 10:40:11 2015    myscript.prof
>> >
>> >  7892315 function calls (7891054 primitive calls) in 7.940
>> seconds
>> >
>> >Ordered by: internal time
>> >List reduced from 1518 to 20 due to restriction <20>
>> >
>> >    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
>> >      9086    2.693    0.000    6.706    0.001 inspect.py:247(getmembers)
>> >   1952494    1.394    0.000    1.880    0.000 inspect.py:59(isclass)
>> >      9092    1.030    0.000    1.030    0.000 {dir}
>> >   1952642    0.600    0.000    0.600    0.000 {getattr}
>> >   1972765    0.504    0.000    0.504    0.000 {isinstance}
>> >         3    0.334    0.111    0.334    0.111 {method 'perform' of 'pycurl.Curl' objects}
>> >   1883918    0.284    0.000    0.284    0.000 {method 'append' of 'list' objects}
>> >      9087    0.221    0.000    0.221    0.000 {method 'sort' of 'list' objects}
>> >      9051    0.172    0.000    6.911    0.001 reflectionhelper.py:51(isModuleMember)
>> >         1    0.124    0.124    0.354    0.354 errors.py:17(<module>)
>> >         1    0.088    0.088    0.230    0.230 params.py:8(<module>)
>> >      8879    0.070    0.000    6.998    0.001 params.py:367(__setattr__)
>> >         1    0.047    0.047    5.182    5.182 api.py:23(<module>)
>> >         1    0.025    0.025    4.743    4.743 brokers.py:22(<module>)
>> >         1    0.023    0.023    0.030    0.030 connectionspool.py:17(<module>)
>> >         1    0.022    0.022    0.053    0.053 lxml.etree.pyx:1(PyMODINIT_FUNC PyInit_etree(void))
>> >       118    0.019    0.000    4.684    0.040 params.py:45277(__init__)
>> >         5    0.015    0.003    0.024    0.005 {built-in method strptime}
>> >         1    0.012    0.012    0.013    0.013 socket.py:45(<module>)
>> >        10    0.011    0.001    0.015    0.002 collections.py:288(namedtuple)
>> >
>> > So it is not the classes, it is the code inspecting them on import.
>> >
>>
>> The script doesn't contain only the imports, it is also calling the
>> server, and we know parsing the result is slow, due to the excesive use
>> of "inspect", as I mentioned before:
>>
>>   [RFE][performance] - generate large scale list running to slow.
>>   https://bugzilla.redhat.com/show_bug.cgi?id=1221238#c2
>>
>> In the profiling information seems to corresponds to the script before
>> commenting out the part that lists all the VMs, as it looks like the
>> constructor of the VM class was called 21 times (you probably have 21
>> VMs):
>>
>>   21 0.004 1.308
>> build/bdist.linux-x86_64/egg/ovirtsdk/infrastructure/brokers.py:29139(VM)
>>
>> Actually I only have one VM running on the server.
> This time it contains only the import in the script, not calling the
> server. It shows:
> 210.0051.666  brokers.py:29139(VM)
>
> $./profile-stats -c myscript_contains_only_import.prof
>
> Thu Nov 26 09:11:59 2015    myscript_contains_only_import.prof
>
>  5453977 function calls (5452849 primitive calls) in 5.463 seconds
>
>Ordered by: internal time
>List reduced from 1083 to 20 due to restriction <20>
>
>    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
>      7468    2.029    0.000    4.914    0.001 inspect.py:247(getmembers)
>   1348678    0.975    0.000    1.342    0.000 inspect.py:59(isclass)
>      7474    0.737    0.000    0.737    0.000 {dir}
>   1348825    0.433    0.000    0.433    0.000 {getattr}
>   1365970    0.383    0.000    0.383    0.000 {isinstance}
>   1293455    0.211    0.000    0.211    0.000 {method 'append' of 'list' objects}
>      7469    0.163    0.000    0.163    0.000 {method 'sort' of 'list' objects}
>      7434    0.139    0.000    5.082    0.001 reflectionhelper.py:51(isModuleMember)
>      7670    0.061    0.000    5.158    0.001 params.py:367(__setattr__)
>         1    0.043    0.043    5.463    5.463 api.py:23(<module>)
>         1    0.042    0.042    0.139    0.139 errors.py:17(<module>)
>         1    0.027    0.027    5.248    5.248 brokers.py:22(<module>)
>         1    0.026    0.026    0.097    0.097 params.py:8(<module>)
>       118    0.023    0.000    5.187    0.044 params.py:45277(__init__)
>         1    0.018    0.018    0.025    0.025 connectionspool.py:17(<module>)
>         1    0.013    0.013    0.013    0.013 socket.py:45(<module>)
>        10    0.010    0.001    0.013    0.001 collections.py:288(namedtuple)
>     13637    0.009    0.000    0.009    0.000 {method 'lower' of 'str' objects}
>        35    0.009    0.000    0.035    0.001 reflectionhelper.py:28(getClasses)
>    173/27    0.008    0.000    0.019    0.001 sre_parse.py:388(_parse)
>
>
>Ordered by: internal time
>List reduced from 1083 to 20 due to restriction <20>
>
> Function  

Re: [ovirt-users] Node not talking NFS to Node.

2015-11-26 Thread admin
Hi

OK, a bit of background: I'm using this node/host with local storage, and I am
able to bring up the virtual machines on this node/host using my node/host 1.
I have 9 live virtual machines running off the local storage on the affected
node/host (Node2), but this host does not connect. On the oVirt manager I get
the following errors:

2015-Nov-26, 12:03
Failed to connect Host hosted_engine_2 to Storage Pool Default

2015-Nov-26, 12:03
Host hosted_engine_2 cannot access the Storage Domain(s) attached to the
Data Center Default. Setting Host state to Non-Operational.




The affected node/host (Node2) has the following vdsm.log:

[root@ov2 ~]# tail /var/log/vdsm/vdsm.log -n100
Thread-19199::DEBUG::2015-11-26
11:53:27,551::hsm::2412::Storage.HSM::(__prefetchDomains) Found SD uuids:
(u'12837024-f29c-4169-8881-2f5c79225f96',)
Thread-19199::DEBUG::2015-11-26
11:53:27,552::hsm::2468::Storage.HSM::(connectStorageServer) knownSDs:
{12837024-f29c-4169-8881-2f5c79225f96: storage.nfsSD.findDomain,
55fa1e8e-c22c-42a9-b43d-7a260f856d2d: storage.nfsSD.findDomain,
4edadc96-b3de-4c8e-9cd4-29e294fe8f93: storage.nfsSD.findDomain,
40d91488-5ffa-434e-a5d3-cf8044b85856: storage.nfsSD.findDomain}
Thread-19199::DEBUG::2015-11-26
11:53:27,555::hsm::2388::Storage.HSM::(__prefetchDomains) nfs local path:
/rhev/data-center/mnt/ov3-nfs.iracknet.com:_OV3Storage2
Thread-19199::DEBUG::2015-11-26
11:53:27,557::hsm::2412::Storage.HSM::(__prefetchDomains) Found SD uuids:
(u'40d91488-5ffa-434e-a5d3-cf8044b85856',)
Thread-19199::DEBUG::2015-11-26
11:53:27,558::hsm::2468::Storage.HSM::(connectStorageServer) knownSDs:
{12837024-f29c-4169-8881-2f5c79225f96: storage.nfsSD.findDomain,
55fa1e8e-c22c-42a9-b43d-7a260f856d2d: storage.nfsSD.findDomain,
4edadc96-b3de-4c8e-9cd4-29e294fe8f93: storage.nfsSD.findDomain,
40d91488-5ffa-434e-a5d3-cf8044b85856: storage.nfsSD.findDomain}
Thread-19199::DEBUG::2015-11-26
11:53:27,561::hsm::2388::Storage.HSM::(__prefetchDomains) nfs local path:
/rhev/data-center/mnt/ov2-nfs.iracknet.com:_OV2Storage
Thread-19199::DEBUG::2015-11-26
11:53:27,563::hsm::2412::Storage.HSM::(__prefetchDomains) Found SD uuids:
(u'4edadc96-b3de-4c8e-9cd4-29e294fe8f93',)
Thread-19199::DEBUG::2015-11-26
11:53:27,563::hsm::2468::Storage.HSM::(connectStorageServer) knownSDs:
{12837024-f29c-4169-8881-2f5c79225f96: storage.nfsSD.findDomain,
55fa1e8e-c22c-42a9-b43d-7a260f856d2d: storage.nfsSD.findDomain,
4edadc96-b3de-4c8e-9cd4-29e294fe8f93: storage.nfsSD.findDomain,
40d91488-5ffa-434e-a5d3-cf8044b85856: storage.nfsSD.findDomain}
Thread-19199::DEBUG::2015-11-26
11:53:27,567::hsm::2388::Storage.HSM::(__prefetchDomains) nfs local path:
/rhev/data-center/mnt/ov3-nfs.iracknet.com:_OV3Storage
Thread-19199::DEBUG::2015-11-26
11:53:27,569::hsm::2412::Storage.HSM::(__prefetchDomains) Found SD uuids:
(u'55fa1e8e-c22c-42a9-b43d-7a260f856d2d',)
Thread-19199::DEBUG::2015-11-26
11:53:27,569::hsm::2468::Storage.HSM::(connectStorageServer) knownSDs:
{12837024-f29c-4169-8881-2f5c79225f96: storage.nfsSD.findDomain,
55fa1e8e-c22c-42a9-b43d-7a260f856d2d: storage.nfsSD.findDomain,
4edadc96-b3de-4c8e-9cd4-29e294fe8f93: storage.nfsSD.findDomain,
40d91488-5ffa-434e-a5d3-cf8044b85856: storage.nfsSD.findDomain}
Thread-19199::INFO::2015-11-26
11:53:27,569::logUtils::47::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 477,
'id': '2c710afa-ed31-4c00-bad9-b03edef40a36'}, {'status': 477, 'id':
'3c9c7e81-d7df-47c0-a975-dc61c73fd63b'}, {'status': 477, 'id':
'7ca4bf49-e7cf-4db6-82a9-32d337272d94'}, {'status': 477, 'id':
'8cf1103b-5902-49d2-afbe-efef5642e01e'}, {'status': 0, 'id':
'a1a7b3d9-b329-4d95-8ac8-6fd33bff6147'}, {'status': 0, 'id':
'a37d0921-830f-4521-a59a-fb32affc6061'}, {'status': 0, 'id':
'ba13122e-3adf-4bb5-840c-9fa93d374c83'}, {'status': 0, 'id':
'f7b2fd84-4f75-40c4-a0c8-a24b96c01da3'}]}
Thread-19199::DEBUG::2015-11-26
11:53:27,570::task::1191::Storage.TaskManager.Task::(prepare)
Task=`bc9db359-1bce-44e9-a538-5d34df98edb2`::finished: {'statuslist':
[{'status': 477, 'id': '2c710afa-ed31-4c00-bad9-b03edef40a36'}, {'status':
477, 'id': '3c9c7e81-d7df-47c0-a975-dc61c73fd63b'}, {'status': 477, 'id':
'7ca4bf49-e7cf-4db6-82a9-32d337272d94'}, {'status': 477, 'id':
'8cf1103b-5902-49d2-afbe-efef5642e01e'}, {'status': 0, 'id':
'a1a7b3d9-b329-4d95-8ac8-6fd33bff6147'}, {'status': 0, 'id':
'a37d0921-830f-4521-a59a-fb32affc6061'}, {'status': 0, 'id':
'ba13122e-3adf-4bb5-840c-9fa93d374c83'}, {'status': 0, 'id':
'f7b2fd84-4f75-40c4-a0c8-a24b96c01da3'}]}
Thread-19199::DEBUG::2015-11-26
11:53:27,570::task::595::Storage.TaskManager.Task::(_updateState)
Task=`bc9db359-1bce-44e9-a538-5d34df98edb2`::moving from state preparing
-> state finished
Thread-19199::DEBUG::2015-11-26
11:53:27,571::resourceManager::940::Storage.ResourceManager.Owner::(release
All) Owner.releaseAll requests {} resources {}
Thread-19199::DEBUG::2015-11-26
11:53:27,571::resourceManager::977::Storage.ResourceManager.Owner::(cancelA
ll) Owner.cancelAll requests {}

Re: [ovirt-users] Node not talking NFS to Node.

2015-11-26 Thread Julian De Marchi

On 26/11/2015 10:05 PM, admin wrote:

(cwd None)
Thread-19211::ERROR::2015-11-26
11:56:23,213::storageServer::213::Storage.StorageServer.MountConnection::(c
onnect) Mount failed: (32, ';mount.nfs: Connection timed out\n')
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/storageServer.py", line 211, in connect
 self._mount.mount(self.options, self._vfsType)
   File "/usr/share/vdsm/storage/mount.py", line 223, in mount
 return self._runcmd(cmd, timeout)
   File "/usr/share/vdsm/storage/mount.py", line 239, in _runcmd
 raise MountError(rc, ";".join((out, err)))
MountError: (32, ';mount.nfs: Connection timed out\n')
Thread-19211::ERROR::2015-11-26
11:56:23,214::hsm::2449::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/hsm.py", line 2446, in connectStorageServer
 conObj.connect()
   File "/usr/share/vdsm/storage/storageServer.py", line 330, in connect
 return self._mountCon.connect()
   File "/usr/share/vdsm/storage/storageServer.py", line 219, in connect
 raise e
MountError: (32, ';mount.nfs: Connection timed out\n')


Have you tried to manually mount the NFS share? The above is a clue from your logs.
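
As a quick sanity check (the server, export path and mount point below are
placeholders, not values taken from this thread), the same mount that vdsm
attempts in mount.py can be tried by hand, for example from a small script:

  import subprocess

  # hypothetical manual NFS mount test; replace export and mountpoint with
  # the values used by the failing storage connection
  export = "nfs-server.example.com:/export/path"
  mountpoint = "/mnt/nfs-test"

  rc = subprocess.call(["mount", "-t", "nfs", export, mountpoint])
  print("mount returned %d" % rc)
  if rc == 0:
      subprocess.call(["umount", mountpoint])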

--julian
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error when trying to retrieve cluster, hosts via ovirt API

2015-11-26 Thread Juan Hernández
On 11/25/2015 11:13 AM, Jean-Pierre Ribeauville wrote:
> H,
> 
> Thanks for infos.
> 
> BTW,  as I need to retrieve cluster and datacenter to which my host belongs   
> within a software running on the host , I use a Python script 
> interacting with the ovirt-engine ; is there another way to get these  infos 
> "locally" on the host itself ?
> 

I'm not aware of any way to do that; I'd suggest that you stick to
interacting with the engine for that.
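
For reference, the lookup itself is only a couple of calls once you go through
the engine; a rough sketch with the 3.x Python SDK, where the engine URL,
credentials and host name are placeholders:

  from ovirtsdk.api import API

  api = API(url='https://engine.example.com/ovirt-engine/api',
            username='admin@internal', password='secret',
            ca_file='/etc/pki/ovirt-engine/ca.pem')

  host = api.hosts.get(name='myhost')
  cluster = api.clusters.get(id=host.get_cluster().get_id())
  datacenter = api.datacenters.get(id=cluster.get_data_center().get_id())

  print("host %s is in cluster %s, data center %s" % (
      host.get_name(), cluster.get_name(), datacenter.get_name()))

  api.disconnect()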

> 
> -----Original Message-----
> From: Juan Hernández [mailto:jhern...@redhat.com]
> Sent: Wednesday, November 18, 2015 19:10
> To: Jean-Pierre Ribeauville; Karli Sjöberg
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] Error when trying to retrieve cluster, hosts via
> ovirt API
> 
> On 11/18/2015 12:22 PM, Jean-Pierre Ribeauville wrote:
>> Hi,
>>
>> You were right .
>>
>> By setting correct URL and correct certificate file location , it's working.
>>
>> If I well understand , as this certificate file has to be on the 
>> client side , isn't a point if failure ?
>>
> 
> The certificate is needed to secure the SSL communication. You can do without 
> it, adding "insecure=True" to the constructor of the API object, but then the 
> identity of the server could be forged and you won't notice.
> 
>>
>> For example , for a cluster , it's possible to retrieve hosts 
>> belonging to the cluster via this call:
>>
>> hostoncluster = api.clusters.get(id=api.hosts.get(obj.name
>> ).get_cluster().get_id()).get_name()
>>
> 
> That should work, but when you are doing this kind of query it is usually 
> better to let the server do the search. You can achieve that using the same 
> query language that is used in the GUI search bar. For example, in the GUI 
> search bar you can type "Hosts: cluster=mycluster".
> With the SDK you can do the same, using the "list" method and the "query" 
> parameter:
> 
>   hosts = api.hosts.list(query="cluster=mycluster")
>   for host in hosts:
> print(host.get_name())
> 
>>
>> How may I know list of available "fields"  for  host, cluster, 
>> datacenters and so on .. .
>>
> 
> You can open the "params.py" file
> (/usr/lib/python2.7/site-packages/ovirtsdk/xml/params.py if you are using the 
> RPM packags) and look for the corrsponding class: Host, Cluster, DataCenter 
> etc. There you will see all the available "get_..."
> methods.
> 
>>
>> Thanks for help.
>>
>>  
>>
>> J.P.
>>
>>  
>>
>> *De :*Karli Sjöberg [mailto:karli.sjob...@slu.se] *Envoyé :* mardi 17 
>> novembre 2015 17:52 *À :* Jean-Pierre Ribeauville *Cc :* 
>> users@ovirt.org *Objet :* Re: [ovirt-users] Error when trying to 
>> retrieve cluster, hosts via ovirt API
>>
>>  
>>
>>
>> Den 17 nov. 2015 5:30 em skrev Jean-Pierre Ribeauville 
>> mailto:jpribeauvi...@axway.com>>:
>>>
>>> Hi,
>>>
>>>  
>>>
>>> By running python example got here ( :
>> http://website-humblec.rhcloud.com/ovirt-find-hosts-clusters-vm-runnin
>> g-status-ids-storage-domain-details-ovirt-dc-pythonovirt-sdk-part-3)
>>
>>> and modified with my connection  parameters, I got following error :
>>>
>>>  
>>>
>>>  
>>>
>>> Unexpected error: [ERROR]::oVirt API connection failure, (77, '')
>>>
>>>  
>>>
>>> How may I get error  codes meanings ?
>>
>> I don't know the meaning but I saw that APIURL was wrong, it should be:
>>
>> APIURL = "https://${ENGINE_ADDRESS}/ovirt-engine/api
>> "
>>
>> Could you correct that and try again?
>>
>> /K
>>
>>>
>>>  
>>>
>>> Thanks for help.
>>>
>>>  
>>>
>>>  
>>>
>>>  
>>>
>>> J.P. Ribeauville
>>>
>>>  
>>>
>>> P: +33.(0).1.47.17.20.49
>>>
>>> .
>>>
>>> Puteaux 3 Etage 5  Bureau 4
>>>
>>>  
>>>
>>> jpribeauvi...@axway.com  
>>> http://www.axway.com
>>>
>>>  
>>>
>>>  
>>>
>>> P Pensez à l'environnement avant d'imprimer.
>>>
>>>  
>>>
>>>  
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
> 
> 
> --
> Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3ºD, 
> 28016 Madrid, Spain Inscrita en el Reg. Mercantil de Madrid - C.I.F. 
> B82657941 - Red Hat S.L.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


-- 
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] timeouts

2015-11-26 Thread p...@email.cz

Hello,
can anybody help me with these timeouts?
Volumes are not active yet (bricks down).

Description of the Gluster setup below...

*/var/log/glusterfs/**etc-glusterfs-glusterd.vol.log*
[2015-11-26 14:44:47.174221] I [MSGID: 106004] 
[glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: Peer 
<1hp1-SAN> (<87fc7db8-aba8-41f2-a1cd-b77e83b17436>), in state <Peer in 
Cluster>, has disconnected from glusterd.
[2015-11-26 14:44:47.174354] W 
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c) 
[0x7fb7039d44dc] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162) 
[0x7fb7039de542] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a) 
[0x7fb703a79b4a] ) 0-management: Lock for vol 1HP12-P1 not held
[2015-11-26 14:44:47.17] W 
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c) 
[0x7fb7039d44dc] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162) 
[0x7fb7039de542] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a) 
[0x7fb703a79b4a] ) 0-management: Lock for vol 1HP12-P3 not held
[2015-11-26 14:44:47.174521] W 
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c) 
[0x7fb7039d44dc] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162) 
[0x7fb7039de542] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a) 
[0x7fb703a79b4a] ) 0-management: Lock for vol 2HP12-P1 not held
[2015-11-26 14:44:47.174662] W 
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c) 
[0x7fb7039d44dc] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162) 
[0x7fb7039de542] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a) 
[0x7fb703a79b4a] ) 0-management: Lock for vol 2HP12-P3 not held
[2015-11-26 14:44:47.174532] W [MSGID: 106118] 
[glusterd-handler.c:5087:__glusterd_peer_rpc_notify] 0-management: Lock 
not released for 2HP12-P1
[2015-11-26 14:44:47.174675] W [MSGID: 106118] 
[glusterd-handler.c:5087:__glusterd_peer_rpc_notify] 0-management: Lock 
not released for 2HP12-P3
[2015-11-26 14:44:49.423334] I [MSGID: 106488] 
[glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 0-glusterd: 
Received get vol req
The message "I [MSGID: 106488] 
[glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 0-glusterd: 
Received get vol req" repeated 4 times between [2015-11-26 
14:44:49.423334] and [2015-11-26 14:44:49.429781]
[2015-11-26 14:44:51.148711] I [MSGID: 106163] 
[glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack] 
0-management: using the op-version 30702
[2015-11-26 14:44:52.177266] W [socket.c:869:__socket_keepalive] 
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 12, Invalid 
argument
[2015-11-26 14:44:52.177291] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument
[2015-11-26 14:44:53.180426] W [socket.c:869:__socket_keepalive] 
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 17, Invalid 
argument
[2015-11-26 14:44:53.180447] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument
[2015-11-26 14:44:52.395468] I [MSGID: 106163] 
[glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack] 
0-management: using the op-version 30702
[2015-11-26 14:44:54.851958] I [MSGID: 106488] 
[glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 0-glusterd: 
Received get vol req
[2015-11-26 14:44:57.183969] W [socket.c:869:__socket_keepalive] 
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 19, Invalid 
argument
[2015-11-26 14:44:57.183990] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument


After volume creation everything works fine (volumes up), but then, after 
several reboots (yum updates), the volumes failed due to timeouts.


Gluster description:

4 nodes with 4 volumes, replica 2
oVirt 3.6 - the latest
gluster 3.7.6 - the latest
vdsm 4.17.999 - from the git repo
oVirt - mgmt nodes 172.16.0.0
oVirt - bricks 16.0.0.0 ("SAN" - defined as the "gluster" network)
Network works fine, no lost packets

# gluster volume status
Staging failed on 2hp1-SAN. Please check log file for details.
Staging failed on 1hp2-SAN. Please check log file for details.
Staging failed on 2hp2-SAN. Please check log file for details.

# gluster volume info

Volume Name: 1HP12-P1
Type: Replicate
Volume ID: 6991e82c-9745-4203-9b0a-df202060f455
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 1hp1-SAN:/STORAGE/p1/G
Brick2: 1hp2-SAN:/STORAGE/p1/G
Options Reconfigured:
performance.readdir-ahead: on

Volume Name: 1HP12-P3
Ty

Re: [ovirt-users] Windows 10

2015-11-26 Thread Yaniv Dary
Windows 10 guests will only work on 3.6 with Fedora or EL7.2 hosts. It might
work with EL6 at some point, but it currently doesn't.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Tue, Oct 6, 2015 at 8:53 AM, Koen Vanoppen 
wrote:

> Dear all,
>
> Yes, onther question :-). This time it's about windows 10.
> I'm running ovirt 3.5.4 and I don't manage to install windows 10 on it.
> Keeps giving me a blue screen (yes, I know, it's still a windows... ;-) )
> on reboot.
>
> Are there any special settings you need to enable when creating the vm?
> Which OS do I need to select? Or shall I just wait untile the relase of
> ovirt 3.6 :-) ?
>
> Kind regards,
>
> Koen
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM Network activity on RHEV-M UI

2015-11-26 Thread Yaniv Dary
You will need to install the guest agent on the VM.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Mon, Oct 12, 2015 at 2:00 AM, Marc Seward  wrote:

> I'm generating network activity on a RHEV 3.5 VM using iperf.The VM acts
> as an iperf client.On the client,iperf reports that data has been
> successfully sent to iperf server.The iperf server also shows that it's
> successfully receiving data from the iperf client.But,network is at 0% on
> the RHEV-M UI.The client and server are on different private networks.
>
> On the same VM,when I generate network activity by fetching a file from a
> public network using wget,the network column correctly shows activity on
> the RHEV-M UI for the VM.
>
> Could someone help me understand why I am unable to see network activity
> on the RHEV-M UI when iperf is used?
>
> Appreciate your help.TIA.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Archiving huge ovirt_engine_history table

2015-11-26 Thread Yaniv Dary
You can configure the DWH via its conf.d directory to keep fewer records.
See the README in that folder.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Wed, Oct 14, 2015 at 5:15 AM, Eric Wong  wrote:

> Hello oVirt guru out there:
>
> I notice our oVirt engine postgres db size is growing quite fast for past
> couple of months.  I checked the database size.  Found that our
> ovirt_engine_history is 73GB in size.
>
>
> engine=# \connect ovirt_engine_history
> You are now connected to database "ovirt_engine_history" as user
> "postgres".
> ovirt_engine_history=# SELECT pg_size_pretty( pg_database_size(
> current_database() ) ) As human_size
>, pg_database_size( current_database() ) As raw_size;
> human_size |  raw_size
> +-
> 73 GB  | 78444213368
> (1 row)
>
>
> Brief check the records, there are entries dated back 2014.
>
> I want to see if there is a safe way to archive and remove some of the
> older records?
>
> Thanks,
> Eric
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Corruped disks

2015-11-26 Thread Yaniv Dary
We will need logs and a bug to track the issue. Also info on the OS of the
guest will help.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Wed, Oct 14, 2015 at 3:42 PM, Koen Vanoppen 
wrote:

> Dear all,
>
> lately we are experience some strange behaviour on our vms...
> Every now and then we have disks that went corrupt. Is there a chance that
> ovirt is the issue here or...? It happens (luckily) on our DEV/UAT cluster.
> Since the last 4 weeks, we already had 6 vm's that went totaly corrupt...
>
> Kind regards,
>
> Koen
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Adding direct lun from API doesn't populate attributes like size, vendor, etc

2015-11-26 Thread Yaniv Dary
Please open a new bug if it still doesn't work.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Fri, Oct 16, 2015 at 7:54 PM, Groten, Ryan 
wrote:

> Using this python I am able to create a direct FC lun properly (and it
> works if the lun_id is valid).  But in the GUI after the disk is added none
> of the fields are populated except LUN ID (Size is <1GB, Serial, Vendor,
> Product ID are all blank).
>
>
>
> I see this Bugzilla [1] is very similar (for iSCSI) which says the issue
> was fixed in 3.5.0, but it seems to still be present in 3.5.1 for Fibre
> Channel Direct Luns at least.
>
>
>
> Here’s the python I used to test:
>
>
>
> lun_id = '3600a098038303053453f463045727654'
>
> lu = params.LogicalUnit()
>
> lu.set_id(lun_id)
>
> lus = list()
>
> lus.append(lu)
>
>
>
> storage_params = params.Storage()
>
> storage_params.set_id(lun_id)
>
> storage_params.set_logical_unit(lus)
>
> storage_params.set_type('fcp')
>
> disk_params = params.Disk()
>
> disk_params.set_format('raw')
>
> disk_params.set_interface('virtio')
>
> disk_params.set_alias(disk_name)
>
> disk_params.set_active(True)
>
> disk_params.set_lun_storage(storage_params)
>
> disk = api.disks.add(disk_params)
>
>
>
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1096217
>
>
>
> Thanks,
>
> Ryan
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Mix local and shared storage on ovirt3.6 rc?

2015-11-26 Thread Allon Mureinik
Not sure I understand the issue - is this local storage (in the sense
that oVirt defines it as local, not in the sense of it being a drive on the
same host vdsm is running on) or glusterfs?

Could you add some logs please?

On Wed, Oct 7, 2015 at 7:03 AM, Liam Curtis  wrote:

> Hello all,
>
> Loving ovirt...Have reinstalled many a time trying to understand and
> thought I had this working, though now that everything operating properly
> it seems this functionality is not possible.
>
> I am running hosted engine over glusterfs and would also like to use some
> of the other bricks I have set up on the gluster host, but when I try to
> create a new gluster cluster in data center,  I get error message:
>
> Failed to connect host  to Storage Pool Default.
>
> I dont want to use just gluster shared storage. Any way to work around
> this?
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] Controlling UI table column visibility and position

2015-11-26 Thread Vojtech Szocs


- Original Message -
> From: "Eli Mesika" 
> To: "Vojtech Szocs" 
> Cc: "users" , "devel" 
> Sent: Wednesday, November 25, 2015 10:14:00 PM
> Subject: Re: [ovirt-devel] Controlling UI table column visibility and position
> 
> 
> 
> - Original Message -
> > From: "Oved Ourfali" 
> > To: "Vojtech Szocs" 
> > Cc: "users" , "devel" 
> > Sent: Wednesday, November 25, 2015 7:32:33 PM
> > Subject: Re: [ovirt-devel] Controlling UI table column visibility and
> > position
> > 
> > 
> > 
> > That's awesome!
> > Looking forward to playing around with it!
> > 
> > Thanks,
> > Oved
> > On Nov 25, 2015 6:42 PM, "Vojtech Szocs" < vsz...@redhat.com > wrote:
> > 
> > 
> > Dear developers and users,
> > 
> > it's now possible to tweak table column visibility and position
> > through header context menu in WebAdmin's main & sub tabs [1,2].
> > 
> > [1] https://gerrit.ovirt.org/#/c/43401/
> > [2] https://gerrit.ovirt.org/#/c/47542/
> > 
> > This allows you to turn "unwanted" columns off and re-arrange
> > those which are visible to match your personal preference.
> > 
> > Screenshot of customizing VM main tab:
> > 
> > https://imgur.com/5dfh8QA
> 
> Really useful

Thanks! :)

> However, I would filter out the status column since this column does not have a
> title and also I cannot see a use-case for removing this column from the
> view

For columns without explicit title like VM status (icon) column, I gave
these a "meaningful" title for context menu purposes, like "Status Icon"
in the screenshot above.

I wanted to give users full freedom as to which columns are visible and
in which order. Even if you turn off all columns, you can turn them on
again :) but I agree that VM status is quite essential info and turning
it off doesn't make much sense..

I've also created RFE to improve existing up/down arrow user experience
by replacing it with drag'n'drop behavior:

  https://bugzilla.redhat.com/1285499 

(Please note that this feature landed in 'master' branch, no backports.)

Vojtech

> 
> > 
> > There's also RFE [3] to persist (remember) such column settings
> > in the browser, similar to persisting other client-side options.
> > 
> > [3] https://bugzilla.redhat.com/1285456
> > 
> > Regards,
> > Vojtech
> > ___
> > Devel mailing list
> > de...@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> > 
> > ___
> > Devel mailing list
> > de...@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error when trying to retrieve cluster, hosts via ovirt API

2015-11-26 Thread Jean-Pierre Ribeauville
Hi,

I'll  follow your hint.

Thx a lot.

J.P.

-Original Message-
From: Juan Hernández [mailto:jhern...@redhat.com]
Sent: Thursday, 26 November 2015 15:26
To: Jean-Pierre Ribeauville; Karli Sjöberg
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Error when trying to retrieve cluster, hosts via 
ovirt API

On 11/25/2015 11:13 AM, Jean-Pierre Ribeauville wrote:
> Hi,
> 
> Thanks for infos.
> 
> BTW, as I need to retrieve the cluster and datacenter to which my host belongs
> from software running on the host, I use a Python script
> interacting with the ovirt-engine; is there another way to get this info
> "locally" on the host itself?
> 

I'm not aware of any way to do that, I'd suggest that you stick to
interacting with the engine for that.

> 
> -Original Message-
> From: Juan Hernández [mailto:jhern...@redhat.com]
> Sent: Wednesday, 18 November 2015 19:10
> To: Jean-Pierre Ribeauville; Karli Sjöberg
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] Error when trying to retrieve cluster, hosts via 
> ovirt API
> 
> On 11/18/2015 12:22 PM, Jean-Pierre Ribeauville wrote:
>> Hi,
>>
>> You were right .
>>
>> By setting the correct URL and the correct certificate file location, it's working.
>>
>> If I understand well, as this certificate file has to be on the 
>> client side, isn't it a point of failure?
>>
> 
> The certificate is needed to secure the SSL communication. You can do without 
> it, adding "insecure=True" to the constructor of the API object, but then the 
> identity of the server could be forged and you won't notice.
> 
>>
>> For example , for a cluster , it's possible to retrieve hosts 
>> belonging to the cluster via this call:
>>
>> hostoncluster = api.clusters.get(id=api.hosts.get(obj.name).get_cluster().get_id()).get_name()
>>
> 
> That should work, but when you are doing this kind of query it is usually 
> better to let the server do the search. You can achieve that using the same 
> query language that is used in the GUI search bar. For example, in the GUI 
> search bar you can type "Hosts: cluster=mycluster".
> With the SDK you can do the same, using the "list" method and the "query" 
> parameter:
> 
>   hosts = api.hosts.list(query="cluster=mycluster")
>   for host in hosts:
>     print(host.get_name())
> 
>>
>> How may I know the list of available "fields" for host, cluster, 
>> datacenters and so on?
>>
> 
> You can open the "params.py" file
> (/usr/lib/python2.7/site-packages/ovirtsdk/xml/params.py if you are using the 
> RPM packages) and look for the corresponding class: Host, Cluster, DataCenter 
> etc. There you will see all the available "get_..."
> methods.
> 
>>
>> Thanks for help.
>>
>>  
>>
>> J.P.
>>
>>  
>>
>> *From:* Karli Sjöberg [mailto:karli.sjob...@slu.se] *Sent:* Tuesday, 17 
>> November 2015 17:52 *To:* Jean-Pierre Ribeauville *Cc:* 
>> users@ovirt.org *Subject:* Re: [ovirt-users] Error when trying to 
>> retrieve cluster, hosts via ovirt API
>>
>>  
>>
>>
>> On 17 Nov. 2015 at 5:30 PM, Jean-Pierre Ribeauville 
>> mailto:jpribeauvi...@axway.com>> wrote:
>>>
>>> Hi,
>>>
>>>  
>>>
>>> By running the python example from here (
>> http://website-humblec.rhcloud.com/ovirt-find-hosts-clusters-vm-running-status-ids-storage-domain-details-ovirt-dc-pythonovirt-sdk-part-3)
>>
>>> and modified with my connection parameters, I got the following error:
>>>
>>>  
>>>
>>>  
>>>
>>> Unexpected error: [ERROR]::oVirt API connection failure, (77, '')
>>>
>>>  
>>>
>>> How may I get the meanings of the error codes?
>>
>> I don't know the meaning but I saw that APIURL was wrong, it should be:
>>
>> APIURL = "https://${ENGINE_ADDRESS}/ovirt-engine/api"
>>
>> Could you correct that and try again?
>>
>> /K
>>
>>>
>>>  
>>>
>>> Thanks for help.
>>>
>>>  
>>>
>>>  
>>>
>>>  
>>>
>>> J.P. Ribeauville
>>>
>>>  
>>>
>>> P: +33.(0).1.47.17.20.49
>>>
>>> .
>>>
>>> Puteaux 3 Etage 5  Bureau 4
>>>
>>>  
>>>
>>> jpribeauvi...@axway.com  
>>> http://www.axway.com
>>>
>>>  
>>>
>>>  
>>>
>>> P Please consider the environment before printing.
>>>
>>>  
>>>
>>>  
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
> 
> 
> --
> Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3ºD, 
> 28016 Madrid, Spain Inscrita en el Reg. Mercantil de Madrid - C.I.F. 
> B82657941 - Red Hat S.L.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


-- 
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Inscrita en el Reg. Mercantil de Madrid - C.I.F. B82657941 - Red Hat S.L.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
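
Putting Juan's hints together, a minimal sketch with the oVirt Python SDK v3
(ovirtsdk) might look like this; the engine address, credentials and cluster
name are placeholders:

#!/usr/bin/env python
# Sketch: let the engine do the search (like the GUI search bar
# "Hosts: cluster=mycluster"), then introspect a params class for its fields.
# Engine URL, credentials and cluster name are placeholders.
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/ovirt-engine/api',
          username='admin@internal',
          password='password',
          insecure=True)  # insecure skips certificate checking, as discussed above

# Query hosts by cluster on the server side
for host in api.hosts.list(query='cluster=mycluster'):
    print('%s is in cluster id %s' % (host.get_name(), host.get_cluster().get_id()))

# List the available "fields" of a type by looking at its get_... accessors
print([m for m in dir(params.Host) if m.startswith('get_')])

api.disconnect()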


Re: [ovirt-users] Attach Export Domain (NFS) to multiple datacenter

2015-11-26 Thread Punit Dambiwal
Hi Simone,

Thanks.. but how can I upload my existing OS template to the docker-deployed glance? Is
there any good how-to for the same?


On Thu, Nov 26, 2015 at 4:41 PM, Simone Tiraboschi 
wrote:

>
>
> On Thu, Nov 26, 2015 at 7:06 AM, Punit Dambiwal  wrote:
>
>> Hi Simone,
>>
>> Yes.. i can but i want to use the same NFS storage with OS template
>> inside..to use all the local storage server to provision the guest VM's..
>>
>> Thanks,
>> punit
>>
>>
>
> Did you checked the glance integration?
> http://www.ovirt.org/Features/Glance_Integration
>
> Now on 3.6 you can also deploy and configure glance via docker from
> engine-setup:
> http://www.ovirt.org/CinderGlance_Docker_Integration
>
>
>
>> On Wed, Nov 25, 2015 at 6:24 PM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Wed, Nov 25, 2015 at 5:50 AM, Punit Dambiwal 
>>> wrote:
>>>
 Hi,

 I want to attach the same nfs (export) volume to multiple datacenter in
 the ovirt..is it possible to do so..or any workaround for the same..

>>>
>>>
>>> As far as I know not at the same time.
>>> You have to detach and then attach do the new datacenter.
>>>
>>>

 Thanks.
 Punit

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
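
As a rough illustration of the Glance route (a sketch only, not an
oVirt-documented procedure: it assumes the standard python-glanceclient CLI is
installed and can reach the Glance endpoint, every path, name and credential
below is a placeholder, and how the docker-deployed Glance authenticates
depends on how engine-setup configured it), uploading an existing template
image could look roughly like:

# Convert the template disk to qcow2 if needed, then upload it to Glance.
# Image path, image name and the auth variables are placeholders.
qemu-img convert -O qcow2 /path/to/template-disk.img /tmp/centos7-template.qcow2

export OS_AUTH_URL=http://glance-host:5000/v2.0
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_TENANT_NAME=admin

glance image-create --name "centos7-template" \
    --disk-format qcow2 --container-format bare \
    --file /tmp/centos7-template.qcow2

Once the image is in Glance and the provider is added to the engine, it can be
imported into each datacenter's storage domain as a template.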


Re: [ovirt-users] timeouts

2015-11-26 Thread Sahina Bose

[+ gluster-users]

On 11/26/2015 08:37 PM, p...@email.cz wrote:

Hello,
can anybody help me with these timeouts?
Volumes are not active yet (bricks down)

desc. of gluster bellow ...

*/var/log/glusterfs/**etc-glusterfs-glusterd.vol.log*
[2015-11-26 14:44:47.174221] I [MSGID: 106004] 
[glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: 
Peer <1hp1-SAN> (<87fc7db8-aba8-41f2-a1cd-b77e83b17436>), in state 
, has disconnected from glusterd.
[2015-11-26 14:44:47.174354] W 
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c) 
[0x7fb7039d44dc] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162) 
[0x7fb7039de542] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a) 
[0x7fb703a79b4a] ) 0-management: Lock for vol 1HP12-P1 not held
[2015-11-26 14:44:47.17] W 
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c) 
[0x7fb7039d44dc] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162) 
[0x7fb7039de542] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a) 
[0x7fb703a79b4a] ) 0-management: Lock for vol 1HP12-P3 not held
[2015-11-26 14:44:47.174521] W 
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c) 
[0x7fb7039d44dc] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162) 
[0x7fb7039de542] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a) 
[0x7fb703a79b4a] ) 0-management: Lock for vol 2HP12-P1 not held
[2015-11-26 14:44:47.174662] W 
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c) 
[0x7fb7039d44dc] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162) 
[0x7fb7039de542] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a) 
[0x7fb703a79b4a] ) 0-management: Lock for vol 2HP12-P3 not held
[2015-11-26 14:44:47.174532] W [MSGID: 106118] 
[glusterd-handler.c:5087:__glusterd_peer_rpc_notify] 0-management: 
Lock not released for 2HP12-P1
[2015-11-26 14:44:47.174675] W [MSGID: 106118] 
[glusterd-handler.c:5087:__glusterd_peer_rpc_notify] 0-management: 
Lock not released for 2HP12-P3
[2015-11-26 14:44:49.423334] I [MSGID: 106488] 
[glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 0-glusterd: 
Received get vol req
The message "I [MSGID: 106488] 
[glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 0-glusterd: 
Received get vol req" repeated 4 times between [2015-11-26 
14:44:49.423334] and [2015-11-26 14:44:49.429781]
[2015-11-26 14:44:51.148711] I [MSGID: 106163] 
[glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack] 
0-management: using the op-version 30702
[2015-11-26 14:44:52.177266] W [socket.c:869:__socket_keepalive] 
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 12, Invalid 
argument
[2015-11-26 14:44:52.177291] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument
[2015-11-26 14:44:53.180426] W [socket.c:869:__socket_keepalive] 
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 17, Invalid 
argument
[2015-11-26 14:44:53.180447] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument
[2015-11-26 14:44:52.395468] I [MSGID: 106163] 
[glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack] 
0-management: using the op-version 30702
[2015-11-26 14:44:54.851958] I [MSGID: 106488] 
[glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 0-glusterd: 
Received get vol req
[2015-11-26 14:44:57.183969] W [socket.c:869:__socket_keepalive] 
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 19, Invalid 
argument
[2015-11-26 14:44:57.183990] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument


After volume creation all works fine (volumes up), but then, after 
several reboots (yum updates) the volumes failed due to timeouts.


Gluster description:

4 nodes with 4 volumes replica 2
oVirt 3.6 - the last
gluster 3.7.6 - the last
vdsm 4.17.999 - from git repo
oVirt - mgmt.nodes 172.16.0.0
oVirt - bricks 16.0.0.0 ( "SAN" - defined as "gluster" net)
Network works fine, no lost packets

# gluster volume status
Staging failed on 2hp1-SAN. Please check log file for details.
Staging failed on 1hp2-SAN. Please check log file for details.
Staging failed on 2hp2-SAN. Please check log file for details.

# gluster volume info

Volume Name: 1HP12-P1
Type: Replicate
Volume ID: 6991e82c-9745-4203-9b0a-df202060f455
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 1hp1-SAN:/STORAGE/p1/G
Brick2: 1hp2-SAN:/STORAGE/p1/G
Options Reconfigure

Re: [ovirt-users] [Gluster-users] timeouts

2015-11-26 Thread Atin Mukherjee


On 11/27/2015 10:52 AM, Sahina Bose wrote:
> [+ gluster-users]
> 
> On 11/26/2015 08:37 PM, p...@email.cz wrote:
>> Hello,
>> can anybody help me with these timeouts?
>> Volumes are not active yet (bricks down)
>>
>> desc. of gluster bellow ...
>>
>> */var/log/glusterfs/**etc-glusterfs-glusterd.vol.log*
>> [2015-11-26 14:44:47.174221] I [MSGID: 106004]
>> [glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management:
>> Peer <1hp1-SAN> (<87fc7db8-aba8-41f2-a1cd-b77e83b17436>), in state
>> , has disconnected from glusterd.
>> [2015-11-26 14:44:47.174354] W
>> [glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
>> (-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)
>> [0x7fb7039d44dc]
>> -->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)
>> [0x7fb7039de542]
>> -->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)
>> [0x7fb703a79b4a] ) 0-management: Lock for vol 1HP12-P1 not held
>> [2015-11-26 14:44:47.17] W
>> [glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
>> (-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)
>> [0x7fb7039d44dc]
>> -->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)
>> [0x7fb7039de542]
>> -->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)
>> [0x7fb703a79b4a] ) 0-management: Lock for vol 1HP12-P3 not held
>> [2015-11-26 14:44:47.174521] W
>> [glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
>> (-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)
>> [0x7fb7039d44dc]
>> -->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)
>> [0x7fb7039de542]
>> -->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)
>> [0x7fb703a79b4a] ) 0-management: Lock for vol 2HP12-P1 not held
>> [2015-11-26 14:44:47.174662] W
>> [glusterd-locks.c:681:glusterd_mgmt_v3_unlock]
>> (-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)
>> [0x7fb7039d44dc]
>> -->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162)
>> [0x7fb7039de542]
>> -->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a)
>> [0x7fb703a79b4a] ) 0-management: Lock for vol 2HP12-P3 not held
>> [2015-11-26 14:44:47.174532] W [MSGID: 106118]
>> [glusterd-handler.c:5087:__glusterd_peer_rpc_notify] 0-management:
>> Lock not released for 2HP12-P1
>> [2015-11-26 14:44:47.174675] W [MSGID: 106118]
>> [glusterd-handler.c:5087:__glusterd_peer_rpc_notify] 0-management:
>> Lock not released for 2HP12-P3
>> [2015-11-26 14:44:49.423334] I [MSGID: 106488]
>> [glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 0-glusterd:
>> Received get vol req
>> The message "I [MSGID: 106488]
>> [glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 0-glusterd:
>> Received get vol req" repeated 4 times between [2015-11-26
>> 14:44:49.423334] and [2015-11-26 14:44:49.429781]
>> [2015-11-26 14:44:51.148711] I [MSGID: 106163]
>> [glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack]
>> 0-management: using the op-version 30702
>> [2015-11-26 14:44:52.177266] W [socket.c:869:__socket_keepalive]
>> 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 12, Invalid
>> argument
>> [2015-11-26 14:44:52.177291] E [socket.c:2965:socket_connect]
>> 0-management: Failed to set keep-alive: Invalid argument
>> [2015-11-26 14:44:53.180426] W [socket.c:869:__socket_keepalive]
>> 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 17, Invalid
>> argument
>> [2015-11-26 14:44:53.180447] E [socket.c:2965:socket_connect]
>> 0-management: Failed to set keep-alive: Invalid argument
>> [2015-11-26 14:44:52.395468] I [MSGID: 106163]
>> [glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack]
>> 0-management: using the op-version 30702
>> [2015-11-26 14:44:54.851958] I [MSGID: 106488]
>> [glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 0-glusterd:
>> Received get vol req
>> [2015-11-26 14:44:57.183969] W [socket.c:869:__socket_keepalive]
>> 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 19, Invalid
>> argument
>> [2015-11-26 14:44:57.183990] E [socket.c:2965:socket_connect]
>> 0-management: Failed to set keep-alive: Invalid argument
>>
>> After volume creation all works fine (volumes up), but then, after
>> several reboots (yum updates) the volumes failed due to timeouts.
>>
>> Gluster description:
>>
>> 4 nodes with 4 volumes replica 2
>> oVirt 3.6 - the last
>> gluster 3.7.6 - the last
>> vdsm 4.17.999 - from git repo
>> oVirt - mgmt.nodes 172.16.0.0
>> oVirt - bricks 16.0.0.0 ( "SAN" - defined as "gluster" net)
>> Network works fine, no lost packets
>>
>> # gluster volume status
>> Staging failed on 2hp1-SAN. Please check log file for details.
>> Staging failed on 1hp2-SAN. Please check log file for details.
>> Staging failed on 2hp2-SAN. Please check log fi

Re: [ovirt-users] timeouts

2015-11-26 Thread knarra

Hi Paf1,

Looks like when you reboot the nodes, glusterd does not start up on 
one node, and due to this the node gets disconnected from the other node (that 
is what I see from the logs). After reboot, once your systems are up and 
running, can you check if glusterd is running on all the nodes? Can you 
please let me know which build of gluster you are using?
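
As a quick check (a minimal sketch; on EL6 nodes 'service glusterd status'
would replace the systemctl call), running the following on every node after
reboot shows whether glusterd is up, which build is installed and how the
peers see each other:

# run on each gluster node after it comes back up
systemctl status glusterd      # is the management daemon running?
gluster --version              # which gluster build is installed
gluster peer status            # do all peers show as connected?
gluster volume status          # are the bricks up?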


For more info please read, 
http://www.gluster.org/pipermail/gluster-users.old/2015-June/022377.html


Thanks
kasturi

On 11/27/2015 10:52 AM, Sahina Bose wrote:

[+ gluster-users]

On 11/26/2015 08:37 PM, p...@email.cz wrote:

Hello,
can anybody help me with these timeouts?
Volumes are not active yet (bricks down)

desc. of gluster bellow ...

*/var/log/glusterfs/**etc-glusterfs-glusterd.vol.log*
[2015-11-26 14:44:47.174221] I [MSGID: 106004] 
[glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: 
Peer <1hp1-SAN> (<87fc7db8-aba8-41f2-a1cd-b77e83b17436>), in state 
, has disconnected from glusterd.
[2015-11-26 14:44:47.174354] W 
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c) 
[0x7fb7039d44dc] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162) 
[0x7fb7039de542] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a) 
[0x7fb703a79b4a] ) 0-management: Lock for vol 1HP12-P1 not held
[2015-11-26 14:44:47.17] W 
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c) 
[0x7fb7039d44dc] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162) 
[0x7fb7039de542] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a) 
[0x7fb703a79b4a] ) 0-management: Lock for vol 1HP12-P3 not held
[2015-11-26 14:44:47.174521] W 
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c) 
[0x7fb7039d44dc] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162) 
[0x7fb7039de542] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a) 
[0x7fb703a79b4a] ) 0-management: Lock for vol 2HP12-P1 not held
[2015-11-26 14:44:47.174662] W 
[glusterd-locks.c:681:glusterd_mgmt_v3_unlock] 
(-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c) 
[0x7fb7039d44dc] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x162) 
[0x7fb7039de542] 
-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x58a) 
[0x7fb703a79b4a] ) 0-management: Lock for vol 2HP12-P3 not held
[2015-11-26 14:44:47.174532] W [MSGID: 106118] 
[glusterd-handler.c:5087:__glusterd_peer_rpc_notify] 0-management: 
Lock not released for 2HP12-P1
[2015-11-26 14:44:47.174675] W [MSGID: 106118] 
[glusterd-handler.c:5087:__glusterd_peer_rpc_notify] 0-management: 
Lock not released for 2HP12-P3
[2015-11-26 14:44:49.423334] I [MSGID: 106488] 
[glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 
0-glusterd: Received get vol req
The message "I [MSGID: 106488] 
[glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 
0-glusterd: Received get vol req" repeated 4 times between 
[2015-11-26 14:44:49.423334] and [2015-11-26 14:44:49.429781]
[2015-11-26 14:44:51.148711] I [MSGID: 106163] 
[glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack] 
0-management: using the op-version 30702
[2015-11-26 14:44:52.177266] W [socket.c:869:__socket_keepalive] 
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 12, Invalid 
argument
[2015-11-26 14:44:52.177291] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument
[2015-11-26 14:44:53.180426] W [socket.c:869:__socket_keepalive] 
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 17, Invalid 
argument
[2015-11-26 14:44:53.180447] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument
[2015-11-26 14:44:52.395468] I [MSGID: 106163] 
[glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack] 
0-management: using the op-version 30702
[2015-11-26 14:44:54.851958] I [MSGID: 106488] 
[glusterd-handler.c:1472:__glusterd_handle_cli_get_volume] 
0-glusterd: Received get vol req
[2015-11-26 14:44:57.183969] W [socket.c:869:__socket_keepalive] 
0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 19, Invalid 
argument
[2015-11-26 14:44:57.183990] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument


After volume creation all works fine (volumes up), but then, 
after several reboots (yum updates) the volumes failed due to timeouts.


Gluster description:

4 nodes with 4 volumes replica 2
oVirt 3.6 - the last
gluster 3.7.6 - the last
vdsm 4.17.999 - from git repo
oVirt - mgmt.nodes 172.16.0.0
oVirt - bricks 16.0.0.0 ( "SAN" - defined as 

[ovirt-users] Bug?

2015-11-26 Thread Koen Vanoppen
Hi All,

One of our users on ovirt, who was always able to log in with his AD account,
now all of a sudden can't log in anymore... I already tried kicking him out
and putting him back in again, but no change... The following error
appears in the log file when he logs in:

2015-11-27 07:01:00,418 ERROR
[org.ovirt.engine.core.bll.aaa.LoginAdminUserCommand]
(ajp--127.0.0.1-8702-1) Error during CanDoActionFailure.: Class: class
org.ovirt.engine.core.extensions.mgr.ExtensionInvokeCommandFailedException
Input:
{Extkey[name=EXTENSION_INVOKE_CONTEXT;type=class
org.ovirt.engine.api.extensions.ExtMap;uuid=EXTENSION_INVOKE_CONTEXT[886d2ebb-312a-49ae-9cc3-e1f849834b7d];]={Extkey[name=EXTENSION_INTERFACE_VERSION_MAX;type=class
java.lang.Integer;uuid=EXTENSION_INTERFACE_VERSION_MAX[f4cff49f-2717-4901-8ee9-df362446e3e7];]=0,
Extkey[name=EXTENSION_LICENSE;type=class
java.lang.String;uuid=EXTENSION_LICENSE[8a61ad65-054c-4e31-9c6d-1ca4d60a4c18];]=ASL
2.0, Extkey[name=EXTENSION_NOTES;type=class
java.lang.String;uuid=EXTENSION_NOTES[2da5ad7e-185a-4584-aaff-97f66978e4ea];]=Display
name: ovirt-engine-extension-aaa-ldap-1.0.2-1.el6,
Extkey[name=EXTENSION_HOME_URL;type=class
java.lang.String;uuid=EXTENSION_HOME_URL[4ad7a2f4-f969-42d4-b399-72d192e18304];]=
http://www.ovirt.org, Extkey[name=EXTENSION_LOCALE;type=class
java.lang.String;uuid=EXTENSION_LOCALE[0780b112-0ce0-404a-b85e-8765d778bb29];]=en_US,
Extkey[name=EXTENSION_NAME;type=class
java.lang.String;uuid=EXTENSION_NAME[651381d3-f54f-4547-bf28-b0b01a103184];]=ovirt-engine-extension-aaa-ldap.authz,
Extkey[name=EXTENSION_INTERFACE_VERSION_MIN;type=class
java.lang.Integer;uuid=EXTENSION_INTERFACE_VERSION_MIN[2b84fc91-305b-497b-a1d7-d961b9d2ce0b];]=0,
Extkey[name=EXTENSION_CONFIGURATION;type=class
java.util.Properties;uuid=EXTENSION_CONFIGURATION[2d48ab72-f0a1-4312-b4ae-5068a226b0fc];]=***,
Extkey[name=EXTENSION_AUTHOR;type=class
java.lang.String;uuid=EXTENSION_AUTHOR[ef242f7a-2dad-4bc5-9aad-e07018b7fbcc];]=The
oVirt Project, Extkey[name=AAA_AUTHZ_QUERY_MAX_FILTER_SIZE;type=class
java.lang.Integer;uuid=AAA_AUTHZ_QUERY_MAX_FILTER_SIZE[2eb1f541-0f65-44a1-a6e3-014e247595f5];]=50,
Extkey[name=EXTENSION_INSTANCE_NAME;type=class
java.lang.String;uuid=EXTENSION_INSTANCE_NAME[65c67ff6-aeca-4bd5-a245-8674327f011b];]=BRU_AIR-authz,
Extkey[name=EXTENSION_BUILD_INTERFACE_VERSION;type=class
java.lang.Integer;uuid=EXTENSION_BUILD_INTERFACE_VERSION[cb479e5a-4b23-46f8-aed3-56a4747a8ab7];]=0,
Extkey[name=EXTENSION_CONFIGURATION_SENSITIVE_KEYS;type=interface
java.util.Collection;uuid=EXTENSION_CONFIGURATION_SENSITIVE_KEYS[a456efa1-73ff-4204-9f9b-ebff01e35263];]=[],
Extkey[name=EXTENSION_GLOBAL_CONTEXT;type=class
org.ovirt.engine.api.extensions.ExtMap;uuid=EXTENSION_GLOBAL_CONTEXT[9799e72f-7af6-4cf1-bf08-297bc8903676];]=*skip*,
Extkey[name=EXTENSION_VERSION;type=class
java.lang.String;uuid=EXTENSION_VERSION[fe35f6a8-8239-4bdb-ab1a-af9f779ce68c];]=1.0.2,
Extkey[name=AAA_AUTHZ_AVAILABLE_NAMESPACES;type=interface
java.util.Collection;uuid=AAA_AUTHZ_AVAILABLE_NAMESPACES[6dffa34c-955f-486a-bd35-0a272b45a711];]=[DC=brussels,DC=airport,
DC=airport], Extkey[name=EXTENSION_MANAGER_TRACE_LOG;type=interface
org.slf4j.Logger;uuid=EXTENSION_MANAGER_TRACE_LOG[863db666-3ea7-4751-9695-918a3197ad83];]=org.slf4j.impl.Slf4jLogger(org.ovirt.engine.core.extensions.mgr.ExtensionsManager.trace.ovirt-engine-extension-aaa-ldap.authz.BRU_AIR-authz),
Extkey[name=EXTENSION_PROVIDES;type=interface
java.util.Collection;uuid=EXTENSION_PROVIDES[8cf373a6-65b5-4594-b828-0e275087de91];]=[org.ovirt.engine.api.extensions.aaa.Authz],
Extkey[name=EXTENSION_CONFIGURATION_FILE;type=class
java.lang.String;uuid=EXTENSION_CONFIGURATION_FILE[4fb0ffd3-983c-4f3f-98ff-9660bd67af6a];]=/etc/ovirt-engine/extensions.d/BRU_AIR-authz.properties},
Extkey[name=AAA_AUTHZ_QUERY_FLAGS;type=class
java.lang.Integer;uuid=AAA_AUTHZ_QUERY_FLAGS[97d226e9-8d87-49a0-9a7f-af689320907b];]=3,
Extkey[name=EXTENSION_INVOKE_COMMAND;type=class
org.ovirt.engine.api.extensions.ExtUUID;uuid=EXTENSION_INVOKE_COMMAND[485778ab-bede-4f1a-b823-77b262a2f28d];]=AAA_AUTHZ_FETCH_PRINCIPAL_RECORD[5a5bf9bb-9336-4376-a823-26efe1ba26df],
Extkey[name=AAA_AUTHN_AUTH_RECORD;type=class
org.ovirt.engine.api.extensions.ExtMap;uuid=AAA_AUTHN_AUTH_RECORD[e9462168-b53b-44ac-9af5-f25e1697173e];]={Extkey[name=AAA_AUTHN_AUTH_RECORD_PRINCIPAL;type=class
java.lang.String;uuid=AAA_AUTHN_AUTH_RECORD_PRINCIPAL[c3498f07-11fe-464c-958c-8bd7490b119a];]=
us...@company.be}}
Output:
{Extkey[name=EXTENSION_INVOKE_RESULT;type=class
java.lang.Integer;uuid=EXTENSION_INVOKE_RESULT[0909d91d-8bde-40fb-b6c0-099c772ddd4e];]=2,
Extkey[name=EXTENSION_INVOKE_MESSAGE;type=class
java.lang.String;uuid=EXTENSION_INVOKE_MESSAGE[b7b053de-dc73-4bf7-9d26-b8bdb72f5893];]=No
search for principal 'us...@company.be'}

at
org.ovirt.engine.core.extensions.mgr.ExtensionProxy.invoke(ExtensionProxy.java:91)
[extensions-manager.jar:]
at
org.ovirt.engine.core.extensions.mgr.ExtensionProxy.invoke(ExtensionProxy.java:109)
[extensions-manager.j

Re: [ovirt-users] Corruped disks

2015-11-26 Thread Koen Vanoppen
If we have this behaviour again, I'll let you know with logs. The OS is
always CentOS. But let us just wait until it happens again so I can send
logs with it.
Thanks in advance!

Kind regards,

Koen

On 26 November 2015 at 16:39, Yaniv Dary  wrote:

> We will need logs and a bug to track the issue. Also info on the OS of the
> guest will help.
>
> Yaniv Dary
> Technical Product Manager
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
> Tel : +972 (9) 7692306
> 8272306
> Email: yd...@redhat.com
> IRC : ydary
>
>
> On Wed, Oct 14, 2015 at 3:42 PM, Koen Vanoppen 
> wrote:
>
>> Dear all,
>>
>> lately we have been experiencing some strange behaviour on our vms...
>> Every now and then we have disks that go corrupt. Is there a chance
>> that ovirt is the issue here or...? It happens (luckily) on our DEV/UAT
>> cluster. In the last 4 weeks, we have already had 6 vm's that went totally
>> corrupt...
>>
>> Kind regards,
>>
>> Koen
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Windows 10

2015-11-26 Thread Koen Vanoppen
Thanks!

On 26 November 2015 at 16:27, Yaniv Dary  wrote:

> Windows 10 guest will only work on 3.6 with fedora or el7.2 hosts. Might
> work with el6 at some point, but currently doesn't.
>
> Yaniv Dary
> Technical Product Manager
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
> Tel : +972 (9) 7692306
> 8272306
> Email: yd...@redhat.com
> IRC : ydary
>
>
> On Tue, Oct 6, 2015 at 8:53 AM, Koen Vanoppen 
> wrote:
>
>> Dear all,
>>
>> Yes, another question :-). This time it's about windows 10.
>> I'm running ovirt 3.5.4 and I don't manage to install windows 10 on it.
>> Keeps giving me a blue screen (yes, I know, it's still a windows... ;-) )
>> on reboot.
>>
>> Are there any special settings you need to enable when creating the vm?
>> Which OS do I need to select? Or shall I just wait untile the relase of
>> ovirt 3.6 :-) ?
>>
>> Kind regards,
>>
>> Koen
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA cluster

2015-11-26 Thread Budur Nagaraju
I got only 10 lines in the vdsm logs; they are below,


[root@he /]# tail -f /var/log/vdsm/vdsm.log
Thread-100::DEBUG::2015-11-27
12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource 'Storage.HsmDomainMonitorLock'
Thread-100::DEBUG::2015-11-27
12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
Thread-100::DEBUG::2015-11-27
12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is
waiting for it.
Thread-100::DEBUG::2015-11-27
12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
records.
Thread-100::INFO::2015-11-27
12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
stopMonitoringDomain, Return response: None
Thread-100::DEBUG::2015-11-27
12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
Thread-100::DEBUG::2015-11-27
12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing ->
state finished
Thread-100::DEBUG::2015-11-27
12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-100::DEBUG::2015-11-27
12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-100::DEBUG::2015-11-27
12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
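
To share the complete logs rather than the last ten lines, one option (just a
sketch; the archive name and the /tmp destination are arbitrary) is to bundle
the vdsm and supervdsm logs, including rotated files, and attach or upload the
result:

# collect the full vdsm and supervdsm logs, including rotated copies
tar czf /tmp/vdsm-logs-$(hostname)-$(date +%Y%m%d).tar.gz \
    /var/log/vdsm/vdsm.log* /var/log/vdsm/supervdsm.log*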



On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi 
wrote:

>
>
> On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju 
> wrote:
>
>>
>>
>>
>> *Below are the entire logs*
>>
>>
> Sorry, with the entire log I mean if you can attach or share somewhere the
> whole /var/log/vdsm/vdsm.log  cause the latest ten lines are not enough to
> point out the issue.
>
>
>>
>>
>>
>>
>> *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
>>
>> Detector thread::DEBUG::2015-11-26
>> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:50944
>> Detector thread::DEBUG::2015-11-26
>> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 50944)
>> Detector thread::DEBUG::2015-11-26
>> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>> Adding connection from 127.0.0.1:50945
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>> Connection removed from 127.0.0.1:50945
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:50945
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 50945)
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
>> Adding connection from 127.0.0.1:50946
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
>> Connection removed from 127.0.0.1:50946
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
>> Detected protocol xml from 127.0.0.1:50946
>> Detector thread::DEBUG::2015-11-26
>> 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
>> http detected from ('127.0.0.1', 50946)
>>
>>
>>
>>
>> *[root@he ~]# tail -f /var/log/vdsm/supervdsm.log *
>>
>> MainProcess::DEBUG::2015-11-26
>> 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
>> call readMultipathConf with () {}
>> MainProcess::DEBUG::2015-11-26
>> 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
>> return readMultipathConf with ['# RHEV REVISION 1.1', '', 'defaults {',
>> 'polling_interval5', 'getuid_callout
>> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
>> 'no_path_retry   fail', 'user_friendly_names no', '
>> flush_on_last_del   yes', 'fast_io_fail_tmo5', '
>> dev_loss_tmo30', 'max_fds 4096', '}', '',
>> 'devices {', 'device {', 'vendor  "HITACHI"', '
>> product "DF.*"', 'getuid_callout
>> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
>> '}', 'device {', 'vendor  "COMPELNT"', '
>> product "Compellent Vol"', 'no_path_retry
>> fail', '}', 'device {', '# multipath.conf.default', '
>> ven