Re: [ovirt-users] VM Portal looking for translators

2017-08-16 Thread Nicolás

Hi,

Why do I see a red lock on the translation page claiming "This 
project-version is readonly. It cannot be edited"? Do I have to be 
granted access specifically on the language site to translate?


(Not sure if it's related: I requested access to some language groups as 
described in [3], but it's still pending.)


Thanks.

Nicolás

On 14/08/17 at 19:37, Jakub Niedermertl wrote:

Hi all,

the new VM Portal project [1] - a replacement for the oVirt User Portal - is 
looking for community translators. If you know any of


* Chinese (Simplified)
* French
* German
* Italian
* Japanese
* Korean
* Portuguese
* Russian
* Spanish

and want to join the translation effort, please

* sign up to the Zanata translation environment [2]
* request access to the language group of your choice [3]
* and join us at [4]

Thank you

Regards
Jakub

[1]: https://github.com/oVirt/ovirt-web-ui
[2]: https://translate.zanata.org
[3]: https://translate.zanata.org/language/list
[4]: 
https://translate.zanata.org/iteration/view/ovirt-web-ui/1.2.0/languages



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [ovirt-users] oVirt Node with bcache

2017-08-16 Thread Yaniv Kaul
On Wed, Aug 16, 2017 at 4:37 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Hello
>
> I just wanted to share a scenario with you and perhaps exchange more
> information with other people that may also have a similar scenario.
>
> For a couple of months I have been running an oVirt Node (CentOS 7.3
> Minimal) with bcache (https://bcache.evilpiepirate.org/), using an SSD to
> cache HDD disks. The setup is simple and was made as a proof of concept,
> and since then it has been working better than expected.
> This is a standalone host with 4 disks: 1 for the operating system, 2 x
> 2TB 7200 RPM in software RAID 1, and 1 x PCI-E NVMe 400GB SSD which serves
> as the caching device for both reads and writes. The VM storage folder is
> mounted as an ext4 partition on the logical device created by bcache
> (/dev/bcache0). All this is transparent to oVirt, as all it sees is a
> folder to put the VMs in.
>
> We monitor the IOPS on all block devices individually and see the behavior
> exactly as expected: random writes are all done on the SSD first and then
> streamed sequentially to the mechanical drives with pretty impressive
> performance. Also, in the beginning, while the total amount of data was
> less than 400GB, ALL reads used to come from the caching device and
> therefore didn't use IOPS from the mechanical drives, leaving them free to
> do basically writes. Finally, sequential IOPS (as described by bcache) are
> intelligently passed directly to the mechanical drives (but there are not
> many of them).
>
> Although bcache is present in kernel 3.10, I had to use kernel-ml 4.12
> (from Elrepo), and I also had to compile bcache-tools as I could not find
> it available in any repository.
>

Nice!
It'd be great if you could write an ovirt.org blog about setting it up and
how it worked for you.
Have you considered using dm-cache?
Y.


>
> Regards
> Fernando
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] glusterfs Error message constantly being reported

2017-08-16 Thread Sahina Bose
Can you check if you have vdsm-gluster rpm installed on the hosts?
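For anyone hitting the same errors: the gluster* verbs Vadim's traceback complains about are provided by the vdsm-gluster package, so a check along these lines usually settles it (package, service, and command names as on a typical CentOS 7 host; this is a sketch, not verified against this exact setup):

```shell
# Check whether the VDSM gluster plugin is installed on this host
rpm -q vdsm-gluster

# If it is missing, install it and restart the VDSM daemons so that
# supervdsm re-exposes the gluster* methods
yum install -y vdsm-gluster
systemctl restart supervdsmd vdsmd
```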

On Wed, Aug 16, 2017 at 7:08 PM, Vadim  wrote:

> In vdsm.log
>
> 2017-08-16 16:34:15,314+0300 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:577)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 572, in _handle_request
> res = method(**params)
>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198,
> in _dynamicMethod
> result = fn(*methodArgs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py",
> line 117, in status
> return self._gluster.volumeStatus(volumeName, brick, statusOption)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89,
> in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 411,
> in volumeStatus
> data = self.svdsmProxy.glusterVolumeStatvfs(volumeName)
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in
> __call__
> return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in
> <lambda>
> getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
> AttributeError: 'AutoProxy[instance]' object has no attribute
> 'glusterVolumeStatvfs'
>
>
> 2017-08-16 16:37:39,566+0300 ERROR (jsonrpc/3) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:577)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 572, in _handle_request
> res = method(**params)
>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198,
> in _dynamicMethod
> result = fn(*methodArgs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py",
> line 109, in list
> return self._gluster.tasksList(taskIds)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89,
> in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 507,
> in tasksList
> status = self.svdsmProxy.glusterTasksList(taskIds)
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in
> __call__
> return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in
> <lambda>
> getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
> AttributeError: 'AutoProxy[instance]' object has no attribute
> 'glusterTasksList'
>
>
> Wed, 16 Aug 2017 16:08:24 +0300, Vadim wrote:
> > Hi, All
> >
> > ovirt 4.1.4 fresh install
> > Constantly seeing this message in the logs, how to fix this:
> >
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed:
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> > VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed:
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> >
> > --
> > Thanks,
> > Vadim
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Thanks,
> Vadim
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


[ovirt-users] Ovirt Network NAT

2017-08-16 Thread Simms, Peter
Hi all,

I am trying to set up a NAT network so my VMs can just use NAT instead of 
getting their own IPs. I seem to be struggling with how to do this. I found 
the link below, however I can't seem to get it working. Is this still the 
best way to achieve a NAT network that my VMs can use, or is there a better 
one?

http://www.ovirt.org/develop/developer-guide/vdsm/hook/network-nat/
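If the vdsm hook route proves difficult, one alternative worth testing is a plain libvirt NAT network defined directly on the host (names and addresses below are examples, not from any oVirt documentation; whether the engine keeps such an interface attached across migrations depends on your setup):

```shell
# Define and start a NATed libvirt network on the host (example addresses)
cat > natnet.xml <<'EOF'
<network>
  <name>natnet</name>
  <forward mode='nat'/>
  <bridge name='virbr-nat' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.100'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define natnet.xml
virsh net-autostart natnet
virsh net-start natnet
```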

Any help would be appreciated, thank you





Re: [ovirt-users] Communication Problems between Engine and Hosts

2017-08-16 Thread Piotr Kliczewski
Fernando,

Do you know how long it took when you had connection issues between
data centers? Please collect the logs when it happens again.

Thanks,
Piotr

On Wed, Aug 16, 2017 at 3:20 PM, FERNANDO FREDIANI
 wrote:
> Hello Piotr. Thanks for your reply
>
> I was running version 4.1.1, but since that day I have upgraded the Engine
> to 4.1.5 (the hosts remain on 4.1.1). I am not sure the logs still exist
> (how long are they normally kept?).
>
> Just to clarify, the hosts didn't become unresponsive, but the
> communication between the Engine and the Hosts in question (each in a
> different Datacenter) was interrupted - locally the hosts were fine and
> accessible. What was strange was that since the Hosts could not talk to the
> Engine they seem to have got 'confused' and started several VM live
> migrations, which was not expected. As a note, I don't have any Fencing
> policy enabled.
>
> Regards
> Fernando
>
>
>
> On 16/08/2017 07:00, Piotr Kliczewski wrote:
>>
>> Fernando,
>>
>> Which ovirt version are you running? Please share the logs so I could
>> check what caused the hosts to become unresponsive.
>>
>> Thanks,
>> Piotr
>>
>> On Wed, Aug 2, 2017 at 5:11 PM, FERNANDO FREDIANI
>>  wrote:
>>>
>>> Hello.
>>>
>>> Yesterday I had a pretty strange problem in one of our architectures. My
>>> oVirt Engine, which runs in one Datacenter and controls Nodes locally and
>>> also remotely, lost communication with the remote Nodes in another
>>> Datacenter. Up to this point nothing was wrong, as the Nodes can continue
>>> working as expected and running their Virtual Machines without depending
>>> on the oVirt Engine.
>>>
>>> What happened at some point is that when the communication between Engine
>>> and Hosts came back, the Hosts got confused and initiated a Live
>>> Migration of ALL VMs from one host to the other. I also had to restart
>>> the vdsmd agent on all Hosts in order to restore sanity to my
>>> environment.
>>> What adds even more strangeness to this scenario is that one of the Hosts
>>> affected doesn't belong to the same Cluster as the others and still had
>>> to have vdsmd restarted.
>>>
>>> I understand the Hosts can survive without the Engine online, with
>>> reduced possibilities but able to communicate among themselves, without
>>> affecting the VMs or needing to do what happened in this scenario.
>>>
>>> Am I wrong in any of these assumptions?
>>>
>>> Fernando
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>


[ovirt-users] LDAP authentication filtering with a custom attribute

2017-08-16 Thread Jean-mathieu CHANTREIN
Hello. 

Is there a way to filter a group of LDAP users by a custom attribute rather 
than by groups? For example, I tried (without success) to put this entry in 
/etc/ovirt-engine/extensions.d/my-ldap-authz.properties: 

search.simple-resolve-groups-memberOf.search-request.filter = 
&(myCustomAttribute=nameOfAttributeToFilter) 

And if it's possible, can I filter by more than one attribute (i.e., each 
attribute would discriminate like a group)? 
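One thing worth checking: LDAP filter syntax (RFC 4515) requires the `&` and each clause to be wrapped in parentheses. A hedged sketch of what a multi-attribute filter could look like (the key name is copied from the message above; the attribute names are placeholders, and whether this particular key is the right hook depends on the aaa-ldap profile in use):

```properties
# Illustrative only - note the outer parentheses around '&' and each clause
search.simple-resolve-groups-memberOf.search-request.filter = (&(myCustomAttribute=valueOne)(anotherAttribute=valueTwo))
```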

Thanks for your help. 

Regards. 

Jean-Mathieu 


Re: [ovirt-users] glusterfs Error message constantly being reported

2017-08-16 Thread Vadim
In vdsm.log

2017-08-16 16:34:15,314+0300 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer] Internal 
server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in 
_handle_request
res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in 
_dynamicMethod
result = fn(*methodArgs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 117, 
in status
return self._gluster.volumeStatus(volumeName, brick, statusOption)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89, in 
wrapper
rv = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 411, in 
volumeStatus
data = self.svdsmProxy.glusterVolumeStatvfs(volumeName)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in 
__call__
return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in 
<lambda>
getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'AutoProxy[instance]' object has no attribute 
'glusterVolumeStatvfs'


2017-08-16 16:37:39,566+0300 ERROR (jsonrpc/3) [jsonrpc.JsonRpcServer] Internal 
server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in 
_handle_request
res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in 
_dynamicMethod
result = fn(*methodArgs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 109, 
in list
return self._gluster.tasksList(taskIds)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89, in 
wrapper
rv = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 507, in 
tasksList
status = self.svdsmProxy.glusterTasksList(taskIds)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in 
__call__
return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in 
<lambda>
getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'AutoProxy[instance]' object has no attribute 'glusterTasksList'


Wed, 16 Aug 2017 16:08:24 +0300, Vadim wrote:
> Hi, All
> 
> ovirt 4.1.4 fresh install
> Constantly seeing this message in the logs, how to fix this:
> 
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object 
> has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object 
> has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object 
> has no attribute 'glusterTasksList'
> VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object 
> has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object 
> has no attribute 'glusterTasksList'
> VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object 
> has no attribute 'glusterTasksList'
> 
> --
> Thanks,
> Vadim
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
Thanks,
Vadim


[ovirt-users] oVirt Node with bcache

2017-08-16 Thread FERNANDO FREDIANI

Hello

I just wanted to share a scenario with you and perhaps exchange more 
information with other people that may also have a similar scenario.


For a couple of months I have been running an oVirt Node (CentOS 7.3 
Minimal) with bcache (https://bcache.evilpiepirate.org/), using an SSD to 
cache HDD disks. The setup is simple and was made as a proof of concept, 
and since then it has been working better than expected.
This is a standalone host with 4 disks: 1 for the operating system, 2 x 
2TB 7200 RPM in software RAID 1, and 1 x PCI-E NVMe 400GB SSD which serves 
as the caching device for both reads and writes. The VM storage folder is 
mounted as an ext4 partition on the logical device created by bcache 
(/dev/bcache0). All this is transparent to oVirt, as all it sees is a 
folder to put the VMs in.


We monitor the IOPS on all block devices individually and see the behavior 
exactly as expected: random writes are all done on the SSD first and then 
streamed sequentially to the mechanical drives with pretty impressive 
performance. Also, in the beginning, while the total amount of data was 
less than 400GB, ALL reads used to come from the caching device and 
therefore didn't use IOPS from the mechanical drives, leaving them free to 
do basically writes. Finally, sequential IOPS (as described by bcache) are 
intelligently passed directly to the mechanical drives (but there are not 
many of them).


Although bcache is present in kernel 3.10, I had to use kernel-ml 4.12 
(from Elrepo), and I also had to compile bcache-tools as I could not find 
it available in any repository.
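For anyone wanting to reproduce the layout, the bcache side is roughly this (device names are examples, not from this setup; these commands destroy data, so run them on empty devices only):

```shell
# Backing device: the software RAID1 of the two 2TB disks
make-bcache -B /dev/md0
# Caching device: the NVMe SSD
make-bcache -C /dev/nvme0n1

# Attach the cache set to the backing device
# (take the cache-set UUID from `bcache-super-show /dev/nvme0n1`)
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Cache writes as well as reads, matching the behavior described above
echo writeback > /sys/block/bcache0/bcache/cache_mode

# Filesystem for the VM storage folder (mount point is an example)
mkfs.ext4 /dev/bcache0
mount /dev/bcache0 /var/lib/vmstorage
```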


Regards
Fernando


Re: [ovirt-users] Communication Problems between Engine and Hosts

2017-08-16 Thread FERNANDO FREDIANI

Hello Piotr. Thanks for your reply

I was running version 4.1.1, but since that day I have upgraded the Engine 
to 4.1.5 (the hosts remain on 4.1.1). I am not sure the logs still exist 
(how long are they normally kept?).


Just to clarify, the hosts didn't become unresponsive, but the 
communication between the Engine and the Hosts in question (each in a 
different Datacenter) was interrupted - locally the hosts were fine and 
accessible. What was strange was that since the Hosts could not talk to 
the Engine they seem to have got 'confused' and started several VM live 
migrations, which was not expected. As a note, I don't have any Fencing 
policy enabled.


Regards
Fernando


On 16/08/2017 07:00, Piotr Kliczewski wrote:

Fernando,

Which ovirt version are you running? Please share the logs so I could
check what caused the hosts to become unresponsive.

Thanks,
Piotr

On Wed, Aug 2, 2017 at 5:11 PM, FERNANDO FREDIANI
 wrote:

Hello.

Yesterday I had a pretty strange problem in one of our architectures. My
oVirt Engine, which runs in one Datacenter and controls Nodes locally and
also remotely, lost communication with the remote Nodes in another
Datacenter. Up to this point nothing was wrong, as the Nodes can continue
working as expected and running their Virtual Machines without depending on
the oVirt Engine.

What happened at some point is that when the communication between Engine
and Hosts came back, the Hosts got confused and initiated a Live Migration
of ALL VMs from one host to the other. I also had to restart the vdsmd agent
on all Hosts in order to restore sanity to my environment.
What adds even more strangeness to this scenario is that one of the Hosts
affected doesn't belong to the same Cluster as the others and still had to
have vdsmd restarted.

I understand the Hosts can survive without the Engine online, with reduced
possibilities but able to communicate among themselves, without affecting
the VMs or needing to do what happened in this scenario.

Am I wrong in any of these assumptions?

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





Re: [ovirt-users] oVirt and FreeNAS

2017-08-16 Thread Juan Pablo
We have been using it here in production for the last 3 years, no problems
so far. iSCSI and NFS shares.

2 servers on Supermicro X10SRL with Xeon E5-2603 v3 @ 1.60GHz, 128GB RAM
each, 2 Intel SSDs for ZIL, 2 SSDs for L2ARC, 16 SATA disks, using IBM
M1015 HBAs flashed in 'IT' mode.
I had to tune it a lot (sysctl) to get the best performance both on the
nodes and on the storage side; there is no rule of thumb. Also many changes
by hand to get full multipath, failover, queue depth, reqs, I/O schedulers,
etc. Network cards are all Intel.
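As a concrete starting point for the multipath/failover part, the usual Linux iSCSI initiator steps on the oVirt node side look roughly like this (portal addresses are examples, not from this setup):

```shell
# Discover and log in to both storage portals (one per NIC/subnet)
iscsiadm -m discovery -t sendtargets -p 10.0.1.10
iscsiadm -m discovery -t sendtargets -p 10.0.2.10
iscsiadm -m node -l

# Verify that both paths show up under a single multipath device
multipath -ll
```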

regards,

2017-08-15 5:50 GMT-03:00 Latchezar Filtchev :

> Dear oVirt-ers,
>
>
>
> Just curious – does someone use FreeNAS as storage for oVirt?  My staging
> environment is - two virtualization nodes, hosted engine, FreeNAS as
> storage (iSCSI hosted storage, iSCSI Data(Master) domain and NFS shares as
> ISO and export domains)
>
>
>
> Thank you!
>
>
>
> Best,
>
> Latcho
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


[ovirt-users] glusterfs Error message constantly being reported

2017-08-16 Thread Vadim
Hi, All

ovirt 4.1.4 fresh install
Constantly seeing this message in the logs, how to fix this:


VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has 
no attribute 'glusterTasksList'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has 
no attribute 'glusterTasksList'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has 
no attribute 'glusterTasksList'
VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 
'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed: 
'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has 
no attribute 'glusterTasksList'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has 
no attribute 'glusterTasksList'
VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]' object has 
no attribute 'glusterTasksList'

--
Thanks,
Vadim


Re: [ovirt-users] NTP

2017-08-16 Thread Sahina Bose
On Thu, Aug 10, 2017 at 7:09 PM, Sandro Bonazzola 
wrote:

>
>
> 2017-08-10 15:21 GMT+02:00 Moacir Ferreira :
>
>> Hi Sandro,
>>
>>
>> I found that I can install ntpd by enabling the CentOS base repository
>> that comes disabled by default in oVirt. This said, the GUI
>> gdeploy-generated script for deploying the hosted-engine + GlusterFS
>> still expects to disable chronyd by enabling ntpd. So my question now is
>> whether we need/should keep ntpd or whether we should just keep chronyd.
>>
>>
>>
> Looks like a gdeploy bug. Adding Sahina and Sacchi. chronyd should be used
> instead of ntpd.
>

https://bugzilla.redhat.com/show_bug.cgi?id=1450152 - was fixed to use
chronyd instead of ntpd in gdeploy.

If you're still seeing the issue, can you re-open the bug with version
details of cockpit-ovirt?
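In the meantime, switching a host from ntpd to chronyd by hand is quick (a sketch; assumes a systemd-based host such as CentOS 7 with both packages available):

```shell
# Stop and disable ntpd, then enable chronyd as the time source
systemctl disable --now ntpd
systemctl enable --now chronyd

# Confirm the host is synchronizing
chronyc tracking
chronyc sources -v
```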

thanks!


>
>
>
>
>> Moacir
>>
>>
>> --
>> *From:* Sandro Bonazzola 
>> *Sent:* Thursday, August 10, 2017 2:06 PM
>> *To:* Moacir Ferreira
>> *Cc:* users@ovirt.org
>> *Subject:* Re: [ovirt-users] NTP
>>
>>
>>
>> 2017-08-07 16:53 GMT+02:00 Moacir Ferreira :
>>
>>> I found that NTP does not get installed on oVirt Node in the latest
>>> version, ovirt-node-ng-installer-ovirt-4.1-2017052309.iso.
>>>
>>>
>>> Also the installed repositories do not have it. So, is this a bug, or is
>>> NTP not considered appropriate anymore?
>>>
>>>
>>> vdsm is now requiring chronyd but we have re-added ntpd in ovirt-node
>> for 4.1.5 RC3 (https://bugzilla.redhat.com/1476650)
>> I'm finishing testing the release before announcing it today.
>>
>>
>>
>>
>>
>>> Thanks.
>>>
>>> Moacir
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>
>> Red Hat EMEA 
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>


Re: [ovirt-users] Recovering from a multi-node failure

2017-08-16 Thread Sahina Bose
On Sun, Aug 6, 2017 at 4:42 AM, Jim Kusznir  wrote:

> Well, after a very stressful weekend, I think I have things largely
> working.  It turns out that most of the above issues were caused by the
> Linux permissions of the exports for all three volumes (they had been
> reset to 600; setting them to 774 or 770 fixed many of the issues).  Of
> course, I didn't find that until a much more harrowing outage, and hours
> and hours of work, including beginning to look at rebuilding my cluster.
>
> So, now my cluster is operating again, and everything looks good EXCEPT
> for one major Gluster issue/question that I haven't found any references or
> info on.
>
> my host ovirt2, one of the replica gluster servers, is the one that lost
> its storage and had to reinitialize it from the cluster.  the iso volume is
> perfectly fine and complete, but the engine and data volumes are smaller on
> disk on this node than on the other node (and this node before the crash).
> On the engine store, the entire cluster reports the smaller utilization on
> mounted gluster filesystems; on the data partition, it reports the larger
> size (rest of cluster).  Here's some df statments to help clarify:
>
> (brick1 = engine; brick2=data, brick4=iso):
> Filesystem Size  Used Avail Use% Mounted on
> /dev/mapper/gluster-engine  25G   12G   14G  47% /gluster/brick1
> /dev/mapper/gluster-data   136G  125G   12G  92% /gluster/brick2
> /dev/mapper/gluster-iso 25G  7.3G   18G  29% /gluster/brick4
> 192.168.8.11:/engine15G  9.7G  5.4G  65%
> /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine
> 192.168.8.11:/data 136G  125G   12G  92%
> /rhev/data-center/mnt/glusterSD/192.168.8.11:_data
> 192.168.8.11:/iso   13G  7.3G  5.8G  56%
> /rhev/data-center/mnt/glusterSD/192.168.8.11:_iso
>
> View from ovirt2:
> Filesystem Size  Used Avail Use% Mounted on
> /dev/mapper/gluster-engine  15G  9.7G  5.4G  65% /gluster/brick1
> /dev/mapper/gluster-data   174G  119G   56G  69% /gluster/brick2
> /dev/mapper/gluster-iso 13G  7.3G  5.8G  56% /gluster/brick4
> 192.168.8.11:/engine15G  9.7G  5.4G  65%
> /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine
> 192.168.8.11:/data 136G  125G   12G  92%
> /rhev/data-center/mnt/glusterSD/192.168.8.11:_data
> 192.168.8.11:/iso   13G  7.3G  5.8G  56%
> /rhev/data-center/mnt/glusterSD/192.168.8.11:_iso
>
> As you can see, in the process of rebuilding the hard drive for ovirt2, I
> did resize some things to give more space to data, where I desperately need
> it.  If this goes well and the storage is given a clean bill of health at
> this time, then I will take ovirt1 down and resize to match ovirt2, and
> thus score a decent increase in storage for data.  I fully realize that
> right now the gluster mounted volumes should have the total size as the
> least common denominator.
>
> So, is this size reduction appropriate?  A big part of me thinks data is
> missing, but I even went through and shut down ovirt2's gluster daemons,
> wiped all the gluster data, and restarted gluster to allow it a fresh heal
> attempt, and it again came back to the exact same size.  This cluster was
> originally built about the time ovirt 4.0 came out, and has been upgraded
> to 'current', so perhaps some new gluster features are making more
> efficient use of space (dedupe or something)?
>

The used capacity should be consistent on all nodes - I see you have a
discrepancy with the data volume brick. What does "gluster vol heal data
info" tell you? Are there entries to be healed?

Can you provide the glustershd logs?
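For completeness, the commands and logs referenced above (volume name `data` as in this thread; log path as on a typical RPM-based install):

```shell
# List entries still pending self-heal on the data volume
gluster volume heal data info

# Self-heal daemon log, one per host
less /var/log/glusterfs/glustershd.log
```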



>
> Thank  you for your assistance!
> --JIm
>
> On Fri, Aug 4, 2017 at 7:49 PM, Jim Kusznir  wrote:
>
>> Hi all:
>>
>> Today has been rough.  Two of my three nodes went down today, and self
>> heal has not been healing well.  Four hours later, VMs are running, but
>> the engine is not happy.  It claims the storage domain is down (even
>> though it is up on all hosts and VMs are running).  I'm getting a ton of
>> these messages logged:
>>
>> VDSM engine3 command HSMGetAllTasksStatusesVDS failed: Not SPM
>>
>> Aug 4, 2017 7:23:00 PM
>>
>> VDSM engine3 command SpmStatusVDS failed: Error validating master storage
>> domain: ('MD read error',)
>>
>> Aug 4, 2017 7:22:49 PM
>>
>> VDSM engine3 command ConnectStoragePoolVDS failed: Cannot find master
>> domain: u'spUUID=5868392a-0148-02cf-014d-0121,
>> msdUUID=cdaf180c-fde6-4cb3-b6e5-b6bd869c8770'
>>
>> Aug 4, 2017 7:22:47 PM
>>
>> VDSM engine1 command ConnectStoragePoolVDS failed: Cannot find master
>> domain: u'spUUID=5868392a-0148-02cf-014d-0121,
>> msdUUID=cdaf180c-fde6-4cb3-b6e5-b6bd869c8770'
>>
>> Aug 4, 2017 7:22:46 PM
>>
>> VDSM engine2 command SpmStatusVDS failed: Error validating master storage
>> domain: ('MD read error',)
>>
>> Aug 4, 2017 7:22:44 PM
>>
>> VDSM engine2 command 

Re: [ovirt-users] oVirt and FreeNAS

2017-08-16 Thread Uwe Laverenz

Hi,

On 15.08.2017 at 13:35, Latchezar Filtchev wrote:


1. Is it in production?


Not really, just for testing purposes to provide some kind of shared 
storage for OVirt. I like FreeNAS, it's a very nice system but for 
production we use a setup with distributed/mirrored storage that 
tolerates the loss of a storage device or even a complete server room 
(Datacore on FC infrastructure). I haven't tested OVirt with Datacore 
yet, maybe I'll have time and hardware for this next year.


2. Can you share details about your FreeNAS installation - hardware 
used, RAM installed, type of disks - SATA, SAS, SSD, network cards 
used? Do you have SSDs for ZIL/L2ARC?

3. The size of your data domain? Number of virtual machines?


Nothing spectacular: HP MicroServers or white boxes with 16-32 GB ECC 
RAM and 4-6 SATA disks (500GB - 2TB), 2x1 Gbit/s (Intel) for iSCSI. The 
network is the limiting factor, no extra SSDs used/needed.


cu,
Uwe


Re: [ovirt-users] Communication Problems between Engine and Hosts

2017-08-16 Thread Piotr Kliczewski
Fernando,

Which ovirt version are you running? Please share the logs so I could
check what caused the hosts to become unresponsive.

Thanks,
Piotr

On Wed, Aug 2, 2017 at 5:11 PM, FERNANDO FREDIANI
 wrote:
> Hello.
>
> Yesterday I had a pretty strange problem in one of our architectures. My
> oVirt Engine, which runs in one Datacenter and controls Nodes locally and
> also remotely, lost communication with the remote Nodes in another
> Datacenter. Up to this point nothing was wrong, as the Nodes can continue
> working as expected and running their Virtual Machines without depending
> on the oVirt Engine.
>
> What happened at some point is that when the communication between Engine
> and Hosts came back, the Hosts got confused and initiated a Live Migration
> of ALL VMs from one host to the other. I also had to restart the vdsmd
> agent on all Hosts in order to restore sanity to my environment.
> What adds even more strangeness to this scenario is that one of the Hosts
> affected doesn't belong to the same Cluster as the others and still had to
> have vdsmd restarted.
>
> I understand the Hosts can survive without the Engine online, with reduced
> possibilities but able to communicate among themselves, without affecting
> the VMs or needing to do what happened in this scenario.
>
> Am I wrong in any of these assumptions?
>
> Fernando
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] VM with attached host USB device failed to run

2017-08-16 Thread Yaniv Kaul
On Wed, Aug 16, 2017 at 12:05 PM, Дровалев Роман 
wrote:

> Hello,
>
> I connected a USB device to the VM, and it fails with this error: VM
> Win2012 is down with error. Exit message: local variable 'device'
> referenced before assignment.
>

Would you like to share with us your version of oVirt?


>
> I found this bug - https://bugzilla.redhat.com/show_bug.cgi?id=1261075


This is a duplicate of a bug that was fixed >1 year ago.
Y.
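A quick way to check whether a host already carries such a fix is to compare the installed vdsm version against the fixed-in version noted in the Bugzilla entry (a sketch; run on the host, not the engine):

```shell
# Show the installed vdsm version on this host
rpm -q vdsm
# Compare it with the "Fixed In Version" field of the Bugzilla entry
```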


>
>
> Does anyone have a working USB forwarding in a virtual machine? If
> "YES", how did you solve this problem?
>
> If this problem cannot be solved, unfortunately, we will have to
> completely abandon oVirt. ((
>
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


[ovirt-users] VM with attached host USB device failed to run

2017-08-16 Thread Дровалев Роман
Hello,

I connected a USB device to the VM, and it fails with this error: VM Win2012
is down with error. Exit message: local variable 'device' referenced before
assignment.

I found this bug - https://bugzilla.redhat.com/show_bug.cgi?id=1261075

Does anyone have a working USB forwarding in a virtual machine? If
"YES", how did you solve this problem?

If this problem cannot be solved, unfortunately, we will have to
completely abandon oVirt. ((









Re: [ovirt-users] import qcow2 and sparse

2017-08-16 Thread Marcin Kruk
I have iSCSI storage.
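On block (iSCSI) storage the engine allocates the full logical volume for a preallocated disk, so the 50GB on-disk size is the expected outcome there. For checking or reclaiming sparseness on a file copy of the image, something like this works (file names are examples):

```shell
# Show virtual size vs actual allocation of the image
qemu-img info disk.qcow2

# Re-sparsify by converting into a new qcow2 image
qemu-img convert -O qcow2 disk.qcow2 disk-sparse.qcow2
```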

2017-08-15 18:04 GMT+02:00 Michal Skrivanek :

> > On 14 Aug 2017, at 13:19, Marcin Kruk  wrote:
> >
> > After importing a machine from KVM, which was based on a qemu disk with
> 2GB physical and 50GB virtual size,
> > I got a machine disk which occupies 50GB, and even the sparse option
> does not work. It still occupies 50GB.
>
> Depends on what kind of storage you have on the ovirt side. Is it file
> based?
>
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] FORCE DELETE DATACENTER

2017-08-16 Thread Sven Kieske
On Tue, 2017-08-15 at 14:17 -0400, Erick Vogeler wrote:
> Hello
> 
> I had a host that I no longer have access to; I can't put it into
> maintenance, can't force remove it, and can't remove its old VMs. How do I
> force delete this datacenter?

Hi,

right click the dead host and select "Confirm host has been rebooted". This
clears stale DB entries and old VMs from it. Then you can put this host into
maintenance and force delete the DC/cluster/host.

HTH


-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator

Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp

T: +495772 293100
F: +495772 29

https://www.mittwald.de

Geschäftsführer: Robert Meyer, Maik Behring

St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217
HRA 6640, AG Bad Oeynhausen

Komplementärin: Robert Meyer Verwaltungs GmbH
HRB 13260, AG Bad Oeynhausen
