Re: [ovirt-users] datacenter and cluster UUID retrieving .....

2015-09-09 Thread Jean-Pierre Ribeauville
Hi,

Thanks.
I'll have a look at that.

Regards,

J.P.

-Original Message-
From: Adam Litke [mailto:ali...@redhat.com] 
Sent: Wednesday, September 9, 2015 17:16
To: Jean-Pierre Ribeauville
Cc: users@ovirt.org
Subject: Re: [ovirt-users] datacenter and cluster UUID retrieving .

On 08/09/15 10:45 +, Jean-Pierre Ribeauville wrote:
>Hi,
>
>I'm not sure this is the right place to post the following question:
>
>Using a RHEV cluster, how may I programmatically, from within a node of the
>cluster, retrieve the relevant Datacenter and Cluster UUIDs?

I think the only way would be to write a script using the oVirt SDK.
See http://www.ovirt.org/Python-sdk for some examples.
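
For instance, a minimal sketch against the 3.5-era Python SDK (the engine
URL and credentials below are placeholders):

from ovirtsdk.api import API

# connect to the engine REST API; insecure=True skips certificate
# verification and should only be used for testing
api = API(url='https://engine.example.com/api',
          username='admin@internal',
          password='password',
          insecure=True)

# every datacenter and cluster object carries its UUID in its id field
for dc in api.datacenters.list():
    print('Datacenter %s: %s' % (dc.get_name(), dc.get_id()))

for cluster in api.clusters.list():
    print('Cluster %s: %s (datacenter %s)' % (
        cluster.get_name(), cluster.get_id(),
        cluster.get_data_center().get_id()))

api.disconnect()

To find the datacenter/cluster of the node you are running on, you could
match the host by name in api.hosts.list() and follow its get_cluster()
reference.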

-- 
Adam Litke
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] moving storage and importing vms issue

2015-09-09 Thread Jiří Sléžka

Hello,

I am working on some consolidation of our RHEV/oVirt servers and moved 
one storage domain to a new oVirt datacenter (put it into maintenance, 
detached it from the old datacenter and imported it into the new one), 
which worked pretty well.


Then I tried to import all the VMs, which also worked great except for 
three of them.


These VMs are stuck in the VM Import sub-tab and are quietly failing 
import attempts (I can only see the failed task "Importing VM clavius-winxp 
from configuration to Cluster CIT-oVirt" but no related event and/or 
explanation).


There is only one host in this datacenter/cluster, which is the SPM. I can't 
find anything interesting in vdsm.log (a short span around the import time is 
attached).


Could you point me to where I should look, please?

The storage (FC) was formerly attached to RHEV 3.5.3 on RHEL 6.7 and was 
imported into oVirt 3.5.4 on CentOS 7.1.


Thanks in advance,

Jiri Slezka

[root@ovirt04 ~]# tailf /var/log/vdsm/vdsm.log
Thread-209541::INFO::2015-09-10 00:10:34,590::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'name': 'No Description', 'isoprefix': '', 'pool_status': 'connected', 'lver': 4L, 'domains': u'088e7ed9-84c7-4fbd-a570-f37fa986a772:Active', 'master_uuid': '088e7ed9-84c7-4fbd-a570-f37fa986a772', 'version': '3', 'spm_id': 1, 'type': 'FCP', 'master_ver': 1}, 'dominfo': {u'088e7ed9-84c7-4fbd-a570-f37fa986a772': {'status': u'Active', 'diskfree': '4668629450752', 'isoprefix': '', 'alerts': [], 'disktotal': '11711973163008', 'version': 3}}}
Thread-209542::INFO::2015-09-10 00:10:34,659::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-209542::INFO::2015-09-10 00:10:34,659::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'088e7ed9-84c7-4fbd-a570-f37fa986a772': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000370046', 'lastCheck': '5.1', 'valid': True}}
VM Channels Listener::INFO::2015-09-10 00:10:35,728::guestagent::180::vm.Vm::(_handleAPIVersion) vmId=`c1279a24-06de-470a-8b9f-3f3cfc24f58b`::Guest API version changed from 2 to 1
VM Channels Listener::INFO::2015-09-10 00:10:37,976::guestagent::180::vm.Vm::(_handleAPIVersion) vmId=`884dd325-4429-4150-8aa2-a473691f100b`::Guest API version changed from 2 to 1
Thread-206354::INFO::2015-09-10 00:10:40,039::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID=u'088e7ed9-84c7-4fbd-a570-f37fa986a772', spUUID=u'0002-0002-0002-0002-02b9', imgUUID=u'9d18dc91-f312-4f6c-9142-57e0b9f1aa7e', volUUID=u'1eed50b3-356b-4895-921e-61bad55f6a03', options=None)
Thread-206354::INFO::2015-09-10 00:10:40,039::logUtils::47::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '21474836480', 'apparentsize': '21474836480'}
VM Channels Listener::INFO::2015-09-10 00:10:40,754::guestagent::180::vm.Vm::(_handleAPIVersion) vmId=`c1279a24-06de-470a-8b9f-3f3cfc24f58b`::Guest API version changed from 2 to 1
Thread-206478::INFO::2015-09-10 00:10:42,178::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID=u'088e7ed9-84c7-4fbd-a570-f37fa986a772', spUUID=u'0002-0002-0002-0002-02b9', imgUUID=u'd566b616-1159-4fa8-8f3f-48b32556e1c0', volUUID=u'3a3c885e-1846-49f0-ad5d-00cede0242fc', options=None)
Thread-206478::INFO::2015-09-10 00:10:42,185::logUtils::47::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '21474836480', 'apparentsize': '21474836480'}
VM Channels Listener::INFO::2015-09-10 00:10:42,990::guestagent::180::vm.Vm::(_handleAPIVersion) vmId=`884dd325-4429-4150-8aa2-a473691f100b`::Guest API version changed from 2 to 1
Thread-209547::INFO::2015-09-10 00:10:44,625::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID=u'0002-0002-0002-0002-02b9', options=None)
Thread-209547::INFO::2015-09-10 00:10:44,635::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 4L}}
Thread-209548::INFO::2015-09-10 00:10:44,648::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID=u'0002-0002-0002-0002-02b9', options=None)
Thread-209548::INFO::2015-09-10 00:10:44,656::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'name': 'No Description', 'isoprefix': '', 'pool_status': 'connected', 'lver': 4L, 'domains': u'088e7ed9-84c7-4fbd-a570-f37fa986a772:Active', 'master_uuid': '088e7ed9-84c7-4fbd-a570-f37fa986a772', 'version': '3', 'spm_id': 1, 'type': 'FCP', 'master_ver': 1}, 'dominfo': {u'088e7ed9-84c7-4fbd-a570-f37fa986a772': {'status': u'Active', 'diskfree': '4668629450752', 'isoprefix': '', 'alerts': [], 'disktotal': '11711973163008', 'version': 3}}}
Thread-209549::INFO::2015-09-10 00:10:44,869::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeInfo(sdUUID=u'088e7ed9-84c7-4fbd-a570-f37fa986a772', spUUID=u'0002-0002-000

Re: [ovirt-users] strange iscsi issue

2015-09-09 Thread Yaniv Kaul
On 10/09/15 01:16, Raymond wrote:
> I have my homelab connected via 10Gb Direct Attached Cables (DAC),
> using x520 cards and Cisco 2m cables.
>
> Did some tuning on servers and storage (HPC background :) )
> Here is a short copy paste from my personal install doc.
>
> You'll have to trust me on the whole HW config and speeds, but I can achieve 
> between 700 and 950MB/s for 4GB files.
> Again, this is for my homelab: power over performance, 115W average power 
> usage for the whole stack.
>
> ++
> *All nodes*
> install CentOS
>
> Put eth in correct order
>
> MTU=9000
>
> reboot
>
> /etc/sysctl.conf
>   net.core.rmem_max=16777216
>   net.core.wmem_max=16777216
>   # increase Linux autotuning TCP buffer limit
>   net.ipv4.tcp_rmem=4096 87380 16777216
>   net.ipv4.tcp_wmem=4096 65536 16777216
>   # increase the length of the processor input queue
>   net.core.netdev_max_backlog=3
>
> *removed detailed personal info*
>
> *below is storage only*
> /etc/fstab
>   ext4 defaults,barrier=0,noatime,nodiratime
> /etc/sysconfig/nfs
>   RPCNFSDCOUNT=16

All looks quite good.
Do you have multipathing for iSCSI? I highly recommend it; then reduce the
number of requests (via multipath.conf) as low as possible (against a
high-end all-flash array, 1 is good too! Against homelabs I reckon the
default is OK).

Regardless, I also recommend increasing the number of TCP sessions -
assuming your storage is not a bottleneck, you should be able to get to
~1100MB/sec.
node.session.nr_sessions in iscsid.conf should be set to 2, for example.
Y.

> ++
>
> - Original Message -
> From: "Michal Skrivanek" 
> To: "Karli Sjöberg" , "Demeter Tibor" 
> 
> Cc: "users" 
> Sent: Tuesday, September 8, 2015 10:18:54 AM
> Subject: Re: [ovirt-users] strange iscsi issue
>
> On 8 Sep 2015, at 07:45, Karli Sjöberg wrote:
>
>> Tue 2015-09-08 at 06:59 +0200, Demeter Tibor wrote:
>>> Hi,
>>> Thank you for your reply.
>>> I'm sorry, but I don't think so. This storage is fast, because it is an 
>>> SSD-based storage, and I can read/write to it with good performance.
>>> I know that in a virtual environment I/O is always slower than on physical 
>>> hardware, but here I have a very large difference. 
>>> Also, I use ext4 FS.
>> My suggestion would be to use a filesystem benchmarking tool like bonnie++
>> to first test the performance locally on the storage server and then
>> redo the same test inside of a virtual machine. Also make sure the VM is
>> using a VirtIO disk (either block or SCSI) for best performance. I have
> also note the new 3.6 support for virtio-blk dataplane[1]. Not sure how it 
> will look using artificial stress tools, but in general it improves storage 
> performance a lot.
>
> Thanks,
> michal
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1214311
>
>> tested speeds over 1Gb/s with bonded 1Gb NICS so I know it should work
>> in theory as well as practice.
>>
>> Oh, and for the record: IO doesn't have to be bound by the speed of
>> storage, if the host caches in RAM before sending it over the wire. But
>> that in my opinion is dangerous and as far as I know, it's not activated
>> in oVirt, please correct me if I'm wrong.
>>
>> /K
>>
>>> Thanks
>>>
>>> Tibor
>>>
>>>
>>> - On Sep 8, 2015, at 0:40, Alex McWhirter alexmcwhir...@triadic.us wrote:
>>>
 Unless you're using a caching filesystem like ZFS, you're going to be
 limited by how fast your storage back end can actually write to disk.
 Unless you have quite a large storage back end, 10GbE is probably faster
 than your disks can read and write.

 On Sep 7, 2015 4:26 PM, Demeter Tibor  wrote:
> Hi All,
>
> I have to create a test environment, because we need to test our new 
> 10GbE infrastructure.
> One server that has a 10GbE NIC - this is the vdsm host and ovirt portal.
> One server that has a 10GbE NIC - this is the storage.
>
> They are connected to each other through a D-Link 10GbE switch.
>
> Everything is good and nice, the server can connect to the storage, I can 
> create and run VMs, but the storage performance from inside a VM seems to 
> be 1Gb/sec only.
> I tried the iperf command to test the connection between the servers, and 
> it was 9.40 Gb/sec. I tried hdparm -tT /dev/mapper/iscsidevice and it was 
> 400-450 MB/sec. I got the same result on the storage server.
>
> So:
>
> - hdparm test on local storage ~ 400 MB/sec
> - hdparm test on ovirt node server through attached iscsi device ~ 400 
> MB/sec
> - hdparm test from inside vm on local virtual disk - 93-102 MB/sec
>
> The question is : Why?
>
> ps. I have only one ovirtmgmt device, so there are no other networks. The 
> router is only 1Gb/sec, but I've tested and the traffic does not go through 
> it.
>
> Thanks in advance,

Re: [ovirt-users] strange iscsi issue

2015-09-09 Thread Raymond
I have my homelab connected via 10Gb Direct Attached Cables (DAC),
using x520 cards and Cisco 2m cables.

Did some tuning on servers and storage (HPC background :) )
Here is a short copy paste from my personal install doc.

You'll have to trust me on the whole HW config and speeds, but I can achieve 
between 700 and 950MB/s for 4GB files.
Again, this is for my homelab: power over performance, 115W average power 
usage for the whole stack.

++
*All nodes*
install CentOS

Put eth in correct order

MTU=9000

reboot

/etc/sysctl.conf
  net.core.rmem_max=16777216
  net.core.wmem_max=16777216
  # increase Linux autotuning TCP buffer limit
  net.ipv4.tcp_rmem=4096 87380 16777216
  net.ipv4.tcp_wmem=4096 65536 16777216
  # increase the length of the processor input queue
  net.core.netdev_max_backlog=3

*removed detailed personal info*

*below is storage only*
/etc/fstab
  ext4 defaults,barrier=0,noatime,nodiratime
/etc/sysconfig/nfs
  RPCNFSDCOUNT=16
++

- Original Message -
From: "Michal Skrivanek" 
To: "Karli Sjöberg" , "Demeter Tibor" 

Cc: "users" 
Sent: Tuesday, September 8, 2015 10:18:54 AM
Subject: Re: [ovirt-users] strange iscsi issue

On 8 Sep 2015, at 07:45, Karli Sjöberg wrote:

> Tue 2015-09-08 at 06:59 +0200, Demeter Tibor wrote:
>> Hi,
>> Thank you for your reply.
>> I'm sorry, but I don't think so. This storage is fast, because it is an 
>> SSD-based storage, and I can read/write to it with good performance.
>> I know that in a virtual environment I/O is always slower than on physical 
>> hardware, but here I have a very large difference. 
>> Also, I use ext4 FS.
> 
> My suggestion would be to use a filesystem benchmarking tool like bonnie++
> to first test the performance locally on the storage server and then
> redo the same test inside of a virtual machine. Also make sure the VM is
> using a VirtIO disk (either block or SCSI) for best performance. I have

also note the new 3.6 support for virtio-blk dataplane[1]. Not sure how it 
will look using artificial stress tools, but in general it improves storage 
performance a lot.

Thanks,
michal

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1214311

> tested speeds over 1Gb/s with bonded 1Gb NICS so I know it should work
> in theory as well as practice.
> 
> Oh, and for the record: IO doesn't have to be bound by the speed of
> storage, if the host caches in RAM before sending it over the wire. But
> that in my opinion is dangerous and as far as I know, it's not activated
> in oVirt, please correct me if I'm wrong.
> 
> /K
> 
>> 
>> Thanks
>> 
>> Tibor
>> 
>> 
>> - On Sep 8, 2015, at 0:40, Alex McWhirter alexmcwhir...@triadic.us wrote:
>> 
>>> Unless you're using a caching filesystem like ZFS, you're going to be
>>> limited by how fast your storage back end can actually write to disk. Unless
>>> you have quite a large storage back end, 10GbE is probably faster than your
>>> disks can read and write.
>>> 
>>> On Sep 7, 2015 4:26 PM, Demeter Tibor  wrote:
 
 Hi All,
 
 I have to create a test environment, because we need to test our new 
 10GbE infrastructure.
 One server that has a 10GbE NIC - this is the vdsm host and ovirt portal.
 One server that has a 10GbE NIC - this is the storage.
 
 They are connected to each other through a D-Link 10GbE switch.
 
 Everything is good and nice, the server can connect to the storage, I can 
 create and run VMs, but the storage performance from inside a VM seems to 
 be 1Gb/sec only.
 I tried the iperf command to test the connection between the servers, and 
 it was 9.40 Gb/sec. I tried hdparm -tT /dev/mapper/iscsidevice and it was 
 400-450 MB/sec. I got the same result on the storage server.
 
 So:
 
 - hdparm test on local storage ~ 400 MB/sec
 - hdparm test on ovirt node server through attached iscsi device ~ 400 
 MB/sec
 - hdparm test from inside vm on local virtual disk - 93-102 MB/sec
 
 The question is : Why?
 
 ps. I have only one ovirtmgmt device, so there are no other networks. The 
 router is only 1Gb/sec, but I've tested and the traffic does not go through 
 it.
 
 Thanks in advance,
 
 Regards,
 Tibor
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Question about upgrading ovirt 3.5.3 to 3.5.4

2015-09-09 Thread Jason Keltz


On 09/09/2015 03:22 PM, Alon Bar-Lev wrote:


- Original Message -

From: "Jason Keltz" 
To: "users" 
Sent: Wednesday, September 9, 2015 10:08:31 PM
Subject: [ovirt-users] Question about upgrading ovirt 3.5.3 to 3.5.4

Hi.

I have a system consisting of an engine + several hosts running 3.5.3,
and I want to upgrade everything to 3.5.4.   According to the release
notes, all I should do is:


# yum update "ovirt-engine-setup*"
# engine-setup

I did this with engine, and it seemed to upgrade okay.

I'm puzzled whether this applies to the hosts as well?  The release
notes aren't clear to me in that respect.

Thanks for any assistance!

On the host you can run "yum update", or "yum update vdsm" if you'd like to 
update only a specific package.

Thanks!  The maintainer of the release notes should probably clarify 
this point in the notes.  Now I know! :)


Jas.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Question about upgrading ovirt 3.5.3 to 3.5.4

2015-09-09 Thread Markus Stockhausen
"yum update" on the hosts only.

From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of 
Jason Keltz [j...@cse.yorku.ca]
Sent: Wednesday, September 9, 2015 21:08
To: users
Subject: [ovirt-users] Question about upgrading ovirt 3.5.3 to 3.5.4

Hi.

I have a system consisting of an engine + several hosts running 3.5.3,
and I want to upgrade everything to 3.5.4.   According to the release
notes, all I should do is:

> # yum update "ovirt-engine-setup*"
> # engine-setup

I did this with engine, and it seemed to upgrade okay.

I'm puzzled whether this applies to the hosts as well?  The release
notes aren't clear to me in that respect.

Thanks for any assistance!

Jason.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Question about upgrading ovirt 3.5.3 to 3.5.4

2015-09-09 Thread Alon Bar-Lev


- Original Message -
> From: "Jason Keltz" 
> To: "users" 
> Sent: Wednesday, September 9, 2015 10:08:31 PM
> Subject: [ovirt-users] Question about upgrading ovirt 3.5.3 to 3.5.4
> 
> Hi.
> 
> I have a system consisting of an engine + several hosts running 3.5.3,
> and I want to upgrade everything to 3.5.4.   According to the release
> notes, all I should do is:
> 
> > # yum update "ovirt-engine-setup*"
> > # engine-setup
> 
> I did this with engine, and it seemed to upgrade okay.
> 
> I'm puzzled whether this applies to the hosts as well?  The release
> notes aren't clear to me in that respect.
> 
> Thanks for any assistance!

On the host you can run "yum update", or "yum update vdsm" if you'd like to 
update only a specific package.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Question about upgrading ovirt 3.5.3 to 3.5.4

2015-09-09 Thread Jason Keltz

Hi.

I have a system consisting of an engine + several hosts running 3.5.3, 
and I want to upgrade everything to 3.5.4.   According to the release 
notes, all I should do is:



# yum update "ovirt-engine-setup*"
# engine-setup


I did this with engine, and it seemed to upgrade okay.

I'm puzzled whether this applies to the hosts as well?  The release 
notes aren't clear to me in that respect.


Thanks for any assistance!

Jason.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] urgent issue

2015-09-09 Thread Chris Liebman
OK - I think I'm going to switch to local storage - I've had way too many
unexplainable issues with glusterfs :-(. Is there any reason I can't add
local storage to the existing shared-storage cluster? I see that the menu
item is greyed out.





On Tue, Sep 8, 2015 at 4:19 PM, Chris Liebman  wrote:

> It's possible that this is specific to just one gluster volume...  I've
> moved a few VM disks off of that volume and am able to start them fine.  My
> recollection is that any VM started on the "bad" volume causes it to be
> disconnected and forces the ovirt node to be marked down until
> Maint->Activate.
>
> On Tue, Sep 8, 2015 at 3:52 PM, Chris Liebman  wrote:
>
>> In attempting to put an oVirt cluster into production I'm running into some
>> odd errors with gluster, it looks like.  It's 12 hosts, each with one brick,
>> in distributed-replicate (actually 2 bricks, but they are separate volumes).
>>
>> [root@ovirt-node268 glusterfs]# rpm -qa | grep vdsm
>>
>> vdsm-jsonrpc-4.16.20-0.el6.noarch
>>
>> vdsm-gluster-4.16.20-0.el6.noarch
>>
>> vdsm-xmlrpc-4.16.20-0.el6.noarch
>>
>> vdsm-yajsonrpc-4.16.20-0.el6.noarch
>>
>> vdsm-4.16.20-0.el6.x86_64
>>
>> vdsm-python-zombiereaper-4.16.20-0.el6.noarch
>>
>> vdsm-python-4.16.20-0.el6.noarch
>>
>> vdsm-cli-4.16.20-0.el6.noarch
>>
>>
>> Everything was fine last week; however, today various clients in the
>> gluster cluster seem to get "client quorum not met" periodically - when they
>> get this they take one of the bricks offline - this causes the migration of
>> VMs to be attempted - sometimes 20 at a time.  That takes a long time :-(.
>> I've tried disabling automatic migration, and the VMs get paused when this
>> happens - resuming achieves nothing at that point, as the volume mount on
>> the server hosting the VM is not connected:
>>
>> from
>> rhev-data-center-mnt-glusterSD-ovirt-node268.la.taboolasyndication.com:
>> _LADC-TBX-V02.log:
>>
>> [2015-09-08 21:18:42.920771] W [MSGID: 108001]
>> [afr-common.c:4043:afr_notify] 2-LADC-TBX-V02-replicate-2: Client-quorum is 
>> not
>> met
>>
>> [2015-09-08 21:18:42.931751] I [fuse-bridge.c:4900:fuse_thread_proc]
>> 0-fuse: unmounting
>> /rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:
>> _LADC-TBX-V02
>>
>> [2015-09-08 21:18:42.931836] W [glusterfsd.c:1219:cleanup_and_exit]
>> (-->/lib64/libpthread.so.0(+0x7a51) [0x7f1bebc84a51]
>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x
>>
>> 65) [0x4059b5] ) 0-: received signum (15), shutting down
>>
>> [2015-09-08 21:18:42.931858] I [fuse-bridge.c:5595:fini] 0-fuse:
>> Unmounting
>> '/rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:
>> _LADC-TBX-V02'.
>>
>>
>> And the mount is broken at that point:
>>
>> [root@ovirt-node267 ~]# df
>>
>> *df:
>> `/rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02':
>> Transport endpoint is not connected*
>>
>> Filesystem1K-blocks  Used  Available Use% Mounted on
>>
>> /dev/sda3  51475068   1968452   46885176   5% /
>>
>> tmpfs 132210244 0  132210244   0% /dev/shm
>>
>> /dev/sda2487652 32409 429643   8% /boot
>>
>> /dev/sda1204580   260 204320   1% /boot/efi
>>
>> /dev/sda51849960960 156714056 1599267616   9% /data1
>>
>> /dev/sdb11902274676  18714468 1786923588   2% /data2
>>
>> ovirt-node268.la.taboolasyndication.com:/LADC-TBX-V01
>>
>>  9249804800 727008640 8052899712   9%
>> /rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:
>> _LADC-TBX-V01
>>
>> ovirt-node251.la.taboolasyndication.com:/LADC-TBX-V03
>>
>>  1849960960 73728 1755907968   1%
>> /rhev/data-center/mnt/glusterSD/ovirt-node251.la.taboolasyndication.com:
>> _LADC-TBX-V03
>>
>> The fix for that is to put the server in maintenance mode and then activate
>> it again. But all VMs need to be migrated or stopped for that to work.
>>
>> I'm not seeing any obvious network or disk errors.
>>
>> Are there configuration options I'm missing?
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] datacenter and cluster UUID retrieving .....

2015-09-09 Thread Adam Litke

On 08/09/15 10:45 +, Jean-Pierre Ribeauville wrote:

Hi,

I'm not sure this is the right place to post the following question:

Using a RHEV cluster, how may I programmatically, from within a node of the 
cluster, retrieve the relevant Datacenter and Cluster UUIDs?


I think the only way would be to write a script using the oVirt SDK.
See http://www.ovirt.org/Python-sdk for some examples.

--
Adam Litke
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to use ovirt-guest-agent without VDSM?

2015-09-09 Thread Adam Litke

On 09/09/15 18:58 +0800, Nick Xiao wrote:

In my environment I want to use ovirt-ga to replace qemu-ga.
 The hypervisor is Ubuntu 14.04 (OpenStack Icehouse) and the virtual machine
is an Ubuntu 14.04 cloud image.

On the hypervisor I launch an instance and attach a chardev, like:
# ps -ef | grep guest_agent
libvirt+   2649  1  1 Sep08 ?00:15:36
/usr/bin/qemu-system-x86_64 ...
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/org.qemu.guest_agent.0.instance-0210.sock,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
...

The virtual machine has ovirt-guest-agent 1.0.11-1.1 installed and the
service is running OK.

But I don't know how to get memory usage and other information on the
hypervisor (OpenStack compute node). I have failed when trying to use the
Linux command 'socat' and a Python socket.

I need a sample or a link.


I took a look at the libvirt domain XML from one of my running oVirt
VMs and found the following device:

   <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/<vm-id>.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
   </channel>

You might try adding this to your VM.  I'm almost certain that
ovirt-ga searches for a device with the name 'com.redhat.rhevm.vdsm'
when starting up.  Once you are connected, take a look at
vdsm/virt/guestagent.py in the vdsm source code for hints about how to
talk to the agent.
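
For instance, a minimal host-side sketch, assuming the line-delimited JSON
protocol that guestagent.py implements (the socket path below is a
placeholder for the one from your -chardev option):

import json
import socket

# placeholder: use the path from your qemu -chardev option
SOCK_PATH = '/var/lib/libvirt/qemu/ovirt-ga.sock'

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect(SOCK_PATH)

# every ovirt-ga message is a single JSON object per line carrying a
# '__name__' field; 'refresh' asks the agent to resend its full state
s.sendall(json.dumps({'__name__': 'refresh'}) + '\n')

buf = ''
while True:
    data = s.recv(4096)
    if not data:
        break
    buf += data
    while '\n' in buf:
        line, buf = buf.split('\n', 1)
        msg = json.loads(line)
        # e.g. 'heartbeat' messages carry free-ram and memory stats
        print('%s: %s' % (msg.get('__name__'), msg))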

--
Adam Litke
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Test ovirt 3.6 beta 4 : hypervisor very slow

2015-09-09 Thread Simone Tiraboschi
On Tue, Sep 8, 2015 at 7:50 PM, wodel youchi  wrote:

> Hi again,
>
> I did modify the file and restart vdsmd ha-agent broker-agent
> I installed all available new updates on host and on VM engine.
>
> I can start the engine VM, the problem of CPU overload disappeared, but
> the ha-agent started to crash

Sorry again;
this patch will solve it: https://gerrit.ovirt.org/#/c/45940/

It will be included in 3.6 beta 5.


> MainThread::INFO::2015-09-08
> 18:39:35,524::hosted_engine::238::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname)
> Found certificate common name: noveria.wodel.wd
> MainThread::INFO::2015-09-08
> 18:39:35,524::hosted_engine::587::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> Initializing VDSM
> MainThread::INFO::2015-09-08
> 18:39:35,720::hosted_engine::630::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> Connecting the storage
> MainThread::INFO::2015-09-08
> 18:39:35,721::storage_server::110::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Connecting storage server
> MainThread::INFO::2015-09-08
> 18:39:35,721::storage_server::135::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Connecting storage server
> MainThread::INFO::2015-09-08
> 18:39:35,744::hosted_engine::634::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> Preparing images
> MainThread::INFO::2015-09-08
> 18:39:35,744::image::61::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
> Preparing images
> MainThread::INFO::2015-09-08
> 18:39:36,339::hosted_engine::642::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> Reloading vm.conf from the shared storage domain
> MainThread::INFO::2015-09-08
> 18:39:36,365::hosted_engine::492::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
> Initializing ha-broker connection
> MainThread::INFO::2015-09-08
> 18:39:36,365::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Starting monitor ping, options {'addr': '192.168.1.1'}
> MainThread::INFO::2015-09-08
> 18:39:36,366::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Success, id 140597217540688
> MainThread::INFO::2015-09-08
> 18:39:36,367::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Starting monitor mgmt-bridge, options {'use_ssl': 'true', 'bridge_name':
> 'ovirtmgmt', 'address': '0'}
> MainThread::INFO::2015-09-08
> 18:39:36,371::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Success, id 140597217541456
> MainThread::INFO::2015-09-08
> 18:39:36,371::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Starting monitor mem-free, options {'use_ssl': 'true', 'address': '0'}
> MainThread::INFO::2015-09-08
> 18:39:36,373::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Success, id 140597217616592
> MainThread::INFO::2015-09-08
> 18:39:36,373::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Starting monitor cpu-load-no-engine, options {'use_ssl': 'true', 'vm_uuid':
> 'd10e531a-4397-442b-ba2e-f70858b501f3', 'address': '0'}
> MainThread::INFO::2015-09-08
> 18:39:36,394::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Success, id 140597186629072
> MainThread::INFO::2015-09-08
> 18:39:36,394::brokerlink::129::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid':
> 'd10e531a-4397-442b-ba2e-f70858b501f3', 'address': '0'}
> MainThread::INFO::2015-09-08
> 18:39:36,416::brokerlink::140::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
> Success, id 140597195989520
> MainThread::INFO::2015-09-08
> 18:39:36,489::brokerlink::178::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain)
> Success, id 140597196204048
> MainThread::INFO::2015-09-08
> 18:39:36,489::hosted_engine::584::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
> Broker initialized, all submonitors started
> MainThread::INFO::2015-09-08
> 18:39:36,764::hosted_engine::688::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
> Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file:
> /var/run/vdsm/storage/367b3d7b-8540-41c2-92e6-c60088224472/ef07eee6-1b5e-4ab1-a023-a44bc78a191d/a16c50f3-437f-4804-bfad-fd5d59667095)
> MainThread::INFO::2015-09-08
> 18:39:36,765::upgrade::833::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(upgrade)
> Host configuration is already up-to-date
> MainThread::INFO::2015-09-08
> 18:39:36,765::hosted_engine::411::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Reloading vm.conf from the shared st

Re: [ovirt-users] Some VMs in status "not responding" in oVirt interface

2015-09-09 Thread Christian Hailer
Hello,

 

unfortunately I still have this problem… 

Last week I checked all the hardware components. It’s an HP DL580 Gen8 server, 
128GB RAM, 4TB storage.

The firmware of all components is up to date.

I ran a full check of all harddrives, CPUs etc., no problems detected.

 

Last night 3 VMs stopped responding again, so I had to reboot the server this 
morning to regain access. A few minutes ago 2 more VMs stopped responding…

 

The logs just show that the VMs aren’t responding anymore, nothing else… Does 
anybody have an idea how I can debug this issue any further?

 

OS: CentOS Linux release 7.1.1503

 

>rpm -qa|grep ovirt

ovirt-iso-uploader-3.5.2-1.el7.centos.noarch

ovirt-engine-setup-3.5.4.2-1.el7.centos.noarch

ovirt-guest-tools-iso-3.5-7.noarch

ovirt-log-collector-3.5.4-2.el7.centos.noarch

ovirt-engine-userportal-3.5.4.2-1.el7.centos.noarch

ovirt-engine-cli-3.5.0.6-1.el7.centos.noarch

ovirt-engine-tools-3.5.4.2-1.el7.centos.noarch

ovirt-release35-005-1.noarch

ovirt-engine-lib-3.5.4.2-1.el7.centos.noarch

ovirt-engine-setup-plugin-ovirt-engine-common-3.5.4.2-1.el7.centos.noarch

ovirt-host-deploy-java-1.3.2-1.el7.centos.noarch

ovirt-engine-extensions-api-impl-3.5.4.2-1.el7.centos.noarch

ovirt-engine-webadmin-portal-3.5.4.2-1.el7.centos.noarch

ovirt-engine-restapi-3.5.4.2-1.el7.centos.noarch

ovirt-engine-setup-base-3.5.4.2-1.el7.centos.noarch

ovirt-engine-backend-3.5.4.2-1.el7.centos.noarch

ovirt-engine-setup-plugin-websocket-proxy-3.5.4.2-1.el7.centos.noarch

ovirt-host-deploy-1.3.2-1.el7.centos.noarch

ovirt-engine-websocket-proxy-3.5.4.2-1.el7.centos.noarch

ovirt-engine-dbscripts-3.5.4.2-1.el7.centos.noarch

ovirt-engine-jboss-as-7.1.1-1.el7.x86_64

ovirt-engine-sdk-python-3.5.4.0-1.el7.centos.noarch

ovirt-engine-setup-plugin-ovirt-engine-3.5.4.2-1.el7.centos.noarch

ovirt-image-uploader-3.5.1-1.el7.centos.noarch

ovirt-engine-3.5.4.2-1.el7.centos.noarch

 

>rpm -qa|grep vdsm

vdsm-python-4.16.26-0.el7.centos.noarch

vdsm-jsonrpc-java-1.0.15-1.el7.noarch

vdsm-jsonrpc-4.16.26-0.el7.centos.noarch

vdsm-yajsonrpc-4.16.26-0.el7.centos.noarch

vdsm-xmlrpc-4.16.26-0.el7.centos.noarch

vdsm-cli-4.16.26-0.el7.centos.noarch

vdsm-4.16.26-0.el7.centos.x86_64

vdsm-python-zombiereaper-4.16.26-0.el7.centos.noarch

 

>rpm -qa|grep kvm

qemu-kvm-ev-2.1.2-23.el7_1.8.1.x86_64

qemu-kvm-common-ev-2.1.2-23.el7_1.8.1.x86_64

libvirt-daemon-kvm-1.2.8-16.el7_1.3.x86_64

qemu-kvm-tools-ev-2.1.2-23.el7_1.8.1.x86_64

 

>uname -a 

Linux ovirt 3.10.0-229.11.1.el7.x86_64 #1 SMP Thu Aug 6 01:06:18 UTC 2015 
x86_64 x86_64 x86_64 GNU/Linux

 

Any feedback is much appreciated!!

 

Best regards, Christian

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of 
Christian Hailer
Sent: Saturday, 29 August 2015 22:48
To: users@ovirt.org
Subject: [ovirt-users] Some VMs in status "not responding" in oVirt interface

 

Hello,

 

last Wednesday I wanted to update my oVirt 3.5 hypervisor. It is a single 
CentOS 7 server, so I started by suspending the VMs in order to set the oVirt 
engine host to maintenance mode. During the process of suspending the VMs the 
server crashed with a kernel panic…

After restarting the server I installed the updates via yum and restarted the 
server again. Afterwards, all the VMs could be started again. Some hours later 
my monitoring system registered some unresponsive hosts; I had a look in the 
oVirt interface, and 3 of the VMs were in the state “not responding”, marked 
by a question mark. 

I tried to shut down the VMs, but oVirt wasn’t able to do so. I tried to reset 
the status in the database with the SQL statement

 

update vm_dynamic set status = 0 where vm_guid = (select vm_guid from vm_static 
where vm_name = 'MYVMNAME');

 

but that didn’t help, either. Only rebooting the whole hypervisor helped… 
afterwards everything worked again. But only for a few hours, then one of the 
VMs entered the “not responding” state again… again only a reboot helped. 
Yesterday it happened again:

 

2015-08-28 17:44:22,664 INFO  
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-60) [4ef90b12] VM DC 
0f3d1f06-e516-48ce-aa6f-7273c33d3491 moved from Up --> NotResponding

2015-08-28 17:44:22,692 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-60) [4ef90b12] Correlation ID: null, Call Stack: 
null, Custom Event ID: -1, Message: VM DC is not responding.

 

Does anybody know what I can do? Where should I have a look? Hints are greatly 
appreciated!

 

Thanks,

Christian

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host loses all network configuration on update to oVirt 3.5.4

2015-09-09 Thread Ondřej Svoboda
Hi everyone,

it turns out that ifcfg files can be lost even in this very simple scenario:

1) Install/upgrade to VDSM 4.16.21/oVirt 3.5.4
2) Setup a network over eth0
   vdsClient -s 0 setupNetworks 
'networks={pokus:{nic:eth0,bootproto:dhcp,blockingdhcp:true,bridged:false}}'
3) Persist the configuration (declare it safe)
   vdsClient -s 0 setSafeNetworkConfig
4) Add a placeholder in /var/lib/vdsm/netconfback/ifcfg-eth0 with:
# original file did not exist
5) Reboot

I created a fix [1] and prepared it for backport to 3.6 [2] and 3.5 branches 
[3] (so as to appear in 3.5.5) and linked it to 
https://bugzilla.redhat.com/show_bug.cgi?id=1256252

Patrick, to apply the patch you can also run the two commands below and paste 
the diff into patch's stdin (the line after "nicFile.writelines(l)" is a 
single space, so please add it if it gets eaten by e-mail goblins):

cd /usr/share/vdsm/
patch -p1

diff --git vdsm/network/configurators/ifcfg.py 
vdsm/network/configurators/ifcfg.py
index 161a3b2..8332224 100644
--- vdsm/network/configurators/ifcfg.py
+++ vdsm/network/configurators/ifcfg.py
@@ -647,11 +647,21 @@ class ConfigWriter(object):
 def removeNic(self, nic):
 cf = netinfo.NET_CONF_PREF + nic
 self._backup(cf)
-with open(cf) as nicFile:
-hwlines = [line for line in nicFile if line.startswith('HWADDR=')]
+try:
+with open(cf) as nicFile:
+hwlines = [line for line in nicFile if line.startswith(
+'HWADDR=')]
+except IOError as e:
+logging.warning("%s couldn't be read (errno %s)", cf, e.errno)
+try:
+hwlines = ['HWADDR=%s\n' % netinfo.gethwaddr(nic)]
+except IOError as e:
+logging.exception("couldn't determine hardware address of %s "
+  "(errno %s)", nic, e.errno)
+hwlines = []
 l = [self.CONFFILE_HEADER + '\n', 'DEVICE=%s\n' % nic, 'ONBOOT=yes\n',
  'MTU=%s\n' % netinfo.DEFAULT_MTU] + hwlines
-l += 'NM_CONTROLLED=no\n'
+l.append('NM_CONTROLLED=no\n')
 with open(cf, 'w') as nicFile:
 nicFile.writelines(l)
 

Michael, will you please give it a try as well?

Thanks,
Ondra

[1] https://gerrit.ovirt.org/#/c/45893/
[2] https://gerrit.ovirt.org/#/c/45932/
[3] https://gerrit.ovirt.org/#/c/45933/

- Original Message -
> From: "Patrick Hurrelmann" 
> To: "Dan Kenigsberg" 
> Cc: "oVirt Mailing List" 
> Sent: Monday, September 7, 2015 2:46:05 PM
> Subject: Re: [ovirt-users] Host loses all network configuration on update to 
> oVirt 3.5.4
> 
> On 07.09.2015 14:44, Patrick Hurrelmann wrote:
> > On 07.09.2015 13:54, Dan Kenigsberg wrote:
> >> On Mon, Sep 07, 2015 at 11:47:48AM +0200, Patrick Hurrelmann wrote:
> >>> On 06.09.2015 11:30, Dan Kenigsberg wrote:
>  On Fri, Sep 04, 2015 at 10:26:39AM +0200, Patrick Hurrelmann wrote:
> > Hi all,
> >
> > I just updated my existing oVirt 3.5.3 installation (iSCSI
> > hosted-engine on
> > CentOS 7.1). The engine update went fine. Updating the hosts succeeds
> > until the
> > first reboot. After a reboot the host does not come up again. It is
> > missing all
> > network configuration. All network cfgs in
> > /etc/sysconfig/network-scripts are
> > missing except ifcfg-lo. The host boots up without working networking.
> > Using
> > IPMI and config backups, I was able to restore the lost network
> > configs. Once
> > these are restored and the host is rebooted again, all seems to be back
> > to normal.
> > This has now happened to 2 updated hosts (this installation has a total
> > of 4
> > hosts, so 2 more to debug/try). I'm happy to assist in further
> > debugging.
> >
> > Before updating the second host, I gathered some information. All these
> > hosts
> > have 3 physical nics. One is used for the ovirtmgmt bridge and the
> > other 2 are
> > used for iSCSI storage vlans.
> >
> > ifcfgs before update:
> >
> > /etc/sysconfig/network-scripts/ifcfg-em1
> > # Generated by VDSM version 4.16.20-0.el7.centos
> > DEVICE=em1
> > HWADDR=d0:67:e5:f0:e5:c6
> > BRIDGE=ovirtmgmt
> > ONBOOT=yes
> > NM_CONTROLLED=no
>  /etc/sysconfig/network-scripts/ifcfg-lo
> > DEVICE=lo
> > IPADDR=127.0.0.1
> > NETMASK=255.0.0.0
> > NETWORK=127.0.0.0
> > # If you're having problems with gated making 127.0.0.0/8 a martian,
> > # you can change this to something else (255.255.255.255, for example)
> > BROADCAST=127.255.255.255
> > ONBOOT=yes
> > NAME=loopback
> >
> > /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
> > # Generated by VDSM version 4.16.20-0.el7.centos
> > DEVICE=ovirtmgmt
> > TYPE=Bridge
> > DELAY=0
> > STP=off
> > ONBOOT=yes
> > IPADDR=1.2.3.16
> > NETMASK=255.255.255.0
> > GATEWAY=1.2.3.11
> > BOOTPROTO=none
> > DEFROUTE=yes
>

Re: [ovirt-users] why replica 3

2015-09-09 Thread Richard Neuboeck
On 09/09/2015 10:54 AM, Simone Tiraboschi wrote:
> 
> 
> On Wed, Sep 9, 2015 at 10:14 AM, Richard Neuboeck
> mailto:h...@tbi.univie.ac.at>> wrote:
> 
> On 04.09.15 10:02, Simone Tiraboschi wrote:
> > Is there a reason why it has to be exactly replica 3?
> >
> >
> > To have a valid quorum, the system must be able to decide which is
> > the right and safe copy, avoiding an issue called split brain.
> > Under certain circumstances/issues (network issues, hosts down or
> > whatever could happen) the data on different replicas could diverge: if
> > you have two and just two different hosts that each claim that their
> > copy is the right one, there is no way to automatically take the right
> > decision. Having three hosts and setting the quorum accordingly
> > solves/mitigates the issue.
> 
> 
> Thanks for the explanation. I do understand the problem, but since
> I'm somewhat limited in my hardware options, is there a way to
> override this requirement? Meaning, if I change the checks for
> replica 3 in the installation scripts, does something else fail along
> the way?
> 
> 
> I'm advising that it's not a safe configuration so it's not
> recommended for a production environment.
> Having said that, as far as I know it's enforced only in the setup
> script so tweaking it should be enough.
> Otherwise, if you have enough disk space, you can also have a
> different trick: you could create a replica 3 volume with 2 bricks
> from a single host.

I've thought about that, but since that would obviously only help to
fool the installation script, there is nothing else in this setup
that would improve the situation. Worse, the read/write overhead on
the second machine would be a performance downgrade.

> It's not a safe procedure at all because you still have only 2 hosts,
> so it's basically just replica 2, and in case of split brain the
> host with two copies will win by configuration, which is not always
> the right decision.

Right. I'm thinking of trying to add a dummy node as mentioned in
the RHEL documentation. This would (in theory) prevent the read-only
state in the split-brain scenario and make it possible to access the
storage. But still the installation requirement of replica 3 would
not be satisfied.

> 
> In my case coherence checks would come from outside the storage and
> vm host setup and fencing would be applied appropriately.
> 
> 
> Can I ask how?

Multiple machines, separated from the storage and virtualization
machines, that will check communication (in general and of several
services) and try to intervene if something is going awry, first by
accessing the machines directly (if possible) and then by
deactivating those machines via remote management.

Cheers
Richard

> 
> I would very much appreciate it if the particulars of the storage
> setup could be either selected from a list of possibilities or be
> ignored and just a warning be issued that this setup is not
> recommended.
> 
> Thanks!
> Richard
> 
> 
> --
> /dev/null
> 
> 


-- 
/dev/null



signature.asc
Description: OpenPGP digital signature
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How to use ovirt-guest-agent without VDSM?

2015-09-09 Thread Nick Xiao
In my environment I want to use ovirt-ga to replace qemu-ga.
  The hypervisor is Ubuntu 14.04 (OpenStack Icehouse) and the virtual machine
is an Ubuntu 14.04 cloud image.

On the hypervisor I launch an instance and attach a chardev, like:
# ps -ef | grep guest_agent
libvirt+   2649  1  1 Sep08 ?00:15:36
/usr/bin/qemu-system-x86_64 ...
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/org.qemu.guest_agent.0.instance-0210.sock,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
...

The virtual machine has ovirt-guest-agent 1.0.11-1.1 installed and the service
is running OK.

But I don't know how to get memory usage and other information on the
hypervisor (OpenStack compute node). I have failed when trying to use the
Linux command 'socat' and a Python socket.

I need a sample or a link.

Thanks a lot!

Nick Xiao
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ANN] [QE] Bugzilla updates for oVirt Product

2015-09-09 Thread Sandro Bonazzola
The oVirt team is pleased to announce that today oVirt moved to its own
classification within our Bugzilla system as previously anticipated [1].
No longer limited as a set of sub-projects, each building block
(sub-project) of oVirt will be a Bugzilla product.
This will allow tracking of package versions and target releases based on
their own versioning schema.
Each maintainer, for example, will have administrative rights on his or her
Bugzilla sub-project and will be able to change flags,
versions, targets, and components.

As part of the improvements of the Bugzilla tracking system, a flag system
has been added to the oVirt product in order to ease its management [2].
The changes will go into effect in stages; please review the wiki for more
details.

We invite you to review the new tracking system and get involved with oVirt
QA [3] to make oVirt better than ever!

[1] http://community.redhat.com/blog/2015/06/moving-focus-to-the-upstream/
[2] http://www.ovirt.org/Bugzilla_rework
[3] http://www.ovirt.org/OVirt_Quality_Assurance

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] why replica 3

2015-09-09 Thread Simone Tiraboschi
On Wed, Sep 9, 2015 at 10:49 AM, Jorick Astrego 
wrote:

>
>
> On 09/09/2015 10:14 AM, Richard Neuboeck wrote:
> > On 04.09.15 10:02, Simone Tiraboschi wrote:
> >> Is there a reason why it has to be exactly replica 3?
> >>
> >>
> >> To have a valid quorum, the system must be able to decide which is
> >> the right and safe copy, avoiding an issue called split brain.
> >> Under certain circumstances/issues (network issues, hosts down or
> >> whatever could happen) the data on different replicas could diverge: if
> >> you have two and just two different hosts that each claim that their
> >> copy is the right one, there is no way to automatically take the right
> >> decision. Having three hosts and setting the quorum accordingly
> >> solves/mitigates the issue.
> >
> > Thanks for the explanation. I do understand the problem, but since
> > I'm somewhat limited in my hardware options, is there a way to
> > override this requirement? Meaning, if I change the checks for
> > replica 3 in the installation scripts, does something else fail along
> > the way?
> >
> > In my case coherence checks would come from outside the storage and
> > vm host setup and fencing would be applied appropriately.
> >
> > I would very much appreciate it if the particulars of the storage
> > setup could be either selected from a list of possibilities or be
> > ignored and just a warning be issued that this setup is not recommended.
> >
> > Thanks!
> > Richard
> >
> >
> As a side question, in Glusterfs 3.7 there is an "AFR arbiter volume"
> (
> http://gluster.readthedocs.org/en/release-3.7.0-1/Features/afr-arbiter-volumes/)
>
> that will only contain the metadata.
>
> Will ovirt support this in 3.6?
>

Not explicitly, and I never tried it with hosted-engine, but I suspect it
works as long as it reports replica level = 3.

By the way, you can save disk space on the third host, but you still need
three hosts.


> Kind regards,
>
> Jorick Astrego
>
>
>
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> *Netbulae Virtualization Experts *
> --
> Tel: 053 20 30 270 i...@netbulae.eu Staalsteden 4-3A KvK 08198180 Fax:
> 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
> --
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] why replica 3

2015-09-09 Thread Simone Tiraboschi
On Wed, Sep 9, 2015 at 10:14 AM, Richard Neuboeck 
wrote:

> On 04.09.15 10:02, Simone Tiraboschi wrote:
> > Is there a reason why it has to be exactly replica 3?
> >
> >
> > To have a valid quorum, the system must be able to decide which is
> > the right and safe copy, avoiding an issue called split brain.
> > Under certain circumstances/issues (network issues, hosts down or
> > whatever could happen) the data on different replicas could diverge: if
> > you have two and just two different hosts that each claim that their
> > copy is the right one, there is no way to automatically take the right
> > decision. Having three hosts and setting the quorum accordingly
> > solves/mitigates the issue.
>
>
> Thanks for the explanation. I do understand the problem, but since
> I'm somewhat limited in my hardware options, is there a way to
> override this requirement? Meaning, if I change the checks for
> replica 3 in the installation scripts, does something else fail along
> the way?
>

I'm advising that it's not a safe configuration, so it's not recommended for
a production environment.
Having said that, as far as I know it's enforced only in the setup script,
so tweaking it should be enough.
Otherwise, if you have enough disk space, you can also try a different
trick: you could create a replica 3 volume with 2 bricks from a single host.
It's not a safe procedure at all because you still have only 2 hosts, so it's
basically just replica 2, and in case of split brain the host with two
copies will win by configuration, which is not always the right decision.


> In my case coherence checks would come from outside the storage and
> vm host setup and fencing would be applied appropriately.
>

Can I ask how?


> I would very much appreciate it if the particulars of the storage
> setup could be either selected from a list of possibilities or be
> ignored and just a warning be issued that this setup is not recommended.
>
> Thanks!
> Richard
>
>
> --
> /dev/null
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] why replica 3

2015-09-09 Thread Jorick Astrego


On 09/09/2015 10:14 AM, Richard Neuboeck wrote:
> On 04.09.15 10:02, Simone Tiraboschi wrote:
>> Is there a reason why it has to be exactly replica 3?
>>
>>
>> To have a valid quorum, the system must be able to decide which is
>> the right and safe copy, avoiding an issue called split brain.
>> Under certain circumstances/issues (network issues, hosts down or
>> whatever could happen) the data on different replicas could diverge: if
>> you have two and just two different hosts that each claim that their
>> copy is the right one, there is no way to automatically take the right
>> decision. Having three hosts and setting the quorum accordingly
>> solves/mitigates the issue.
>
> Thanks for the explanation. I do understand the problem, but since
> I'm somewhat limited in my hardware options, is there a way to
> override this requirement? Meaning, if I change the checks for
> replica 3 in the installation scripts, does something else fail along
> the way?
>
> In my case coherence checks would come from outside the storage and
> vm host setup and fencing would be applied appropriately.
>
> I would very much appreciate it if the particulars of the storage
> setup could be either selected from a list of possibilities or be
> ignored and just a warning be issued that this setup is not recommended.
>
> Thanks!
> Richard
>
>
As a side question, in Glusterfs 3.7 there is an "AFR arbiter volume"
(http://gluster.readthedocs.org/en/release-3.7.0-1/Features/afr-arbiter-volumes/)
that will only contain the metadata.

Will ovirt support this in 3.6?

Kind regards,

Jorick Astrego




Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270  i...@netbulae.euStaalsteden 4-3A
KvK 08198180
Fax: 053 20 30 271  www.netbulae.eu 7547 TA Enschede
BTW NL821234584B01



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] why replica 3

2015-09-09 Thread Richard Neuboeck
On 04.09.15 10:02, Simone Tiraboschi wrote:
> Is there a reason why it has to be exactly replica 3?
>
>
> To have a valid quorum, the system must be able to decide which is
> the right and safe copy, avoiding an issue called split brain.
> Under certain circumstances/issues (network issues, hosts down or
> whatever could happen) the data on different replicas could diverge: if
> you have two and just two different hosts that each claim that their
> copy is the right one, there is no way to automatically take the right
> decision. Having three hosts and setting the quorum accordingly
> solves/mitigates the issue.


Thanks for the explanation. I do understand the problem, but since
I'm somewhat limited in my hardware options, is there a way to
override this requirement? Meaning, if I change the checks for
replica 3 in the installation scripts, does something else fail along
the way?

In my case coherence checks would come from outside the storage and
vm host setup and fencing would be applied appropriately.

I would very much appreciate it if the particulars of the storage
setup could be either selected from a list of possibilities or be
ignored and just a warning be issued that this setup is not recommended.

Thanks!
Richard


-- 
/dev/null



signature.asc
Description: OpenPGP digital signature
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt vs an external existing Ldap

2015-09-09 Thread Ezio Paglia (Comune GR)

Hi all.

What is the best way to attach oVirt's AAA mechanisms to an existing 
external unencrypted LDAP server?
This LDAP server is currently used for Tomcat and Samba authentication and 
authorization; its schemas should be compatible with oVirt attributes, or at 
least I suppose so.
I had thought of a port redirection through iptables, that is, local port 389 
could be redirected to the remote port 389 of the existing LDAP server, but 
I'd like to learn from your experience.
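
For example, I would verify the redirection with a simple bind against the
local port using python-ldap (the bind DN and password below are
placeholders):

import ldap  # python-ldap

# after the iptables redirect, local port 389 should reach the remote LDAP
conn = ldap.initialize('ldap://127.0.0.1:389')
conn.simple_bind_s('uid=someuser,ou=people,dc=example,dc=local', 'secret')
print(conn.whoami_s())  # confirms the bind landed on the remote directory
conn.unbind_s()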


Thank you in advance.
Ezio

--
Ezio Paglia
Comune di Grosseto
Grosseto (Italy)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users