Re: [ovirt-users] Prefered Host to start VM instead of Pinned Host

2016-08-24 Thread Yaniv Dary
You can define a default cluster for this use case.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Fri, Aug 19, 2016 at 8:09 AM, Matt .  wrote:

> Hi Guys,
>
> Is it an idea to have an option, not the first boot option, to set a
> preferred host for a VM to start on?
>
> If you remove this host, it should also not complain about a pinned
> VM, as it would fall back to "any host in cluster" in that case.
>
> It's nice for static VMs that might be started on other
> hosts when the preferred host is gone, dead or whatever.
>
> Cheers,
>
> Matt
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Adding iSCSI Storage

2016-08-24 Thread Elad Ben Aharon
Is this the first domain in the DC?
If so, it sounds like you hit
https://bugzilla.redhat.com/show_bug.cgi?id=1359788.
Also, please provide engine.log and vdsm.log
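For reference, a minimal way to capture both logs, assuming the default install paths:

    # On the engine machine:
    tail -n 1000 /var/log/ovirt-engine/engine.log > /tmp/engine.log.txt
    # On the host that tried to attach the storage domain:
    tail -n 1000 /var/log/vdsm/vdsm.log > /tmp/vdsm.log.txt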

Thanks


On Wed, Aug 24, 2016 at 8:11 PM, Anantha Raghava <
rag...@exzatechconsulting.com> wrote:

> Hi,
>
> Trying to add an iSCSI storage domain to oVirt 4.0.2 but failing to attach
> the Storage Domain to the DC. The added storage domain is constantly locked and the DC
> is constantly in maintenance. How do I get the Storage Domain activated and
> the DC initialised?
> --
>
> Thanks & Regards,
>
>
> Anantha Raghava eXza Technology Consulting & Services
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to start service 'ovirt-imageio-proxy' after engine-setup

2016-08-24 Thread Yedidyah Bar David
On Wed, Aug 24, 2016 at 9:57 PM, Ralf Schenk  wrote:

> Hello List,
>
> After upgrading ovirt-hosted-engine, engine-setup failed with "Failed to start
> service 'ovirt-imageio-proxy'".
>
> If I try to start it manually, journalctl -xe shows the following error:
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
> self._secure_server(config, server)
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]: File
> "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/server.py", line
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
> server_side=True)
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]: File
> "/usr/lib64/python2.7/ssl.py", line 913, in wrap_socket
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
> ciphers=ciphers)
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]: File
> "/usr/lib64/python2.7/ssl.py", line 526, in __init__
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
> self._context.load_cert_chain(certfile, keyfile)
> Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]: IOError:
> [Errno 2] No such file or directory
> Aug 24 20:51:10 engine.mydomain.local systemd[1]:
> ovirt-imageio-proxy.service: main process exited, code=exited,
> status=1/FAILURE
> Aug 24 20:51:10 engine.mydomain.local systemd[1]: Failed to start oVirt
> ImageIO Proxy.
>
> Config (/etc/ovirt-imageio-proxy/ovirt-imageio-proxy.conf) points to a
> non-existing cert
> /etc/pki/ovirt-engine/certs/imageio-proxy.cer
> and nonexisting key:
> /etc/pki/ovirt-engine/keys/imageio-proxy.key.nopass
>

Right, will be fixed in 4.0.3:

https://bugzilla.redhat.com/show_bug.cgi?id=1365451


>
> How do I generate a correct cert and key so the service starts up correctly?
>

There is no simple one-liner to do that for now.

Search for "pki-enroll-pkcs12.sh" for examples if you really want to.

You can copy e.g. websocket-proxy key/cert.
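A rough sketch of that workaround, assuming the default engine PKI layout and the standard websocket-proxy file names (verify the paths exist on your setup before copying):

    # Assumed default locations; keep backups and adjust names as needed.
    cp /etc/pki/ovirt-engine/certs/websocket-proxy.cer \
       /etc/pki/ovirt-engine/certs/imageio-proxy.cer
    cp /etc/pki/ovirt-engine/keys/websocket-proxy.key.nopass \
       /etc/pki/ovirt-engine/keys/imageio-proxy.key.nopass
    systemctl restart ovirt-imageio-proxy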

You can use the 4.0 nightly repo - bugs in MODIFIED are already fixed there:

http://www.ovirt.org/develop/dev-process/install-nightly-snapshot/

You can answer 'no' to 'configure image i/o proxy?', if you don't need it.
It's only needed for the new image uploader:

http://www.ovirt.org/develop/release-management/features/storage/image-upload/

Or you can wait for 4.0.3.

Best,


>
> Bye
> --
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* 
>
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* 
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Klaus Scholzen (RA)
> --
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade datacenter from 3.6 to 4.0

2016-08-24 Thread Michal Skrivanek

> On 24 Aug 2016, at 05:09, Barak Korren  wrote:
> 
> 
> 
> On 23 August 2016 at 19:09, Christophe TREFOIS  > wrote:
> Small add. 
> 
> Shouldn't one update the engine first?
> 
> Nope. 
> You should always update the hosts first. The other way around can work but 
> AFAIK this is what is being tested.

Not really.
We do not define the order, and AFAIR last time we tried to check it was about
50/50 ;-)
It is fully compatible both ways.
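For reference, the engine-side part of the upgrade is roughly the following (a sketch, assuming an EL7 engine machine and the standard 4.0 release rpm location; check the release notes for your exact versions):

    # On the engine machine:
    yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
    yum update "ovirt-*-setup*"
    engine-setup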

> 
> 
> -- 
> Barak Korren
> bkor...@redhat.com 
> RHEV-CI Team
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Failed to start service 'ovirt-imageio-proxy' after engine-setup

2016-08-24 Thread Ralf Schenk
Hello List,

After upgrading ovirt-hosted-engine, engine-setup failed with "Failed to start
service 'ovirt-imageio-proxy'".

If I try to start it manually, journalctl -xe shows the following error:
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
self._secure_server(config, server)
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]: File
"/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/server.py", line
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
server_side=True)
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]: File
"/usr/lib64/python2.7/ssl.py", line 913, in wrap_socket
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
ciphers=ciphers)
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]: File
"/usr/lib64/python2.7/ssl.py", line 526, in __init__
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
self._context.load_cert_chain(certfile, keyfile)
Aug 24 20:51:10 engine.mydomain.local ovirt-imageio-proxy[7635]:
IOError: [Errno 2] No such file or directory
Aug 24 20:51:10 engine.mydomain.local systemd[1]:
ovirt-imageio-proxy.service: main process exited, code=exited,
status=1/FAILURE
Aug 24 20:51:10 engine.mydomain.local systemd[1]: Failed to start oVirt
ImageIO Proxy.

Config (/etc/ovirt-imageio-proxy/ovirt-imageio-proxy.conf) points to a
non-existing cert
/etc/pki/ovirt-engine/certs/imageio-proxy.cer
and nonexisting key:
/etc/pki/ovirt-engine/keys/imageio-proxy.key.nopass

How do I generate a correct cert and key so the service starts up correctly?

Bye
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 

*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Klaus Scholzen (RA)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ETL service sampling

2016-08-24 Thread knarra

Thank you Fernando and Shirly for the explanation.

On 08/24/2016 11:46 PM, Shirly Radco wrote:


The ETL sampling process collects the configurations and statistics 
for the new dashboards.
The engine has a heartbeat that should update every 15 seconds. 
Sampling runs every 20 seconds.
If the engine is busy and the heartbeat does not update for some
reason, the DWH will send an error. It compares the last sync with
the heartbeat.

Heartbeat should be newer than the last sync...

Best,
Shirly Radco


On Aug 24, 2016 12:03 PM, "knarra" > wrote:


Hi All,

 I see the event below getting logged in the events tab.
What is this event related to? Why does this get logged as an error?

ETL service sampling has encountered an error. Please consult the
service log for more details.

Thanks
kasturi

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ETL service sampling

2016-08-24 Thread Shirly Radco
The ETL sampling process collects the configurations and statistics for the
new dashboards.
The engine has a heartbeat that should update every 15 seconds. Sampling
runs every 20 seconds.
If the engine is busy and the heartbeat does not update for some reason,
the DWH will send an error. It compares the last sync with the heartbeat.
Heartbeat should be newer than the last sync...
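If you want to compare the two values yourself, a query like the one below works (a sketch, assuming the default 'engine' database and the 'heartBeat'/'lastSync' variable names in the dwh_history_timekeeping table):

    # Run on the engine machine:
    su - postgres -c "psql engine -c \"SELECT * FROM dwh_history_timekeeping WHERE var_name IN ('heartBeat','lastSync');\""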

Best,
Shirly Radco

On Aug 24, 2016 12:03 PM, "knarra"  wrote:

> Hi All,
>
>  I see the event below getting logged in the events tab. What is
> this event related to? Why does this get logged as an error?
> ETL service sampling has encountered an error. Please consult the service
> log for more details.
>
> Thanks
> kasturi
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] DeprecationWarning

2016-08-24 Thread Wolfgang Bucher
Hello,

I get a message every minute in /var/log/messages:

momd: /usr/lib/python2.7/site-packages/mom/Collectors/GuestMemory.py:52:
DeprecationWarning: BaseException.message has been deprecated as of Python 2.6

momd: self.stats_error('getVmMemoryStats(): %s' % e.message)

Installation is: oVirt Engine Version: 4.0.2.7-1.el7.centos

Thanks

Wolfgang
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Adding iSCSI Storage

2016-08-24 Thread Anantha Raghava

Hi,

Trying to add an iSCSI storage domain to oVirt 4.0.2 but failing to attach
the Storage Domain to the DC. The added storage domain is constantly locked and
the DC is constantly in maintenance. How do I get the Storage Domain
activated and the DC initialised?


--

Thanks & Regards,


Anantha Raghava

eXza Technology Consulting & Services


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem starting VMs

2016-08-24 Thread Wolfgang Bucher
Hello



Using the kernel-lt kernel from elrepo solves all the problems!

Maybe it is a problem with the current kernel from CentOS?
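For anyone who wants to try the same, a rough sketch (assuming the elrepo-release package for EL7 is already installed; package and repo names may differ on your setup):

    # Install the long-term support kernel from the elrepo-kernel repository:
    yum --enablerepo=elrepo-kernel install kernel-lt
    # Check that the new kernel is the default boot entry, then:
    reboot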



Thanks 



Wolfgang



-Original Message-
From: Wolfgang Bucher
Sent: Tue 23 August 2016 20:14
To: Yaniv Kaul
CC: users@ovirt.org (users@ovirt.org)
Subject: Re: [ovirt-users] Problem starting VMs

Hello



I just changed XFS to ext4 and fragmentation is much better than before. I will
also try the kernel-lt from elrepo and test it again.



Thanks



Wolfgang



-Original Message-
From: Yaniv Kaul
Sent: Tue 23 August 2016 19:46
To: Wolfgang Bucher
CC: Charles Gruener
Subject: Re: AW: [ovirt-users] Problem starting VMs


On Aug 23, 2016 7:33 PM, "Wolfgang Bucher"  > wrote:
 >
 > Hello
 >
 >
 > I am using local storage with adaptec raid controller, disk format is raw

Raw is always raw-sparse, so that may explain this somehow, yet still odd that 
Windows installation would cause so much fragmentation.
 I wonder if using the discard hook (and IDE or virtio-scsi) would help - or 
perhaps using a qcow2 makes more sense (create a snapshot right after disk 
creation for example).

>
 >
 > image: e4d797d1-5719-48d0-891e-a36cd4a79c33
 > file format: raw
 > virtual size: 50G (53687091200 bytes)
 > disk size: 8.5G
 >
 >
> This is a fresh installation of W2012. After installation I got this with xfs_db
> -c frag -r /dev/sdb1:
>
> actual 407974, ideal 35, fragmentation factor 99.99%

And the XFS formatted with default parameters?
 Y.

>
 >
 > Thanks
 >
 >
 > Wolfgang
 >
 >
 >> -Original Message-
 >> From: Yaniv Kaul
 >> Sent: Tue 23 August 2016 17:56
 >> To: Wolfgang Bucher
 >> CC: Michal Skrivanek; users@ovirt.org (users@ovirt.org)
 >>
 >> Subject: Re: [ovirt-users] Problem starting VMs
 >>
 >>
 >>
 >> On Tue, Aug 23, 2016 at 6:40 PM, Wolfgang Bucher 
 >>  > 
 >> wrote:
 >>>
 >>> Hello
 >>>
 >>>
 >>> in var log messages i get following :
 >>>
 >>>
 >>> kernel: XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
 >>>
 >>>
 >>> I have this problem on 4 different hosts.
 >>>
 >>>
 >>> This happens during copying files from network to a thin provisioned disk, 
 >>> no problems with preallocated disks.
 >>
 >>
 >> What kind of storage are you using? local storage? Even though, it makes 
 >> little sense to me - the disk is a qcow2 disk, which shouldn't be very 
 >> fragmented as you might think (qcow2 grows in 64K chunks).
 >> It may grow and grow and grow (until you sparsify it), but that's not going 
 >> to cause fragmentation. What causes it to be fragmented? Perhaps the 
 >> internal qcow2 mapping is quite fragmented?
 >> Y.
 >>  
 >>>
 >>>
 >>> Thanks
 >>>
 >>> Wolfgang
 >>>
 >>>
  -Original Message-
  From: Michal Skrivanek
  Sent: Tue 23 August 2016 17:11
  To: Wolfgang Bucher
  CC: Milan Zamazal; users@ovirt.org (users@ovirt.org)
  Subject: Re: [ovirt-users] Problem starting VMs
 
 
 > On 23 Aug 2016, at 11:06, Wolfgang Bucher   > wrote:
 >
 > Thank's
 >
 > but what do you mean with "initialization is finished”
 
 
  until it gets from WaitForLaunch to PoweringUp state, effectively until 
  the qemu process properly starts up
 
 >
 > sometimes the vm crashes while copying files!
 
 
  when exactly? Can you describe exactly what you are doing and what is 
  reported as a reason for crash. When exactly does it crash and how?
 
  Thanks,
  michal
 
 >
 >
 >
 > Wolfgang
 >
 >
 >> -Original Message-
 >> From: Milan Zamazal
 >> Sent: Tue 23 August 2016 16:59
 >> To: Wolfgang Bucher
 >> CC: users@ovirt.org (users@ovirt.org)
 >> Subject: Re: AW: 

Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-24 Thread InterNetX - Juergen Gotteswinter
iSCSI & oVirt is an awful combination, no matter if multipathed or
bonded. It's always gambling how long it will work, and when it fails, why
it failed.

It's supersensitive to latency, and superfast at setting a host to
inactive because the engine thinks something is wrong with it. In most
cases there was no real reason for it.

We had this with several different hardware combinations: self-built
filers on FreeBSD/Illumos & ZFS, an Equallogic SAN, a Nexenta filer.

Been there, done that, won't do it again.

Am 24.08.2016 um 16:04 schrieb Uwe Laverenz:
> Hi Elad,
> 
> thank you very much for clearing things up.
> 
> Initiator/iface 'a' tries to connect target 'b' and vice versa. As 'a'
> and 'b' are in completely separate networks this can never work as long
> as there is no routing between the networks.
> 
> So it seems the iSCSI-bonding feature is not useful for my setup. I
> still wonder how and where this feature is supposed to be used?
> 
> thank you,
> Uwe
> 
> Am 24.08.2016 um 15:35 schrieb Elad Ben Aharon:
>> Thanks.
>>
>> You're getting an iSCSI connection timeout [1], [2]. It means the host
>> cannot connect to the targets from iface: enp9s0f1 nor iface: enp9s0f0.
>>
>> This causes the host to lose its connection to the storage and also,
>> the connection to the engine becomes inactive. Therefore, the host
>> changes its status to Non-responsive [3] and, since it's the SPM, the
>> whole DC, with all its storage domains, becomes inactive.
>>
>>
>> vdsm.log:
>> [1]
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in
>> connectStorageServer
>> conObj.connect()
>>   File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
>> iscsi.addIscsiNode(self._iface, self._target, self._cred)
>>   File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
>> iscsiadm.node_login(iface.name , portalStr,
>> target.iqn)
>>   File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
>> raise IscsiNodeError(rc, out, err)
>> IscsiNodeError: (8, ['Logging in to [iface: enp9s0f0, target:
>> iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260]
>> (multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f0, targ
>> et: iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260].',
>> 'iscsiadm: initiator reported error (8 - connection timed out)',
>> 'iscsiadm: Could not log into all portals'])
>>
>>
>>
>> vdsm.log:
>> [2]
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in
>> connectStorageServer
>> conObj.connect()
>>   File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
>> iscsi.addIscsiNode(self._iface, self._target, self._cred)
>>   File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
>> iscsiadm.node_login(iface.name , portalStr,
>> target.iqn)
>>   File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
>> raise IscsiNodeError(rc, out, err)
>> IscsiNodeError: (8, ['Logging in to [iface: enp9s0f1, target:
>> iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260]
>> (multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f1, target:
>> iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260].',
>> 'iscsiadm: initiator reported error (8 - connection timed out)',
>> 'iscsiadm: Could not log into all portals'])
>>
>>
>> engine.log:
>> [3]
>>
>>
>> 2016-08-24 14:10:23,222 WARN
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-25) [15d1637f] Correlation ID: 15d1637f, Call Stack: null,
>> Custom Event ID:
>>  -1, Message: iSCSI bond 'iBond' was successfully created in Data Center
>> 'Default' but some of the hosts encountered connection issues.
>>
>>
>>
>> 2016-08-24 14:10:23,208 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>
>> (org.ovirt.thread.pool-8-thread-25) [15d1637f] Command
>> 'org.ovirt.engine.core.vdsbrok
>> er.vdsbroker.ConnectStorageServerVDSCommand' return value '
>> ServerConnectionStatusReturnForXmlRpc:{status='StatusForXmlRpc
>> [code=5022, message=Message timeout which can be caused by communication
>> issues]'}
>>
>>
>>
>> On Wed, Aug 24, 2016 at 4:04 PM, Uwe Laverenz > > wrote:
>>
>> Hi Elad,
>>
>> I sent you a download message.
>>
>> thank you,
>> Uwe
>> ___
>> Users mailing list
>> Users@ovirt.org 
>> http://lists.ovirt.org/mailman/listinfo/users
>> 
>>
>>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade datacenter from 3.6 to 4.0

2016-08-24 Thread Dan Yasny
Doesn't it make more sense for this major upgrade to be a simple new
install of engine v4, and importing the existing SDs?

On Wed, Aug 24, 2016 at 6:34 AM, Barak Korren  wrote:

>
>
> On 24 August 2016 at 12:32, Christophe TREFOIS 
> wrote:
>
>> Really? That’s strange, as the engine should be backward compatible with
>> previous versions no?
>>
>>
> Think about it like this - When an update comes out it gets pushed to the
> repos/channels. If you then go and provision a new host with some automated
> system like Foreman, you end up with the new VDSM. Also, since you can have
> a large amount of hosts, you may reasonably choose to use some automated
> system to keep them up to date. It is more reasonable to expect the engine
> machine to have a steadier/slower life cycle.
>
>
>
> --
> *Barak Korren*
> bkor...@redhat.com
> RHEV-CI Team
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to debug no audio in guest?

2016-08-24 Thread Michal Skrivanek

> On 11 Jun 2016, at 05:42, Gianluca Cecchi  wrote:
> 
> Hello,
> I'm testing video and audio capacity inside a guest.
> Guest chosen is CentOS 6 with latest updates.
> oVirt is 3.6.6 on an intel NUC6i5SYH with CentOS 7.2 OS
> 
> BTW: is the audio adapter on the host of any importance?
> In case lspci on host gives
> 00:1f.3 Audio device: Intel Corporation Device 9d70 (rev 21)
> 
> Client connecting from user portal is Fedora 23 on an Asus laptop U36SD where 
> audio works and lspci gives
> 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family 
> High Definition Audio Controller (rev 05)
> 
> On CentOS 6 guest the audio adapter detected by OS with lspci is
> 00:08.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) 
> High Definition Audio Controller (rev 01)
> 
> It seems all ok, apart that on guest actually I don't get any audio, also 
> from test speakers page ;-)
> Both on guest and on client the volume bar is near the maximum.
> 
> Any hints about debugging?
> From host point of view the qemu-kvm command line is this one below
> 
> I don't understand the env variable
> QEMU_AUDIO_DRV=none
> ???
> 
> If it can be of any importance, I initially configured the guest without 
> sound card and in fact in gnome I saw the audio card as "dummy".
> Then I powered off the guest and enabled sound card from user portal edit vm 
> (I see it enabled also from admin portal btw...) and then powered on the VM.
> Now the sound card seems to be present but no audio

Sorry for the late response :/
This would be best answered by the SPICE guys, I guess.
Latest virt-viewer?
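QEMU_AUDIO_DRV=none selects QEMU's null audio backend, so whatever the guest plays is simply discarded; with a SPICE console you would expect QEMU_AUDIO_DRV=spice instead. One quick check on the host is to confirm which sound and graphics devices libvirt actually gave the guest (a sketch; 'c6desktop' is the domain name from the qemu command line below):

    # Read-only query against libvirt on the host:
    virsh -r dumpxml c6desktop | grep -iA3 -e '<sound' -e '<graphics'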

> 
> Thanks in advance,
> Gianluca
> 
> 2016-06-11 09:22:13.698+: starting up libvirt version: 1.2.17, package: 
> 13.el7_2.4 (CentOS BuildSystem  >, 2016-03-
> 31-16:56:26, worker1.bsys.centos.org ), qemu 
> version: 2.3.0 (qemu-kvm-ev-2.3.0-31.el7_2.10.1)
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin 
> QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name c6desktop -S -machine pc-i440
> fx-rhel7.2.0,accel=kvm,usb=off -cpu Broadwell-noTSX -m 
> size=3145728k,slots=16,maxmem=4294967296k -realtime mlock=off -smp 
> 1,maxcpus=16,socket
> s=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=3072 -uuid 
> 68a82ada-a1d2-44d0-85b0-f3a08cc2f579 -smbios type=1,manufacturer=oVirt,produ
> ct=oVirt 
> Node,version=7-2.1511.el7.centos.2.10,serial=AC1EDDD3-CAF1-2711-EE16-B8AEED7F1711,uuid=68a82ada-a1d2-44d0-85b0-f3a08cc2f579
>  -no-user
> -config -nodefaults -chardev 
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-c6desktop/monitor.sock,server,nowait
>  -mon chardev=charmo
> nitor,id=monitor,mode=control -rtc base=2016-06-11T09:22:13,driftfix=slew 
> -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boo
> t menu=on,strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x9.0x7 
> -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x
> 9.0x2 -device 
> ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x9.0x1 -device 
> ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,
> multifunction=on,addr=0x9 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 
> -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pc
> i.0,addr=0x4 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw -device 
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive 
> file=/rhev/data-center/0001-0001-0001-0001-0343/572eabe7-15d0-42c2-8fa9-0bd773e22e2e/images/aff55e62-6a41-4f75-bbd3-78561eae18f3/
> f520473e-8fbe-4886-bb64-921b42edf499,if=none,id=drive-virtio-disk0,format=raw,serial=aff55e62-6a41-4f75-bbd3-78561eae18f3,cache=none,werror=s
> top,rerror=stop,aio=threads -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>  -netdev t
> ap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device 
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x7
>  -chardev soc
> ket,id=charserial0,path=/var/run/ovirt-vmconsole-console/68a82ada-a1d2-44d0-85b0-f3a08cc2f579.soc
> k,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -chardev 
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/68a82ada-a1d2-44d0-85b0-f3a08cc2f579.com.redhat.rhevm.vdsm,server,nowait
>  -device 
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>  -chardev 
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/68a82ada-a1d2-44d0-85b0-f3a08cc2f579.org.qemu.guest_agent.0,server,nowait
>  -device 
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>  -chardev spicevmc,id=charchannel2,name=vdagent -device 
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>  -spice 
> 

Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-24 Thread Uwe Laverenz

Hi Elad,

thank you very much for clearing things up.

Initiator/iface 'a' tries to connect to target 'b' and vice versa. As 'a'
and 'b' are in completely separate networks, this can never work as long
as there is no routing between the networks.
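A simple way to see this on the host is to test each portal from each interface explicitly (a sketch using the addresses from the logs elsewhere in the thread; the cross-network combinations are expected to fail as long as there is no routing):

    # Same-subnet combinations (should answer):
    ping -c 3 -I enp9s0f0 10.0.131.121
    ping -c 3 -I enp9s0f1 10.0.132.121
    # Cross-subnet combinations (these are what the bond tried, and they time out):
    ping -c 3 -I enp9s0f0 10.0.132.121
    ping -c 3 -I enp9s0f1 10.0.131.121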


So it seems the iSCSI-bonding feature is not useful for my setup. I 
still wonder how and where this feature is supposed to be used?


thank you,
Uwe

Am 24.08.2016 um 15:35 schrieb Elad Ben Aharon:

Thanks.

You're getting an iSCSI connection timeout [1], [2]. It means the host
cannot connect to the targets from iface: enp9s0f1 nor iface: enp9s0f0.

This causes the host to lose its connection to the storage and also,
the connection to the engine becomes inactive. Therefore, the host
changes its status to Non-responsive [3] and, since it's the SPM, the
whole DC, with all its storage domains, becomes inactive.


vdsm.log:
[1]
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
  File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
iscsiadm.node_login(iface.name , portalStr,
target.iqn)
  File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
raise IscsiNodeError(rc, out, err)
IscsiNodeError: (8, ['Logging in to [iface: enp9s0f0, target:
iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260]
(multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f0, targ
et: iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260].',
'iscsiadm: initiator reported error (8 - connection timed out)',
'iscsiadm: Could not log into all portals'])



vdsm.log:
[2]
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
  File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
iscsiadm.node_login(iface.name , portalStr,
target.iqn)
  File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
raise IscsiNodeError(rc, out, err)
IscsiNodeError: (8, ['Logging in to [iface: enp9s0f1, target:
iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260]
(multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f1, target:
iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260].',
'iscsiadm: initiator reported error (8 - connection timed out)',
'iscsiadm: Could not log into all portals'])


engine.log:
[3]


2016-08-24 14:10:23,222 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-25) [15d1637f] Correlation ID: 15d1637f, Call Stack: null,
Custom Event ID:
 -1, Message: iSCSI bond 'iBond' was successfully created in Data Center
'Default' but some of the hosts encountered connection issues.



2016-08-24 14:10:23,208 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [15d1637f] Command
'org.ovirt.engine.core.vdsbrok
er.vdsbroker.ConnectStorageServerVDSCommand' return value '
ServerConnectionStatusReturnForXmlRpc:{status='StatusForXmlRpc
[code=5022, message=Message timeout which can be caused by communication
issues]'}



On Wed, Aug 24, 2016 at 4:04 PM, Uwe Laverenz > wrote:

Hi Elad,

I sent you a download message.

thank you,
Uwe
___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-24 Thread Elad Ben Aharon
Thanks.

You're getting an iSCSI connection timeout [1], [2]. It means the host
cannot connect to the targets from iface: enp9s0f1 nor iface: enp9s0f0.

This causes the host to lose its connection to the storage and also, the
connection to the engine becomes inactive. Therefore, the host changes its
status to Non-responsive [3] and, since it's the SPM, the whole DC, with all
its storage domains, becomes inactive.


vdsm.log:
[1]
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
  File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
iscsiadm.node_login(iface.name, portalStr, target.iqn)
  File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
raise IscsiNodeError(rc, out, err)
IscsiNodeError: (8, ['Logging in to [iface: enp9s0f0, target:
iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260] (multiple)'],
['iscsiadm: Could not login to [iface: enp9s0f0, targ
et: iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260].',
'iscsiadm: initiator reported error (8 - connection timed out)', 'iscsiadm:
Could not log into all portals'])



vdsm.log:
[2]
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
  File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
iscsiadm.node_login(iface.name, portalStr, target.iqn)
  File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
raise IscsiNodeError(rc, out, err)
IscsiNodeError: (8, ['Logging in to [iface: enp9s0f1, target:
iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260] (multiple)'],
['iscsiadm: Could not login to [iface: enp9s0f1, target:
iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260].', 'iscsiadm:
initiator reported error (8 - connection timed out)', 'iscsiadm: Could not
log into all portals'])


engine.log:
[3]


2016-08-24 14:10:23,222 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-25) [15d1637f] Correlation ID: 15d1637f, Call Stack: null,
Custom Event ID:
 -1, Message: iSCSI bond 'iBond' was successfully created in Data Center
'Default' but some of the hosts encountered connection issues.



2016-08-24 14:10:23,208 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [15d1637f] Command
'org.ovirt.engine.core.vdsbrok
er.vdsbroker.ConnectStorageServerVDSCommand' return value '
ServerConnectionStatusReturnForXmlRpc:{status='StatusForXmlRpc [code=5022,
message=Message timeout which can be caused by communication issues]'}



On Wed, Aug 24, 2016 at 4:04 PM, Uwe Laverenz  wrote:

> Hi Elad,
>
> I sent you a download message.
>
> thank you,
> Uwe
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-24 Thread Uwe Laverenz

Hi Elad,

I sent you a download message.

thank you,
Uwe
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ETL service sampling

2016-08-24 Thread Fernando Fuentes
Kasturi,

 I had the same issue and here is what Yaniv said:

 

This can happen when the engine is down or very busy. This error seems
to happen only sometimes, which indicates this is the case.
Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109
Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary

 


 In my case my engine was very busy.
 Good luck!

 Regards,

--
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org



On Wed, Aug 24, 2016, at 04:03 AM, knarra wrote:
> Hi All,
>  I see the event below getting logged in the events tab. What is
>  this event related to? Why does this get logged as an error?
> ETL service sampling has encountered an error. Please consult the
> service log for more details.
>
>  Thanks
>  kasturi
> _
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-24 Thread Elad Ben Aharon
Network configuration seems OK.
Please provide engine.log and vdsm.log

Thanks

On Wed, Aug 24, 2016 at 3:22 PM, Uwe Laverenz  wrote:

> Hi,
>
> sorry for the delay, I reinstalled everything, configured the networks,
> attached the iSCSI storage with 2 interfaces and finally created the
> iSCSI-bond:
>
> [root@ovh01 ~]# route
>> Kernel IP routing table
>> Destination     Gateway         Genmask         Flags Metric Ref    Use
>> Iface
>> default hp5406-1-srv.mo 0.0.0.0 UG0  00
>> ovirtmgmt
>> 10.0.24.0   0.0.0.0 255.255.255.0   U 0  00
>> ovirtmgmt
>> 10.0.131.0  0.0.0.0 255.255.255.0   U 0  00
>> enp9s0f0
>> 10.0.132.0  0.0.0.0 255.255.255.0   U 0  00
>> enp9s0f1
>> link-local  0.0.0.0 255.255.0.0 U 1005   00
>> enp9s0f0
>> link-local  0.0.0.0 255.255.0.0 U 1006   00
>> enp9s0f1
>> link-local  0.0.0.0 255.255.0.0 U 1008   00
>> ovirtmgmt
>> link-local  0.0.0.0 255.255.0.0 U 1015   00
>> bond0
>> link-local  0.0.0.0 255.255.0.0 U 1017   00
>> ADMIN
>> link-local  0.0.0.0 255.255.0.0 U 1021   00
>> SRV
>>
>
> and:
>
> [root@ovh01 ~]# ip a
>> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>>valid_lft forever preferred_lft forever
>> inet6 ::1/128 scope host
>>valid_lft forever preferred_lft forever
>> 2: enp13s0:  mtu 1500 qdisc pfifo_fast
>> master ovirtmgmt state UP qlen 1000
>> link/ether e0:3f:49:6d:68:c4 brd ff:ff:ff:ff:ff:ff
>> 3: enp8s0f0:  mtu 1500 qdisc mq
>> master bond0 state UP qlen 1000
>> link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>> 4: enp8s0f1:  mtu 1500 qdisc mq
>> master bond0 state UP qlen 1000
>> link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>> 5: enp9s0f0:  mtu 1500 qdisc mq state
>> UP qlen 1000
>> link/ether 90:e2:ba:11:21:d4 brd ff:ff:ff:ff:ff:ff
>> inet 10.0.131.181/24 brd 10.0.131.255 scope global enp9s0f0
>>valid_lft forever preferred_lft forever
>> 6: enp9s0f1:  mtu 1500 qdisc mq state
>> UP qlen 1000
>> link/ether 90:e2:ba:11:21:d5 brd ff:ff:ff:ff:ff:ff
>> inet 10.0.132.181/24 brd 10.0.132.255 scope global enp9s0f1
>>valid_lft forever preferred_lft forever
>> 7: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
>> link/ether 26:b2:4e:5e:f0:60 brd ff:ff:ff:ff:ff:ff
>> 8: ovirtmgmt:  mtu 1500 qdisc noqueue
>> state UP
>> link/ether e0:3f:49:6d:68:c4 brd ff:ff:ff:ff:ff:ff
>> inet 10.0.24.181/24 brd 10.0.24.255 scope global ovirtmgmt
>>valid_lft forever preferred_lft forever
>> 14: vnet0:  mtu 1500 qdisc pfifo_fast
>> master ovirtmgmt state UNKNOWN qlen 500
>> link/ether fe:16:3e:79:25:86 brd ff:ff:ff:ff:ff:ff
>> inet6 fe80::fc16:3eff:fe79:2586/64 scope link
>>valid_lft forever preferred_lft forever
>> 15: bond0:  mtu 1500 qdisc
>> noqueue state UP
>> link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>> 16: bond0.32@bond0:  mtu 1500 qdisc
>> noqueue master ADMIN state UP
>> link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>> 17: ADMIN:  mtu 1500 qdisc noqueue
>> state UP
>> link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>> 20: bond0.24@bond0:  mtu 1500 qdisc
>> noqueue master SRV state UP
>> link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>> 21: SRV:  mtu 1500 qdisc noqueue state
>> UP
>> link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>>
>
> The host keeps toggling all storage domains on and off as soon as there is
> an iSCSI bond configured.
>
> Thank you for your patience.
>
> cu,
> Uwe
>
>
> Am 18.08.2016 um 11:10 schrieb Elad Ben Aharon:
>
>> I don't think it's necessary.
>> Please provide the host's routing table and interfaces list ('ip a' or
>> ifconfig) while it's configured with the bond.
>>
>> Thanks
>>
>> On Tue, Aug 16, 2016 at 4:39 PM, Uwe Laverenz > > wrote:
>>
>> Hi Elad,
>>
>> Am 16.08.2016 um 10:52 schrieb Elad Ben Aharon:
>>
>> Please be sure that ovirtmgmt is not part of the iSCSI bond.
>>
>>
>> Yes, I made sure it is not part of the bond.
>>
>> It does seem to have a conflict between default and enp9s0f0/
>> enp9s0f1.
>> Try to put the host in maintenance and then delete the iscsi
>> 

Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-24 Thread Uwe Laverenz

Hi,

sorry for the delay, I reinstalled everything, configured the networks, 
attached the iSCSI storage with 2 interfaces and finally created the 
iSCSI-bond:



[root@ovh01 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default hp5406-1-srv.mo 0.0.0.0 UG0  00 
ovirtmgmt
10.0.24.0   0.0.0.0 255.255.255.0   U 0  00 
ovirtmgmt
10.0.131.0  0.0.0.0 255.255.255.0   U 0  00 enp9s0f0
10.0.132.0  0.0.0.0 255.255.255.0   U 0  00 enp9s0f1
link-local  0.0.0.0 255.255.0.0 U 1005   00 enp9s0f0
link-local  0.0.0.0 255.255.0.0 U 1006   00 enp9s0f1
link-local  0.0.0.0 255.255.0.0 U 1008   00 
ovirtmgmt
link-local  0.0.0.0 255.255.0.0 U 1015   00 bond0
link-local  0.0.0.0 255.255.0.0 U 1017   00 ADMIN
link-local  0.0.0.0 255.255.0.0 U 1021   00 SRV


and:


[root@ovh01 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: enp13s0:  mtu 1500 qdisc pfifo_fast master 
ovirtmgmt state UP qlen 1000
link/ether e0:3f:49:6d:68:c4 brd ff:ff:ff:ff:ff:ff
3: enp8s0f0:  mtu 1500 qdisc mq master 
bond0 state UP qlen 1000
link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
4: enp8s0f1:  mtu 1500 qdisc mq master 
bond0 state UP qlen 1000
link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
5: enp9s0f0:  mtu 1500 qdisc mq state UP qlen 
1000
link/ether 90:e2:ba:11:21:d4 brd ff:ff:ff:ff:ff:ff
inet 10.0.131.181/24 brd 10.0.131.255 scope global enp9s0f0
   valid_lft forever preferred_lft forever
6: enp9s0f1:  mtu 1500 qdisc mq state UP qlen 
1000
link/ether 90:e2:ba:11:21:d5 brd ff:ff:ff:ff:ff:ff
inet 10.0.132.181/24 brd 10.0.132.255 scope global enp9s0f1
   valid_lft forever preferred_lft forever
7: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN
link/ether 26:b2:4e:5e:f0:60 brd ff:ff:ff:ff:ff:ff
8: ovirtmgmt:  mtu 1500 qdisc noqueue state UP
link/ether e0:3f:49:6d:68:c4 brd ff:ff:ff:ff:ff:ff
inet 10.0.24.181/24 brd 10.0.24.255 scope global ovirtmgmt
   valid_lft forever preferred_lft forever
14: vnet0:  mtu 1500 qdisc pfifo_fast master 
ovirtmgmt state UNKNOWN qlen 500
link/ether fe:16:3e:79:25:86 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe79:2586/64 scope link
   valid_lft forever preferred_lft forever
15: bond0:  mtu 1500 qdisc noqueue 
state UP
link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
16: bond0.32@bond0:  mtu 1500 qdisc noqueue 
master ADMIN state UP
link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
17: ADMIN:  mtu 1500 qdisc noqueue state UP
link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
20: bond0.24@bond0:  mtu 1500 qdisc noqueue 
master SRV state UP
link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
21: SRV:  mtu 1500 qdisc noqueue state UP
link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff


The host keeps toggling all storage domains on and off as soon as there 
is an iSCSI bond configured.


Thank you for your patience.

cu,
Uwe


Am 18.08.2016 um 11:10 schrieb Elad Ben Aharon:

I don't think it's necessary.
Please provide the host's routing table and interfaces list ('ip a' or
ifconfig) while it's configured with the bond.

Thanks

On Tue, Aug 16, 2016 at 4:39 PM, Uwe Laverenz > wrote:

Hi Elad,

Am 16.08.2016 um 10:52 schrieb Elad Ben Aharon:

Please be sure that ovirtmgmt is not part of the iSCSI bond.


Yes, I made sure it is not part of the bond.

It does seem to have a conflict between default and enp9s0f0/
enp9s0f1.
Try to put the host in maintenance and then delete the iscsi
nodes using
'iscsiadm -m node -o delete'. Then activate the host.


I tried that, I managed to get the iSCSI interface clean, no
"default" anymore. But that didn't solve the problem of the host
becoming "inactive". Not even the NFS domains would come up.

As soon as I remove the iSCSI-bond, the host becomes responsive
again and I can activate all storage domains. Removing the bond also
brings the duplicated "Iface Name" 

[ovirt-users] Stable Next Generation Node Image for 4.0.2?

2016-08-24 Thread Thomas Klute
Dear oVirt community,

what is the correct way to set up a next generation node of the latest
stable version (4.0.2)?

Take the image from
http://resources.ovirt.org/pub/ovirt-4.0/iso/ovirt-node-ng-installer/ ?
Seems to be 4.0.0 and then update?

Or take the image from
http://resources.ovirt.org/pub/ovirt-4.0-snapshot/iso/
Seems to be 4.0.2 but nightly and thus unstable?

Thanks for the clarification,
best regards,
 Thomas

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade datacenter from 3.6 to 4.0

2016-08-24 Thread Barak Korren
On 24 August 2016 at 12:32, Christophe TREFOIS 
wrote:

> Really? That’s strange, as the engine should be backward compatible with
> previous versions no?
>
>
Think about it like this - When an update comes out it gets pushed to the
repos/channels. If you then go and provision a new host with some automated
system like Foreman, you end up with the new VDSM. Also, since you can have
a large amount of hosts, you may reasonably choose to use some automated
system to keep them up to date. It is more reasonable to expect the engine
machine to have a steadier/slower life cycle.
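For completeness, the host-side update itself is typically just the following once the host is in maintenance (a sketch, assuming an EL7 host and the standard 4.0 release rpm location):

    # On each host, after putting it into maintenance from the engine:
    yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm
    yum update
    # Reboot if a new kernel came in, or at least restart vdsmd:
    systemctl restart vdsmd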



-- 
*Barak Korren*
bkor...@redhat.com
RHEV-CI Team
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] info about support status for hyper converged setup

2016-08-24 Thread Sandro Bonazzola
On Tue, Aug 23, 2016 at 11:10 AM, Sahina Bose  wrote:

>
>
> On Tue, Aug 23, 2016 at 2:21 PM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Fri, Aug 19, 2016 at 10:05 AM, Sahina Bose  wrote:
>>
>>> Gluster hyperconverged will be integrated and fully supported in 4.1 -
>>> but already available as preview from 3.6.8.
>>>
>>> I think there are couple of trackers around this. One of which you can
>>> look at for list of upcoming features/fixes -
>>> https://bugzilla.redhat.com/showdependencytree.cgi?id=1277939&hide_resolved=1
>>>
>>
>> Sahina, do you mind taking ownership of
>> http://www.ovirt.org/develop/release-management/features/engine/self-hosted-engine-hyper-converged-gluster-support/
>> and updating it?
>>
>
> Sure, I was in the process of revamping the gluster content on the site. Can you
> merge the pull requests?
>

As soon as I find the time to review them


>
>
>>
>>
>>
>>>
>>> On Wed, Aug 17, 2016 at 6:04 PM, Gianluca Cecchi <
>>> gianluca.cec...@gmail.com> wrote:
>>>
 Is it now in version 4.0.2 (or previous 3.6.x) fully supported? Or
 still in testing?
 As described here:
 http://www.ovirt.org/develop/release-management/features/engine/self-hosted-engine-hyper-converged-gluster-support/

 Only with Gluster I presume...
 Is there any bug tracker to see a list of all potential
 problems/limitations or features not supported?

 Thanks,
 Gianluca


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>> --
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com
>>
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade datacenter from 3.6 to 4.0

2016-08-24 Thread Christophe TREFOIS
Really? That’s strange, as the engine should be backward compatible with 
previous versions no?

Anyway, I believe the devs :)

--

Dr Christophe Trefois, Dipl.-Ing.
Technical Specialist / Post-Doc

UNIVERSITÉ DU LUXEMBOURG

LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine
6, avenue du Swing
L-4367 Belvaux
T: +352 46 66 44 6124
F: +352 46 66 44 6949
http://www.uni.lu/lcsb




This message is confidential and may contain privileged information.
It is intended for the named recipient only.
If you receive it in error please notify me and permanently delete the original 
message and any copies.




On 24 Aug 2016, at 11:09, Barak Korren 
> wrote:



On 23 August 2016 at 19:09, Christophe TREFOIS 
> wrote:
Small add.

Shouldn't one update the engine first?

Nope.
You should always update the hosts first. The other way around can work but 
AFAIK this is what is being tested.


--
Barak Korren
bkor...@redhat.com
RHEV-CI Team

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade datacenter from 3.6 to 4.0

2016-08-24 Thread Barak Korren
On 23 August 2016 at 19:09, Christophe TREFOIS 
wrote:

> Small add.
>
> Shouldn't one update the engine first?
>

Nope.
You should always update the hosts first. The other way around can work but
AFAIK this is what is being tested.


-- 
*Barak Korren*
bkor...@redhat.com
RHEV-CI Team
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ETL service sampling

2016-08-24 Thread knarra

Hi All,

 I see the event below getting logged in the events tab. What is
this event related to? Why does this get logged as an error?


ETL service sampling has encountered an error. Please consult the 
service log for more details.


Thanks
kasturi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users