[ovirt-users] Re: Future of oVirt as RHV is converging with OpenShift

2021-02-06 Thread Uwe Laverenz

Hi Strahil,

On 06.02.21 at 06:26, Strahil Nikolov wrote:


I know several telecoms in Bulgaria use RHV, but they are small clients.

Yet, OpenShift with 3 nodes looks quite difficult, while oVirt/RHV excels.


As I understand it, OpenShift is mostly about containers; it is a 
different product for different customers... Let's hope they know what 
they're doing.


cu,
Uwe


[ovirt-users] Re: Future of oVirt as RHV is converging with OpenShift

2021-02-05 Thread Uwe Laverenz

Hi.

On 03.02.21 at 08:56, Strahil Nikolov via Users wrote:

but without big software companies it will be quite hard. If Red Hat 
shifts to Openshift, I am afraid that this project will be going into 
oblivion.


My guess: if this happens, it is probably because OVirt/RHEV wasn't 
successful enough in attracting paying customers. From time to time I 
have tried to find reference customers, but I never found a real success 
story. At least here in Western Europe I've never heard of a company 
that uses OVirt instead of vSphere/Hyper-V/whatever, with the exception 
of a rumor about a British airline.


And now, with IBM as the new owner, commercial success might become even 
more important.


just my 2 cents.

cu,
Uwe


[ovirt-users] Re: iSCSI multipath with separate subnets... still not possible in 4.4.x?

2020-07-23 Thread Uwe Laverenz

On 22.07.20 at 21:55, Mark R wrote:


Thanks, Uwe. Am I understanding correctly that you're just letting
your nodes attach to the iSCSI storage on their own by leaving
"node.startup = automatic" in /etc/iscsi/iscsid.conf so the hosts
attach to all known targets as they boot, long before oVirt services
ever attempt to connect them? I've considered flipping that to


No, I use OVirt to connect to the iSCSI targets; this works as expected. 
What I do not use are OVirt's iSCSI bonds.


What I configure manually is multipathd, in order to use the round robin policy.


As another poster below mentioned, going the route of two separate
iSCSI bonds in the "iSCSI Multipath" section does work when you're
adding new storage domains. The aspect he talks about, where you
connect both paths and save it, isn't possible if you import an
existing storage domain. When importing, the UI won't expose the
"Add" button that's available when creating a new domain, so you
can't add redundant paths. You can import the storage, then edit it
and discover/login to the other path, but that does _not_ save to the
database and will not persist across reboots or connect on other
hosts you add to the cluster (have to login manually on each). You
can't edit your iSCSI bonds and check the box for these manually
logged in targets either, they'll never populate in that part of the
UI so can't be selected. I think it's just a UI issue because some
very easy fiddling in the database makes it work exactly as you'd
expect (and as it does for domains you newly add instead of
importing).


This sounds quite ugly; I wasn't aware of this.


Sorry, rambling, but I am curious about your "node.startup" setting
in iscsid.conf.  If left at 'automatic' (the default), are your hosts
attaching all the disks as they boot and oVirt doesn't mind that? It
could be the path I'll take as honestly I'd much prefer configuring
the storage connections directly.


As I said, the only thing I change is /etc/multipath.conf:

https://lists.ovirt.org/pipermail/users/2017-July/083308.html
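
For reference, a minimal sketch of the change (the complete file, including the
"# VDSM PRIVATE" marker that keeps vdsmd from overwriting it, is quoted in the
2017 message further down in this digest):

# /etc/multipath.conf (excerpt)
# VDSM PRIVATE
defaults {
    # added manually to spread I/O across both paths:
    path_selector           "round-robin 0"
    path_grouping_policy    multibus
    failback                immediate
}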

cu,
Uwe


[ovirt-users] Re: iSCSI multipath with separate subnets... still not possible in 4.4.x?

2020-07-18 Thread Uwe Laverenz

Hi Mark,

On 14.07.20 at 02:14, Mark R wrote:


I'm looking through quite a few bug reports and mailing list threads,
but want to make sure I'm not missing some recent development.  It
appears that doing iSCSI with two separate, non-routed subnets is
still not possible with 4.4.x. I have the dead-standard iSCSI setup
with two separate switches, separate interfaces on hosts and storage,
and separate subnets that have no gateway and are completely
unreachable except from directly attached interfaces.


I haven't tested 4.4 yet, but AFAIK nothing has changed: OVirt iSCSI 
bonds don't work with separate, isolated subnets:


https://bugzilla.redhat.com/show_bug.cgi?id=1474904

I don't use them, as multipathing generally works without OVirt bonds in 
my setup; I configured multipathd directly to use round robin instead.


cu,
Uwe


[ovirt-users] Re: oVirt 4.4.0 Beta release refresh is now available for testing

2020-04-13 Thread Uwe Laverenz

Hi Eric,
On 13.04.20 at 18:15, eev...@digitaldatatechs.com wrote:
I have a question for the developers: Why use gluster? Why not Pacemaker 
or something with better performance stats?


Just curious.

Eric Evans


If I'm not mistaken, these two have different purposes: Gluster(FS) is 
distributed storage software, and Pacemaker is for resource management of 
HA cluster systems.


regards,
Uwe


[ovirt-users] Re: oVirt 4.2.8 CPU Compatibility

2019-01-23 Thread Uwe Laverenz
Hi,

On Tuesday, 22.01.2019, 15:46 +0100, Lucie Leistnerova wrote:

>  Yes, it should be supported also in 4.2.8. According to Release
> notes for 4.2.7 this warning is related to 4.3 version.
> 
> https://www.ovirt.org/release/4.2.7/
> 
> BZ 1623259 Mark clusters with deprecated CPU type
> In the current release, for compatibility versions 4.2 and 4.3, a
> warning in the Cluster screen indicates that the CPU types currently
> used are not supported in 4.3. The warning enables the user to change
> the cluster CPU type to a supported CPU type.

Does this mean that I would not be able to install OVirt 4.3 on
machines with Opteron 6174 CPUs? Or would I just get a warning?

I was thinking of recycling our old DL385 machines for an OVirt/Gluster
testing lab. :)

cu,
Uwe



[ovirt-users] Re: need network design advice for iSCSI

2019-01-21 Thread Uwe Laverenz
Hi,

On Monday, 21.01.2019, 06:43 +0100, Uwe Laverenz wrote:

> I will post a bonnie++ result later. If you need more details please 

Attached are the results of the smallest setup (my home lab): the storage
server is an HP N40L with 16 GB RAM, 4x2 TB WD RE as RAID10, running CentOS 7
with LIO as the iSCSI target over 2 Gigabit networks (jumbo frames, MTU 9000).

cu,
Uwe

Version  1.97       --Sequential Output-- --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ovirt-vm      7568M   739  97 123014  10 72061   8  1395  99 228302  11 405.9  10
Latency             12475us   13397us     874ms   15675us     247ms   91975us
Version  1.97       --Sequential Create-- ----Random Create----
ovirt-vm            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 12828  58     + +++ 14219  36 13435  62     + +++ 12789  35
Latency             29490us     142us     413ms    1160us      36us   23231us
1.97,1.97,ovirt-vm,1,1548073693,7568M,,739,97,123014,10,72061,8,1395,99,228302,11,405.9,10,16,12828,58,+,+++,14219,36,13435,62,+,+++,12789,35,12475us,13397us,874ms,15675us,247ms,91975us,29490us,142us,413ms,1160us,36us,23231us



[ovirt-users] Re: need network design advice for iSCSI

2019-01-20 Thread Uwe Laverenz

Hi John,

On 20.01.19 at 18:32, John Florian wrote:

As for how to get there, whatever exactly that might look like, I'm also 
having troubles figuring that out.  I figured I would transform the 
setup described below into one where each host has:


  * 2 NICs bonded with LACP for my ovirtmgmt and "main" net
  * 1 NIC for my 1st storage net
  * 1 NIC for my 2nd storage net


This is exactly the setup I use. I have run this successfully with 
CentOS/LIO and FreeNAS iSCSI targets with good performance.


In short:

- 2 separate, isolated networks for iSCSI with dedicated adapters
  on hosts and iSCSI target
- jumbo frames enabled
- no VLAN configuration needed on the hosts, untagged VLANs on the switch
- do _not_ use LACP, let multipathd handle failovers
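
(Not from the original post, just an illustration: with a setup like this, the
paths and the jumbo frames can be verified on each host roughly as follows;
the target IP is a placeholder.)

# check that jumbo frames really pass end-to-end (8972 = 9000 - IP/ICMP headers)
ping -M do -s 8972 <iscsi-target-ip>

# list the iSCSI sessions and the interfaces they use
iscsiadm -m session -P 1

# verify that every LUN shows two paths and the round robin policy
multipath -ll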

Same experience as Vinicius: what did _not_ work for me is the 
iSCSI-Bonding in OVirt. It seems to require that all storage IPs are 
reachable from all other IPs, which is not the case in every setup.


To get multipathing to work I use multipath directly:


https://www.mail-archive.com/users@ovirt.org/msg42735.html


I will post a bonnie++ result later. If you need more details please let 
me know.


cu,
Uwe


[ovirt-users] windows remote-viewer vs. linux vm: strange keyboard problem

2017-11-03 Thread Uwe Laverenz

Hi all,

I have the following strange problem with virt-viewer 5.0 and 6.0 on 
Windows 10: when I connect to Linux VMs running on an OVirt 4.1 cluster, 
it sometimes is not possible to use e.g. "AltGr + q" to get the "@" sign 
or "AltGr + 8" for "[" on my German keyboard. It seems to be a problem 
only when I use a dual-screen setup with Linux VMs.


It is possible to get it working by just moving the mouse to the top 
center of the screen until the remote-viewer panel shows up. After this 
the keyboard works for a short period of time until I work on the other 
screen.


I tested CentOS 7, CentOS 6 and Fedora 26 VMs. Windows 7 VMs do not seem 
to show this problem, so I'm not sure if it is a problem with 
virt-/remote-viewer, OVirt/qemu or the Windows 10 client.


Does anybody else see this behaviour? What would be the best place for a 
bug report? OVirt and Spice/virt-viewer are separate teams, right?


thanks in advance,
Uwe


Re: [ovirt-users] oVirt and FreeNAS

2017-08-16 Thread Uwe Laverenz

Hi,

On 15.08.2017 at 13:35, Latchezar Filtchev wrote:


1. Is it in production?


Not really, just for testing purposes to provide some kind of shared 
storage for OVirt. I like FreeNAS, it's a very nice system, but for 
production we use a setup with distributed/mirrored storage that 
tolerates the loss of a storage device or even a complete server room 
(Datacore on FC infrastructure). I haven't tested OVirt with Datacore 
yet; maybe I'll have time and hardware for this next year.


2. Can you share details about your FreeNAS installation - hardware 
used, RAM installed, Type of Disks - SATA, SAS, SSD, network cards 
used? Do you have SSD for ZIL/L2ARC? 3. The size of your data

domain? Number of virtual machines? .


Nothing spectacular: HP microservers or white boxes with 16-32 GB ECC 
RAM and 4-6 SATA disks (500 GB - 2 TB), 2x1 Gbit/s (Intel) for iSCSI. The 
network is the limiting factor; no extra SSDs are used or needed.


cu,
Uwe


Re: [ovirt-users] oVirt and FreeNAS

2017-08-15 Thread Uwe Laverenz

Hi,

On 15.08.2017 at 10:50, Latchezar Filtchev wrote:

Just curious – did someone use FreeNAS as storage for oVirt? My 
staging environment is - two virtualization nodes, hosted engine, 
FreeNAS as storage (iSCSI hosted storage, iSCSI Data(Master) domain and 
NFS shares as ISO and export domains)


Yes, it works very well (NFS and iSCSI).

cu,
Uwe


Re: [ovirt-users] iSCSI Multipath issues

2017-07-19 Thread Uwe Laverenz

Hi,


On 19.07.2017 at 04:52, Vinícius Ferrão wrote:

I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m 
trying to enable the feature without success too.


Here’s what I’ve done, step-by-step.

1. Installed oVirt Node 4.1.3 with the following network settings:

eno1 and eno2 on a 802.3ad (LACP) Bond, creating a bond0 interface.
eno3 with 9216 MTU.
eno4 with 9216 MTU.
vlan11 on eno3 with 9216 MTU and fixed IP addresses.
vlan12 on eno4 with 9216 MTU and fixed IP addresses.

eno3 and eno4 are my iSCSI MPIO interfaces, completely segregated, on 
different switches.


This is the point: the OVirt implementation of iSCSI-Bonding assumes 
that all network interfaces in the bond can connect/reach all targets, 
including those in the other net(s). The fact that you use separate, 
isolated networks means that this is not the case in your setup (and not 
in mine).


I am not sure if this is a bug, a design flaw or a feature, but as a 
result of this OVirt's iSCSI-Bonding does not work for us.


Please see my mail from yesterday for a workaround.

cu,
Uwe


Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-18 Thread Uwe Laverenz

Hi,

just to avoid misunderstandings: the workaround I suggested means that I 
don't use OVirt's iSCSI-Bonding at all (because it makes my environment 
misbehave in the same way you described).


cu,
Uwe


Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing

2017-07-18 Thread Uwe Laverenz

Hi,


On 17.07.2017 at 14:11, Devin Acosta wrote:

I am still troubleshooting the issue, I haven’t found any resolution to 
my issue at this point yet. I need to figure out by this Friday 
otherwise I need to look at Xen or another solution. iSCSI and oVIRT 
seems problematic.


The configuration of iSCSI-Multipathing via OVirt didn't work for me 
either. IIRC the underlying problem in my case was that I use totally 
isolated networks for each path.


Workaround: to make round robin work you have to enable it by editing 
"/etc/multipath.conf". Just add the 3 lines for the round robin setting 
(see comment in the file) and additionally add the "# VDSM PRIVATE" 
comment to keep vdsmd from overwriting your settings.


My multipath.conf:



# VDSM REVISION 1.3
# VDSM PRIVATE

defaults {
    polling_interval            5
    no_path_retry               fail
    user_friendly_names         no
    flush_on_last_del           yes
    fast_io_fail_tmo            5
    dev_loss_tmo                30
    max_fds                     4096
    # 3 lines added manually for multipathing:
    path_selector               "round-robin 0"
    path_grouping_policy        multibus
    failback                    immediate
}

# Remove devices entries when overrides section is available.
devices {
    device {
        # These settings overrides built-in devices settings. It does not apply
        # to devices without built-in settings (these use the settings in the
        # "defaults" section), or to devices defined in the "devices" section.
        # Note: This is not available yet on Fedora 21. For more info see
        # https://bugzilla.redhat.com/1253799
        all_devs                yes
        no_path_retry           fail
    }
}




To enable the settings:

  systemctl restart multipathd

See if it works:

  multipath -ll


HTH,
Uwe


Re: [ovirt-users] Migrate VirtualBox Vm into

2017-05-12 Thread Uwe Laverenz

Hi,

On 09.05.2017 at 12:01, Gajendra Ravichandran wrote:


I tried to convert using virt-v2v
as described at http://libguestfs.org/virt-v2v.1.html. However, I get an error
(Debian/Linux cannot be converted).


Yes, virt-v2v only supports a limited number of operating systems; 
Debian is just not supported.



I have exported the vm from virtualbox and have the image as .ova. Is
there any way to migrate?


The only way I know of is to convert/migrate the hard disk of your guest 
machine and attach it to a newly created VM.


The necessary tools are "VBoxManage clonemedium ..." and qemu-img(1) for 
example.
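
A rough sketch of such a conversion (file names are placeholders, and the qcow2
step is optional):

# convert the VirtualBox disk to a raw image
VBoxManage clonemedium disk guest.vdi guest.raw --format RAW

# optionally convert the raw image to qcow2 before attaching it to the new VM
qemu-img convert -f raw -O qcow2 guest.raw guest.qcow2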


cu,
Uwe


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-04 Thread Uwe Laverenz

Hi all,

On 02.02.2017 at 13:19, Sandro Bonazzola wrote:


did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well, so let us know if it
works fine for you :-)


I just updated my test environment (3 hosts, hosted engine, iSCSI) to 
4.1 and it worked very well. I initially had a problem migrating my 
engine VM to another host, but this could have been a local problem.


The only thing that could be improved is the online documentation (404 
errors, already addressed in another thread). ;)


Otherwise everything runs very well so far; thank you for your work!

cu,
Uwe


[ovirt-users] OVirt 4.0.3 VDI?

2016-09-01 Thread Uwe Laverenz

Hi,

I have a small DC running with OVirt 4.0.3 and I am very pleased so far. 
The next thing I want to test is VDI, so I:


- installed a Windows 7 machine
- ran sysprep and created a template

On my hosted-engine I ran:

   # ovirt-engine-extension-aaa-ldap-setup

where I chose '3 - Active Directory', entered a non-admin user for 
LDAP queries and successfully tested a login.


next:

   # engine-config -s SysPrepDefaultUser=admin
   # engine-config -s SysPrepDefaultPassword=interactive
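
(To double-check what the engine actually stored, reading the value back with
engine-config -g should work; shown only as an illustration:)

   # engine-config -g SysPrepDefaultUser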

This is my first problem: this supposedly is for the creation of the 
local admin account on the Windows machines, right? These values do not 
have any effect: the web dialog for pool creation doesn't know about the 
password and the DefaultUser is created as "user" instead of "admin". Do 
I have to create a custom sysprep file and insert it into the web dialog 
to override these settings? Is there any other way to configure this?


The second problem is: when I create a pool based on my Windows 7 
template, how do I manage to automatically join the newly created 
virtual machines to our Active Directory? Where can I set the 
credentials of a domain admin that joins the machines to the domain?


Thank you in advance,
Uwe


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-25 Thread Uwe Laverenz

Hi Jürgen,

On 24.08.2016 at 17:15, InterNetX - Juergen Gotteswinter wrote:

iSCSI & OVirt is an awful combination, no matter if multipathed or
bonded. It's always gambling how long it will work, and when it fails, why
it failed.

its supersensitive to latency, and superfast with setting an host to
inactive because the engine thinks something is wrong with it. in most
cases there was no real reason for.

we had this in several different hardware combinations, self built
filers up on FreeBSD/Illumos & ZFS, Equallogic SAN, Nexenta Filer

Been there, done that, won't do again.


Thank you, I take this as a warning. :)

For my testbed I chose to ignore the iSCSI-bond feature and change the 
multipath default to round robin instead.


What kind of storage do you use in production? Fibre channel, gluster, 
ceph, ...?


thanks,
Uwe


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-24 Thread Uwe Laverenz

Hi Elad,

thank you very much for clearing things up.

Initiator/iface 'a' tries to connect to target 'b' and vice versa. As 'a' 
and 'b' are in completely separate networks, this can never work as long 
as there is no routing between the networks.


So it seems the iSCSI-bonding feature is not useful for my setup. I 
still wonder how and where this feature is supposed to be used.


thank you,
Uwe

On 24.08.2016 at 15:35, Elad Ben Aharon wrote:

Thanks.

You're getting an iSCSI connection timeout [1], [2]. It means the host
cannot connect to the targets from iface: enp9s0f1 nor iface: enp9s0f0.

This causes the host to lose its connection to the storage and also,
the connection to the engine becomes inactive. Therefore, the host
changes its status to Non-responsive [3] and since it's the SPM, the
whole DC, with all its storage domains become inactive.


vdsm.log:
[1]
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
  File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
iscsiadm.node_login(iface.name, portalStr,
target.iqn)
  File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
raise IscsiNodeError(rc, out, err)
IscsiNodeError: (8, ['Logging in to [iface: enp9s0f0, target:
iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260]
(multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f0, targ
et: iqn.2005-10.org.freenas.ctl:tgtb, portal: 10.0.132.121,3260].',
'iscsiadm: initiator reported error (8 - connection timed out)',
'iscsiadm: Could not log into all portals'])



vdsm.log:
[2]
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 508, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
  File "/usr/share/vdsm/storage/iscsi.py", line 204, in addIscsiNode
iscsiadm.node_login(iface.name, portalStr,
target.iqn)
  File "/usr/share/vdsm/storage/iscsiadm.py", line 336, in node_login
raise IscsiNodeError(rc, out, err)
IscsiNodeError: (8, ['Logging in to [iface: enp9s0f1, target:
iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260]
(multiple)'], ['iscsiadm: Could not login to [iface: enp9s0f1, target:
iqn.2005-10.org.freenas.ctl:tgta, portal: 10.0.131.121,3260].',
'iscsiadm: initiator reported error (8 - connection timed out)',
'iscsiadm: Could not log into all portals'])


engine.log:
[3]


2016-08-24 14:10:23,222 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-25) [15d1637f] Correlation ID: 15d1637f, Call Stack: null,
Custom Event ID:
 -1, Message: iSCSI bond 'iBond' was successfully created in Data Center
'Default' but some of the hosts encountered connection issues.



2016-08-24 14:10:23,208 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [15d1637f] Command
'org.ovirt.engine.core.vdsbrok
er.vdsbroker.ConnectStorageServerVDSCommand' return value '
ServerConnectionStatusReturnForXmlRpc:{status='StatusForXmlRpc
[code=5022, message=Message timeout which can be caused by communication
issues]'}



On Wed, Aug 24, 2016 at 4:04 PM, Uwe Laverenz <u...@laverenz.de> wrote:

Hi Elad,

I sent you a download message.

thank you,
Uwe





Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-24 Thread Uwe Laverenz

Hi Elad,

I sent you a download message.

thank you,
Uwe


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-24 Thread Uwe Laverenz

Hi,

sorry for the delay, I reinstalled everything, configured the networks, 
attached the iSCSI storage with 2 interfaces and finally created the 
iSCSI-bond:



[root@ovh01 ~]# route
Kernel IP Routentabelle
Ziel            Router          Genmask         Flags Metric Ref    Use Iface
default         hp5406-1-srv.mo 0.0.0.0         UG    0      0        0 ovirtmgmt
10.0.24.0       0.0.0.0         255.255.255.0   U     0      0        0 ovirtmgmt
10.0.131.0      0.0.0.0         255.255.255.0   U     0      0        0 enp9s0f0
10.0.132.0      0.0.0.0         255.255.255.0   U     0      0        0 enp9s0f1
link-local      0.0.0.0         255.255.0.0     U     1005   0        0 enp9s0f0
link-local      0.0.0.0         255.255.0.0     U     1006   0        0 enp9s0f1
link-local      0.0.0.0         255.255.0.0     U     1008   0        0 ovirtmgmt
link-local      0.0.0.0         255.255.0.0     U     1015   0        0 bond0
link-local      0.0.0.0         255.255.0.0     U     1017   0        0 ADMIN
link-local      0.0.0.0         255.255.0.0     U     1021   0        0 SRV


and:


[root@ovh01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp13s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP qlen 1000
    link/ether e0:3f:49:6d:68:c4 brd ff:ff:ff:ff:ff:ff
3: enp8s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
4: enp8s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
5: enp9s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:e2:ba:11:21:d4 brd ff:ff:ff:ff:ff:ff
    inet 10.0.131.181/24 brd 10.0.131.255 scope global enp9s0f0
       valid_lft forever preferred_lft forever
6: enp9s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:e2:ba:11:21:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.0.132.181/24 brd 10.0.132.255 scope global enp9s0f1
       valid_lft forever preferred_lft forever
7: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 26:b2:4e:5e:f0:60 brd ff:ff:ff:ff:ff:ff
8: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether e0:3f:49:6d:68:c4 brd ff:ff:ff:ff:ff:ff
    inet 10.0.24.181/24 brd 10.0.24.255 scope global ovirtmgmt
       valid_lft forever preferred_lft forever
14: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UNKNOWN qlen 500
    link/ether fe:16:3e:79:25:86 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe79:2586/64 scope link
       valid_lft forever preferred_lft forever
15: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
16: bond0.32@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ADMIN state UP
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
17: ADMIN: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
20: bond0.24@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master SRV state UP
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
21: SRV: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff


The host keeps toggling all storage domains on and off as soon as there 
is an iSCSI bond configured.


Thank you for your patience.

cu,
Uwe


On 18.08.2016 at 11:10, Elad Ben Aharon wrote:

I don't think it's necessary.
Please provide the host's routing table and interfaces list ('ip a' or
ifconfig) while it's configured with the bond.

Thanks

On Tue, Aug 16, 2016 at 4:39 PM, Uwe Laverenz <u...@laverenz.de> wrote:

Hi Elad,

On 16.08.2016 at 10:52, Elad Ben Aharon wrote:

Please be sure that ovirtmgmt is not part of the iSCSI bond.


Yes, I made sure it is not part of the bond.

It does seem to have a conflict between default and enp9s0f0/
enp9s0f1.
Try to put the host in maintenance and then delete the iscsi
nodes using
'iscsiadm -m node -o delete'. Then activate the host.


I tried that, I managed to get the iSCSI interface clean, no
"default" anymore. But that didn't solve the problem of the host
becoming "inactive". Not even the NFS domains would come up.

As soon as I remove the iSCSI-bond, the host becomes respo

Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-16 Thread Uwe Laverenz

Hi Elad,

On 16.08.2016 at 10:52, Elad Ben Aharon wrote:


Please be sure that ovirtmgmt is not part of the iSCSI bond.


Yes, I made sure it is not part of the bond.


It does seem to have a conflict between default and enp9s0f0/ enp9s0f1.
Try to put the host in maintenance and then delete the iscsi nodes using
'iscsiadm -m node -o delete'. Then activate the host.


I tried that and managed to get the iSCSI interface clean, no "default" 
anymore. But that didn't solve the problem of the host becoming 
"inactive". Not even the NFS domains would come up.


As soon as I remove the iSCSI-bond, the host becomes responsive again 
and I can activate all storage domains. Removing the bond also brings 
the duplicated "Iface Name" back (but this time causes no problems).


...

I wonder if there is a basic misunderstanding on my side: wouldn't it be 
necessary that all targets are reachable from all interfaces that are 
configured into the bond to make it work?


But this would either mean two interfaces in the same network or routing 
between the iSCSI networks.


Thanks,
Uwe


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-16 Thread Uwe Laverenz

Hi,

On 16.08.2016 at 09:26, Elad Ben Aharon wrote:

Currently, your host is connected through a single initiator, the
'Default' interface (Iface Name: default), to 2 targets: tgta and tgtb


I see what you mean, but the "Iface Name" is somewhat misleading here; 
it does not mean that the wrong interface (ovirtmgmt) is used.
If you have a look at "Iface IPaddress" for both, you can see that the 
correct, dedicated interfaces are used:


Iface IPaddress: 10.0.131.122   (iSCSIA network)
Iface IPaddress: 10.0.132.122   (iSCSIB network)


(Target: iqn.2005-10.org.freenas.ctl:tgta and Target:
iqn.2005-10.org.freenas.ctl:tgtb). Hence, each LUN is exposed from the
storage server via 2 paths.
Since the connection to the storage is done via the 'Default' interface
and not via the 2 iSCSI networks you've configured, currently, the iSCSI
bond is not operational.


Please see above. The storage server's iSCSI addresses aren't even 
reachable from the ovirtmgmt net; they are in completely isolated networks.



For the iSCSI bond to be operational, you'll have to do the following:
- Create 2 networks in RHEVM under the relevant cluster (not sure if
you've already did it) - iSCSI1 and iSCSI2 . Configure both networks to
be non-required networks for the cluster (should be also non-VM networks).
- Attach the networks to the host's 2 interfaces using hosts Setup-networks.
- Create a new iSCSI bond / modify the bond you've created and pick the
2 newly created networks along with all storage targets. Make sure that
the Default network is not part of the bond (usually, the Default
network is the management one - 'ovirtmgmt').
- Put the host in maintenance and re-activate it so the iSCSI sessions
will be refreshed with the new connection specifications.


This is exactly what I did, except that I had to add the iSCSI storage 
first, otherwise the "iSCSI Multipathing" tab does not appear in the 
data center section.


I configured an iSCSI-Bond and the problem seems to be that it leads to 
conflicting iSCSI settings on the host. The host uses the very same 
interface twice, only with a different "Iface Name":


iSCSIA:

Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:cda91b279ac5
Iface IPaddress: 10.0.131.122

Iface Name: enp9s0f0
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:cda91b279ac5
Iface IPaddress: 10.0.131.122


iSCSIB:

Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:cda91b279ac5
Iface IPaddress: 10.0.132.122

Iface Name: enp9s0f1
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:cda91b279ac5
Iface IPaddress: 10.0.132.122

I guess this is the reason why the host has problems attaching the 
storage domain; it toggles all storage domains on and off all the time.


Thank you,
Uwe


Re: [ovirt-users] iSCSI Multipathing -> host inactive

2016-08-16 Thread Uwe Laverenz

Hi,

On 15.08.2016 at 16:53, Elad Ben Aharon wrote:


Is the iSCSI domain that supposed to be connected through the bond the
current master domain?


No, it isn't. An NFS share is the master domain.



Also, can you please provide the output of 'iscsiadm -m session -P3' ?


Yes, of course (meanwhile I have switched to 2 targets, 1 per portal). 
This is _without_ iSCSI-Bond:


[root@ovh01 ~]# iscsiadm -m session -P3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-33.2
Target: iqn.2005-10.org.freenas.ctl:tgta (non-flash)
Current Portal: 10.0.131.121:3260,257
Persistent Portal: 10.0.131.121:3260,257
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:cda91b279ac5
Iface IPaddress: 10.0.131.122
Iface HWaddress: 
Iface Netdev: 
SID: 34
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*
Timeouts:
*
Recovery Timeout: 5
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*
CHAP:
*
username: 
password: 
username_in: 
password_in: 

Negotiated iSCSI params:

HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 131072
FirstBurstLength: 131072
MaxBurstLength: 16776192
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

Attached SCSI devices:

Host Number: 44 State: running
scsi44 Channel 00 Id 0 Lun: 0
Attached scsi disk sdf  State: running
scsi44 Channel 00 Id 0 Lun: 1
Attached scsi disk sdg  State: running
scsi44 Channel 00 Id 0 Lun: 2
Attached scsi disk sdh  State: running
scsi44 Channel 00 Id 0 Lun: 3
Attached scsi disk sdi  State: running
Target: iqn.2005-10.org.freenas.ctl:tgtb (non-flash)
Current Portal: 10.0.132.121:3260,258
Persistent Portal: 10.0.132.121:3260,258
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:cda91b279ac5
Iface IPaddress: 10.0.132.122
Iface HWaddress: 
Iface Netdev: 
SID: 35
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*
Timeouts:
*
Recovery Timeout: 5
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*
CHAP:
*
username: 
password: 
username_in: 
password_in: 

Negotiated iSCSI params:

HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 131072
FirstBurstLength: 131072
MaxBurstLength: 16776192
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

Attached SCSI devices:

Host Number: 45 State: running
scsi45 Channel 00 Id 0 Lun: 0
Attached scsi disk sdj  State: running
scsi45 Channel 00 Id 0 Lun: 1
Attached scsi disk sdk  State: running
scsi45 Channel 00 Id 0 Lun: 2
Attached scsi disk sdl  State: running
scsi45 Channel 00 Id 0 Lun: 3
Attached scsi disk sdm  State: running

And `multipath -ll`:

[root@ovh01 ~]# multipath -ll
36589cfc00fafcc87da5ddd69c7e2 dm-2 FreeNAS ,iSCSI Disk

[ovirt-users] iSCSI Multipathing -> host inactive

2016-08-15 Thread Uwe Laverenz

Hi all,

I'd like to test iSCSI multipathing with OVirt 4.02 and see the 
following problem: if I try to add an iSCSI-Bond the host loses 
connection to _all_ storage domains.


I guess I'm doing something wrong. :)

I have built a small test environment for this:

The storage is provided by a FreeNAS box with two dedicated interfaces 
for two separate iSCSI networks.


Each interface has one address in one network (no VLANs, no trunking). 
For each network there is one portal configured. Both portals point to 
the same target. The target has 4 LUNs.


The host also has two dedicated interfaces for iSCSI and finds the 
target over both portals; all LUNs are selected for use and show 2 paths.


My questions:

1) Is this setup ok or did I miss something?

2) The LUNs already show 2 paths (multipath -ll), one "active" and one 
"enabled", what difference would a Datacenter iSCSI-bond make?


3) What combination of checkboxes do I have to use?

Logical Networks

[ ] ISCSIA
[ ] ISCSIB

Storage Targets

[ ] iqn.2005-10.org.freenas.ctl:tgt01   10.0.131.121   3260
[ ] iqn.2005-10.org.freenas.ctl:tgt01   10.0.132.121   3260


As stated in the beginning: all my tests made the host lose connection 
to all storage domains (NFS included) and I cannot see what I am doing 
wrong.


Thank you very much!

cu,
Uwe


Re: [ovirt-users] Linux guests vs. Spice/QXL?

2016-05-18 Thread Uwe Laverenz

Hi,

On 18.05.2016 at 16:03, Shmuel Melamud wrote:


Did you set the correct OS type in the VM properties in each test?


It seems I didn't. After setting it to reasonable values the problem was 
solved for Debian 8 and CentOS 7 (both KDE4).


Fedora 24 and Kubuntu 16.04 (both Plasma 5) stop insisting on 1024x768 
as soon as you choose OS type "RHEL 7x" (no autosizing though).


I never thought this setting could be so important. :)

Thank you very much!

cu,
Uwe


[ovirt-users] Linux guests vs. Spice/QXL?

2016-05-18 Thread Uwe Laverenz

Hi all,

I'm running some tests on OVirt (3.6.5.3) on CentOS 7 and almost 
everything works quite well so far.


CentOS 6.x, Windows 7 and 2008R2 work fine with Spice/QXL, so my setup 
seems to be ok.


Other Linux systems don't work: Debian 8, Fedora 23/24, CentOS 7.x, 
Kubuntu 16.04... CentOS 7.x even kills its X server every time the user 
logs out. X-)


They all have in common that they show a fixed display resolution of 
1024x768 pixels. This cannot be changed manually, and of course 
automatic display resizing doesn't work either.


All machines have spice-vdagent and ovirt-guest-agent installed and running.

Is this a local problem or is this known/expected behaviour? Is there 
anything I can do to improve this?


thank you,
Uwe


Re: [ovirt-users] Windows 10

2016-03-11 Thread Uwe Laverenz

Hi,

On 10.03.2016 at 17:18, Jean-Marie Perron wrote:

Hello,

OVirt 3.6.3 is installed on CentOS 7.

I use 64-bit Windows 10 client with spice display.

After installing the spice-guest-tools and oVirt-tools-setup on the VM
Windows 10, the display always lag and slow.

The display on a Windows 7 VM is fluid.

On Device Manager and Display adapters, I see the graphics card "Red
Hat QXL Controller"

Is Windows 10 fully supported by oVirt?


I haven't tested this but please have a look at the qxlwddm driver here:

https://people.redhat.com/vrozenfe/qxlwddm/


Some people reported that this works for Win 8/8.1/10:

https://bugzilla.redhat.com/show_bug.cgi?id=895356

cu,
Uwe


Re: [ovirt-users] [Spice-devel] Problem with USB redirection

2015-03-31 Thread Uwe Laverenz

Hi all,

On 25.02.2015 at 09:31, Christophe Fergeau wrote:


The Windows clients have Windows 7 Enterprise installed, obtained from Windows
MSDNAA, and it is installed on all machines here at our university.

If you need some other information, please, feel free to ask me.


I'm asking about remote-viewer (Windows SPICE client ;). Where did you
get it from, and what version is it?


I see the same problem. The SPICE client is the latest version from here:

http://virt-manager.org/download/

It's VirtViewer 2.0.256 for Windows and the problem is that if you try 
to connect a USB device, you only get an info dialog stating that USB 
redirection support is not compiled in. This is the same behaviour as in 
client version 1.0256.


I guess this feature simply doesn't exist in the Windows client (yet)?

bye,
Uwe


[ovirt-users] hosted-engine : how to shutdown hosts?

2015-02-28 Thread Uwe Laverenz

Hi,

just a minor problem, I guess: I have a small test environment with 2 
hosts and a hosted engine on a separate NFSv3 share, all running CentOS 7. 
The VMs are running from iSCSI storage.


When I want to shutdown the environment I:

- shutdown VMs
- enable global maintenance mode
- shutdown -h now on the hosted engine vm
- shutdown -h now on the hosts
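
In command form, the sequence looks roughly like this (the maintenance command
is the standard hosted-engine CLI; the rest is a plain shutdown):

  # on one host: put the hosted-engine HA agents into global maintenance
  hosted-engine --set-maintenance --mode=global

  # on the engine VM, then on each host:
  shutdown -h now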

The problem: instead of shutting down, the hosts perform a reboot after 
a short while.


Is this the expected behaviour or a known bug? How can I cleanly 
shut down my OVirt environment?


Thank you,
Uwe


Re: [ovirt-users] Hosted Engine Setup

2015-02-02 Thread Uwe Laverenz

Hello Michael,

On 02.02.2015 at 00:55, Michael Schefczyk wrote:


- In the web interface of the hosted engine, however (Hosted Engine
Network.pdf, page 3) the required network ovirtmgmt is initially
not connected to bond0 (while it is in reality connected, as ifconfig
shows). When dragging ovirtmtgt to the arrow pointing to bond0, it
does not work. The error message is Bad bond name, it must begin
with the prefix 'bond' followed by a number. This is easy to
understand, as bond0 is a combination of bond and the number zero.


bond0 is the correct one; the error message refers to your other 
bond: "bondC" is not a correct name, you should name it bond1 or bond2.
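
(Illustrative only, not from the original mail: on a CentOS 7 era host, a
correctly named bond would look roughly like this in its ifcfg file; the
bonding options are placeholders.)

# /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
NAME=bond1
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
ONBOOT=yes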


hth,
Uwe


Re: [ovirt-users] hosted-engine setup ovirtmgmt bridge

2015-01-26 Thread Uwe Laverenz

Hi,

On 26.01.2015 at 23:49, Mikola Rose wrote:


On a hosted-engine --deploy on a machine that has 2 network cards
em1 192.168.0.178  General Network
em2 192.168.1.151  Net that NFS server is on,  no dns no gateway

which one would I set as ovirtmgmt bridge

Please indicate a nic to set ovirtmgmt bridge on: (em1, em2) [em1]


The general network would be the correct one (em1).

cu,
Uwe