On 22.07.20 at 21:55, Mark R wrote:
Thanks, Uwe. Am I understanding correctly that you're just letting
your nodes attach to the iSCSI storage on their own by leaving
"node.startup = automatic" in /etc/iscsi/iscsid.conf so the hosts
attach to all known targets as they boot, long before oVirt
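For reference, the setting in question lives in /etc/iscsi/iscsid.conf. A sketch of the relevant excerpt from a stock open-iscsi configuration (not a copy of anyone's actual file):

```ini
# /etc/iscsi/iscsid.conf (excerpt)
# With "automatic", iscsid logs into every target recorded in the node
# database as soon as the service starts at boot, independent of oVirt.
node.startup = automatic

# Setting it to "manual" instead leaves the login to the management layer:
# node.startup = manual
```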
On 14.07.20 at 02:14, Mark R wrote:
I'm looking through quite a few bug reports and mailing list threads,
but want to make sure I'm not missing some recent development. It
appears that doing iSCSI with two separate, non-routed subnets is
still not possible with 4.4.x. I have the
On 13.04.20 at 18:15, eev...@digitaldatatechs.com wrote:
I have a question for the developers: Why use Gluster? Why not Pacemaker
or something with better performance stats?
If I'm not mistaken, these two have different purposes: Gluster(FS) is a
On Tuesday, 22.01.2019, 15:46 +0100, Lucie Leistnerova wrote:
> Yes, it should be supported also in 4.2.8. According to Release
> notes for 4.2.7 this warning is related to 4.3 version.
> BZ 1623259 Mark clusters with deprecated CPU type
On Monday, 21.01.2019, 06:43 +0100, Uwe Laverenz wrote:
> I will post a bonnie++ result later. If you need more details please
Attached are the results of the smallest setup (my home lab): storage
server is a HP N40L with 16GB RAM, 4x2TB WD RE as RAID10, CentOS 7 with
LIO as iS
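On CentOS 7, LIO is normally configured with targetcli. A minimal sketch of exporting such a RAID10 device as an iSCSI target, with a hypothetical backing device /dev/md0 and illustrative IQN and portal values:

```shell
# Create a block backstore on top of the RAID device (names are examples)
targetcli /backstores/block create name=vmstore dev=/dev/md0

# Create the iSCSI target and export the backstore as a LUN
targetcli /iscsi create iqn.2019-01.de.example:vmstore
targetcli /iscsi/iqn.2019-01.de.example:vmstore/tpg1/luns \
    create /backstores/block/vmstore

# Add a portal on the storage network and persist the configuration
targetcli /iscsi/iqn.2019-01.de.example:vmstore/tpg1/portals \
    create 192.168.1.1 3260
targetcli saveconfig
```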
On 20.01.19 at 18:32, John Florian wrote:
As for how to get there, whatever exactly that might look like, I'm also
having trouble figuring that out. I figured I would transform the
setup described below into one where each host has:
* 2 NICs bonded with LACP for my ovirtmgmt
I have the following strange problem with virt-viewer 5.0 and 6.0 on
Windows 10: when I connect to Linux VMs running on an oVirt 4.1 cluster,
it sometimes is not possible to use e.g. "AltGr + q" to get the "@" sign
or "AltGr + 8" for "[" on my German keyboard. It seems to be a problem
On 15.08.2017 at 13:35, Latchezar Filtchev wrote:
1. Is it in production?
Not really, just for testing purposes to provide some kind of shared
storage for OVirt. I like FreeNAS, it's a very nice system but for
production we use a setup with distributed/mirrored storage that
On 15.08.2017 at 10:50, Latchezar Filtchev wrote:
Just curious: has anyone used FreeNAS as storage for oVirt? My
staging environment is - two virtualization nodes, hosted engine,
FreeNAS as storage (iSCSI hosted storage, iSCSI Data(Master) domain and
NFS shares as ISO and export
On 19.07.2017 at 04:52, Vinícius Ferrão wrote:
I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
trying to enable the feature without success too.
Here’s what I’ve done, step-by-step.
1. Installed oVirt Node 4.1.3 with the following network settings:
eno1 and eno2
just to avoid misunderstandings: the workaround I suggested means that I
don't use oVirt's iSCSI bonding at all (because it lets my environment
misbehave in the same way you described).
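In practice, that kind of workaround usually means binding the iSCSI sessions to the NICs manually with iscsiadm ifaces and letting dm-multipath assemble the paths, outside of oVirt's bond feature. A sketch, with illustrative interface names and portal address:

```shell
# One iscsiadm iface per NIC (names and addresses are examples)
iscsiadm -m iface -I iscsi-a --op=new
iscsiadm -m iface -I iscsi-a --op=update -n iface.net_ifacename -v eno1
iscsiadm -m iface -I iscsi-b --op=new
iscsiadm -m iface -I iscsi-b --op=update -n iface.net_ifacename -v eno2

# Discover the targets through both ifaces, then log in everywhere;
# each path becomes its own session and multipathd merges them into
# a single multipath device
iscsiadm -m discovery -t sendtargets -p 10.0.1.10 -I iscsi-a -I iscsi-b
iscsiadm -m node -L all
```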
Users mailing list
On 17.07.2017 at 14:11, Devin Acosta wrote:
I am still troubleshooting the issue, I haven't found any resolution to
my issue yet. I need to figure it out by this Friday,
otherwise I need to look at Xen or another solution. iSCSI and oVirt
On 09.05.2017 at 12:01, Gajendra Ravichandran wrote:
I tried to convert using virt-v2v as described at
http://libguestfs.org/virt-v2v.1.html. However, I get an error saying
Debian/Linux cannot be converted.
Yes, virt-v2v only supports a limited number of operating systems,
Debian is just not supported.
On 02.02.2017 at 13:19, Sandro Bonazzola wrote:
Did you install/update to 4.1.0? Let us know your experience!
We only end up knowing when things don't work well, so let us know if it
works fine for you :-)
I just updated my test environment (3 hosts, hosted engine, iSCSI) to
4.1 and it
I have a small DC running with OVirt 4.0.3 and I am very pleased so far.
The next thing I want to test is VDI, so I:
- installed a Windows 7 machine
- ran sysprep and created a template
On my hosted-engine I ran:
where I chose '3 - Active
On 24.08.2016 at 17:15, InterNetX - Juergen Gotteswinter wrote:
iSCSI & oVirt is an awful combination, no matter if multipathed or
bonded. It's always gambling how long it will work, and when it fails,
why it failed.
It's supersensitive to latency, and superfast with setting a host
er.vdsbroker.ConnectStorageServerVDSCommand' return value '
[code=5022, message=Message timeout which can be caused by communication
On Wed, Aug 24, 2016 at 4:04 PM, Uwe Laverenz <u...@laverenz.de
I sent you a download message.
an iSCSI bond configured.
Thank you for your patience.
On 18.08.2016 at 11:10, Elad Ben Aharon wrote:
I don't think it's necessary.
Please provide the host's routing table and interfaces list ('ip a' or
ifconfig) while it's configured with the bond.
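For completeness, the requested diagnostics can be collected with iproute2 and iscsiadm (equivalent to the older ifconfig/route output):

```shell
ip a                       # interface addresses and state
ip route                   # kernel routing table
iscsiadm -m session -P 3   # active iSCSI sessions, per interface
```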
On Tue, Aug 16, 2016 at 4:
On 16.08.2016 at 10:52, Elad Ben Aharon wrote:
Please be sure that ovirtmgmt is not part of the iSCSI bond.
Yes, I made sure it is not part of the bond.
It does seem to have a conflict between default and enp9s0f0/enp9s0f1.
Try to put the host in maintenance and then delete the
On 16.08.2016 at 09:26, Elad Ben Aharon wrote:
Currently, your host is connected through a single initiator, the
'Default' interface (Iface Name: default), to 2 targets: tgta and tgtb
I see what you mean, but the "Iface Name" is somewhat misleading here,
it does not mean that the wrong
On 15.08.2016 at 16:53, Elad Ben Aharon wrote:
Is the iSCSI domain that is supposed to be connected through the bond
the current master domain?
No, it isn't. An NFS share is the master domain.
Also, can you please provide the output of 'iscsiadm -m session -P3' ?
Yes, of course
I'd like to test iSCSI multipathing with oVirt 4.0.2 and see the
following problem: if I try to add an iSCSI-Bond the host loses
connection to _all_ storage domains.
I guess I'm doing something wrong. :)
I have built a small test environment for this:
The storage is provided by a
On 18.05.2016 at 16:03, Shmuel Melamud wrote:
Did you set the correct OS type in the VM properties in each test?
It seems I didn't. After setting it to reasonable values the problem was
solved for Debian 8 and CentOS 7 (both KDE4).
Fedora 24 and Kubuntu 16.04 (both Plasma 5) stop
I'm running some tests on OVirt (220.127.116.11) on CentOS 7 and almost
everything works quite well so far.
CentOS 6.x, Windows 7 and 2008R2 work fine with Spice/QXL, so my setup
seems to be ok.
Other Linux systems don't work: Debian 8, Fedora 23/24, CentOS 7.x,
Kubuntu 16.04... CentOS
On 10.03.2016 at 17:18, Jean-Marie Perron wrote:
oVirt 3.6.3 is installed on CentOS 7.
I use a 64-bit Windows 10 client with Spice display.
After installing the spice-guest-tools and oVirt-tools-setup on the
Windows 10 VM, the display is always laggy and slow.
The display on a Windows
On 25.02.2015 at 09:31, Christophe Fergeau wrote:
The Windows clients have Windows 7 Enterprise installed, obtained from
MSDNAA, and it is installed on all machines here at our university.
If you need some other information, please, feel free to ask me.
I'm asking about
just a minor problem I guess: I have a small test environment with 2
hosts and a hosted engine on a separate NFS3 share, all running CentOS7.
The VMs are running from an iSCSI storage.
When I want to shutdown the environment I:
- shutdown VMs
- enable global maintenance mode
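The sequence above corresponds roughly to the following commands, a sketch assuming the standard oVirt hosted-engine tooling (guests are stopped first so the storage is quiesced before the engine goes away):

```shell
# 1. Shut down all regular VMs (from the engine UI or guest OS)

# 2. On one host: stop the HA agents from restarting the engine VM
hosted-engine --set-maintenance --mode=global

# 3. Shut down the engine VM itself
hosted-engine --vm-shutdown

# 4. Finally power off the hosts
shutdown -h now
```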
On 02.02.2015 at 00:55, Michael Schefczyk wrote:
- In the web interface of the hosted engine, however (Hosted Engine
Network.pdf, page 3) the required network ovirtmgmt is initially
not connected to bond0 (while it is in reality connected, as ifconfig
shows). When dragging
On 26.01.2015 at 23:49, Mikola Rose wrote:
On a hosted-engine --deploy on a machine that has 2 network cards
em1 192.168.0.178 General Network
em2 192.168.1.151 Net that NFS server is on, no dns no gateway
which one would I set as the ovirtmgmt bridge?
Please indicate a nic to set