Hello Michael,
On 02.02.2015 at 00:55, Michael Schefczyk wrote:
- In the web interface of the hosted engine, however (Hosted Engine
Network.pdf, page 3) the required network ovirtmgmt is initially
not connected to bond0 (while it is in reality connected, as ifconfig
shows). When dragging
Hi,
On 26.01.2015 at 23:49, Mikola Rose wrote:
On a hosted-engine --deploy on a machine that has 2 network cards
em1 192.168.0.178 General Network
em2 192.168.1.151 Net that NFS server is on, no dns no gateway
which one would I set as the ovirtmgmt bridge?
Please indicate a nic to set
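Not from the original thread, just an illustrative sketch: the management bridge usually goes on the NIC that carries the default gateway and DNS (em1 here). The prompt can be answered interactively, or pre-seeded via an answer file; the OVEHOSTED_NETWORK/bridgeIf key name is from memory and should be verified against your ovirt-hosted-engine-setup version:

```shell
# Hypothetical answer file; the key name is an assumption, check your
# hosted-engine-setup version before relying on it.
cat > /root/he-answers.conf <<'EOF'
[environment:default]
OVEHOSTED_NETWORK/bridgeIf=str:em1
EOF

# Pre-seed the deploy so "Please indicate a nic to set" is answered with em1
hosted-engine --deploy --config-append=/root/he-answers.conf
```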
Hi all,
On 25.02.2015 at 09:31, Christophe Fergeau wrote:
The Windows clients run Windows 7 Enterprise obtained through Windows
MSDNAA; it is installed on all machines here at our university.
If you need any other information, please feel free to ask.
I'm asking about
Hi,
just a minor problem I guess: I have a small test environment with 2
hosts and a hosted engine on a separate NFS3 share, all running CentOS7.
The VMs are running from an iSCSI storage.
When I want to shutdown the environment I:
- shutdown VMs
- enable global maintenance mode
- shutdown
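Not part of the original mail, but for reference the three steps above map to these commands (a sketch; run on one of the HA hosts):

```shell
# 1. Shut down all regular VMs first (via the web UI or API)

# 2. Enable global maintenance so the HA agents stop
#    restarting the engine VM anywhere in the cluster
hosted-engine --set-maintenance --mode=global

# 3. Cleanly shut down the hosted engine VM itself
hosted-engine --vm-shutdown

# 4. Finally power off each host
shutdown -h now
```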
Hi,
On 10.03.2016 at 17:18, Jean-Marie Perron wrote:
Hello,
OVirt 3.6.3 is installed on CentOS 7.
I use 64-bit Windows 10 client with spice display.
After installing the spice-guest-tools and oVirt-tools-setup on the
Windows 10 VM, the display is always laggy and slow.
The display on a Windows
Hi,
On 18.05.2016 at 16:03, Shmuel Melamud wrote:
Did you set the correct OS type in the VM properties in each test?
It seems I didn't. After setting it to reasonable values the problem was
solved for Debian 8 and CentOS 7 (both KDE4).
Fedora 24 and Kubuntu 16.04 (both Plasma 5) stop
Hi all,
I'm running some tests on OVirt (3.6.5.3) on CentOS 7 and almost
everything works quite well so far.
CentOS 6.x, Windows 7 and 2008R2 work fine with Spice/QXL, so my setup
seems to be ok.
Other Linux systems don't work: Debian 8, Fedora 23/24, CentOS 7.x,
Kubuntu 16.04... CentOS
Hi all,
I'd like to test iSCSI multipathing with OVirt 4.02 and see the
following problem: if I try to add an iSCSI-Bond the host loses
connection to _all_ storage domains.
I guess I'm doing something wrong. :)
I have built a small test environment for this:
The storage is provided by a
Hi,
On 16.08.2016 at 09:26, Elad Ben Aharon wrote:
Currently, your host is connected through a single initiator, the
'Default' interface (Iface Name: default), to 2 targets: tgta and tgtb
I see what you mean, but the "Iface Name" is somewhat confusing here;
it does not mean that the wrong
Hi,
On 15.08.2016 at 16:53, Elad Ben Aharon wrote:
Is the iSCSI domain that is supposed to be connected through the bond
the current master domain?
No, it isn't. An NFS share is the master domain.
Also, can you please provide the output of 'iscsiadm -m session -P3' ?
Yes, of course
Hi all,
On 02.02.2017 at 13:19, Sandro Bonazzola wrote:
did you install/update to 4.1.0? Let us know your experience!
We tend to hear about it only when things don't work well, so let us
know if it works fine for you :-)
I just updated my test environment (3 hosts, hosted engine, iSCSI) to
4.1 and it
Hi Elad,
On 16.08.2016 at 10:52, Elad Ben Aharon wrote:
Please be sure that ovirtmgmt is not part of the iSCSI bond.
Yes, I made sure it is not part of the bond.
There does seem to be a conflict between 'default' and enp9s0f0/enp9s0f1.
Try to put the host in maintenance and then delete the
Hi Jürgen,
On 24.08.2016 at 17:15, InterNetX - Juergen Gotteswinter wrote:
iSCSI & oVirt is an awful combination, no matter whether multipathed or
bonded. It's always a gamble how long it will work and, when it fails,
why it failed.
It's supersensitive to latency, and superfast with setting a host
Hi,
I have a small DC running with OVirt 4.0.3 and I am very pleased so far.
The next thing I want to test is VDI, so I:
- installed a Windows 7 machine
- ran sysprep and created a template
On my hosted-engine I ran:
# ovirt-engine-extension-aaa-ldap-setup
where I chose '3 - Active
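For readers following along, the LDAP integration tool ships in its own package; a minimal sketch (package name as provided in the oVirt 4.x repositories, to be run on the engine VM):

```shell
# Install and launch the interactive LDAP/AD setup
yum install -y ovirt-engine-extension-aaa-ldap-setup
ovirt-engine-extension-aaa-ldap-setup

# Restart the engine afterwards so the new authn/authz profiles load
systemctl restart ovirt-engine
```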
Hi Elad,
I sent you a download message.
thank you,
Uwe
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
an iSCSI bond configured.
Thank you for your patience.
cu,
Uwe
On 18.08.2016 at 11:10, Elad Ben Aharon wrote:
I don't think it's necessary.
Please provide the host's routing table and interfaces list ('ip a' or
ifconfig) while it's configured with the bond.
Thanks
On Tue, Aug 16, 2016 at 4:
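The information requested above can be gathered with a few standard commands (a sketch, not from the original mail):

```shell
# Interface list with addresses
ip a
# Routing table; with an iSCSI bond, the per-interface routes
# are the interesting part
ip route show
# Active iSCSI sessions in full detail (ifaces, targets, attached LUNs)
iscsiadm -m session -P 3
```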
and
'org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand'
return value 'ServerConnectionStatusReturnForXmlRpc:{status='StatusForXmlRpc
[code=5022, message=Message timeout which can be caused by communication
issues]'}
On Wed, Aug 24, 2016 at 4:04 PM, Uwe Laverenz <u...@laverenz.de>
Hi,
On 15.08.2017 at 13:35, Latchezar Filtchev wrote:
1. Is it in production?
Not really, just for testing purposes to provide some kind of shared
storage for OVirt. I like FreeNAS, it's a very nice system but for
production we use a setup with distributed/mirrored storage that
tolerates
Hi,
On 15.08.2017 at 10:50, Latchezar Filtchev wrote:
Just curious – has anyone used FreeNAS as storage for oVirt? My
staging environment is - two virtualization nodes, hosted engine,
FreeNAS as storage (iSCSI hosted storage, iSCSI Data(Master) domain and
NFS shares as ISO and export
Hi,
On 17.07.2017 at 14:11, Devin Acosta wrote:
I am still troubleshooting the issue; I haven't found any resolution
yet. I need to figure it out by this Friday, otherwise I need to look
at Xen or another solution. iSCSI and oVirt seem problematic.
The
Hi,
just to avoid misunderstandings: the workaround I suggested means that I
don't use oVirt's iSCSI bonding at all (because it makes my environment
misbehave in the same way you described).
cu,
Uwe
Hi,
On 19.07.2017 at 04:52, Vinícius Ferrão wrote:
I’m joining the crowd with iSCSI Multipath issues on oVirt here. I’m
trying to enable the feature without success too.
Here’s what I’ve done, step-by-step.
1. Installed oVirt Node 4.1.3 with the following network settings:
eno1 and eno2
Hi,
On 09.05.2017 at 12:01, Gajendra Ravichandran wrote:
I tried to convert using virt-v2v as described at
http://libguestfs.org/virt-v2v.1.html. However, I get an error saying
Debian/Linux cannot be converted.
Yes, virt-v2v only supports a limited number of operating systems,
Debian is just not supported.
Hi all,
I have the following strange problem with virt-viewer 5.0 and 6.0 on
Windows 10: when I connect to Linux VMs running on an oVirt 4.1 cluster
it sometimes is not possible to use e.g. "AltGr + q" to get the "@" sign
or "AltGr + 8" for "[" on my German keyboard. It seems to be a problem
Hi,
On Monday, 21.01.2019, at 06:43 +0100, Uwe Laverenz wrote:
> I will post a bonnie++ result later. If you need more details please
Attached are the results of the smallest setup (my home lab): storage
server is a HP N40L with 16GB RAM, 4x2TB WD RE as RAID10, CentOS 7 with
LIO as iS
Hi John,
On 20.01.19 at 18:32, John Florian wrote:
As for how to get there, whatever exactly that might look like, I'm also
having trouble figuring that out. I figured I would transform the
setup described below into one where each host has:
* 2 NICs bonded with LACP for my ovirtmgmt
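As an illustration of such a setup (not from the original mail; device names and the exact ifcfg layout are assumptions, and on oVirt hosts the engine's "Setup Host Networks" dialog normally writes these files for you):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
DEVICE=bond0
BONDING_OPTS="mode=802.3ad miimon=100"   # 802.3ad = LACP
BRIDGE=ovirtmgmt
ONBOOT=yes
```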
Hi,
On Tuesday, 22.01.2019, at 15:46 +0100, Lucie Leistnerova wrote:
> Yes, it should be supported also in 4.2.8. According to Release
> notes for 4.2.7 this warning is related to 4.3 version.
>
> https://www.ovirt.org/release/4.2.7/
>
> BZ 1623259 Mark clusters with deprecated CPU type
>
Hi Eric,
On 13.04.20 at 18:15, eev...@digitaldatatechs.com wrote:
I have a question for the developers: Why use gluster? Why not Pacemaker
or something with better performance stats?
Just curious.
Eric Evans
if I'm not mistaken, these two have different purposes: gluster(fs) is a
Hi Mark,
On 14.07.20 at 02:14, Mark R wrote:
I'm looking through quite a few bug reports and mailing list threads,
but want to make sure I'm not missing some recent development. It
appears that doing iSCSI with two separate, non-routed subnets is
still not possible with 4.4.x. I have the
On 22.07.20 at 21:55, Mark R wrote:
Thanks, Uwe. Am I understanding correctly that you're just letting
your nodes attach to the iSCSI storage on their own by leaving
"node.startup = automatic" in /etc/iscsi/iscsid.conf so the hosts
attach to all known targets as they boot, long before oVirt
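The setting referred to above lives in /etc/iscsi/iscsid.conf; as a config sketch:

```
# /etc/iscsi/iscsid.conf
# Log in to all discovered targets automatically at boot, so the host
# attaches to both iSCSI paths before oVirt/VDSM activates storage:
node.startup = automatic
```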
Hi.
On 03.02.21 at 08:56, Strahil Nikolov via Users wrote:
but without big software companies it will be quite hard. If Red Hat
shifts to Openshift, I am afraid that this project will be going into
oblivion.
My guess: if this happens, it will probably be because OVirt/RHEV wasn't
successful
Hi Strahil,
On 06.02.21 at 06:26, Strahil Nikolov wrote:
I know several telecoms in Bulgaria use RHV, but they are small clients.
Yet, Openshift with 3 nodes looks quite difficult, while oVirt/RHV excels.
As I understand it, Openshift is mostly about containers, it is a
different product