Thank you Strahil for that.
On Fri, Apr 5, 2019, 06:45 Strahil wrote:
> Adding the Gluster users mailing list.
> On Apr 5, 2019 06:02, Leo David wrote:
>
> Hi Everyone,
> Any thoughts on this ?
>
>
> On Wed, Apr 3, 2019, 17:02 Leo David wrote:
>
> Hi Everyone,
> For a hyperconverged setup started
Hi,
I think you mean to ask about a connection broker for your
VDI infrastructure?
Something like this:
Or
https://www.leostream.com/solution/remote-access-for-virtual-and-physical-workstations/
oVirt has the VM user portal https://github.com/oVirt/ovirt-web-ui , but
I have never
At least, based on the specs I would prefer the LSI 9265-8i, as it supports hot
spares, SSDs, and caching; set it up in RAID 0, but only in replica 3 or
replica 3 arbiter 1 volumes.
Best Regards,
Strahil Nikolov
On Friday, April 5, 2019 at 9:20:57 AM GMT+3, Leo David
wrote:
Thank
Hi Andrej,
I missed pointing out a fact that is probably decisive. Prior to noticing
the error, we upgraded the Cluster & Data Center compatibility version
from 4.1 to 4.3, which caused ovirt-engine to automatically edit all VMs
and modify their compatibility versions as well (with changes
Hi Simone,
in a short mail chain on gluster-users, Amar confirmed my suspicion that Gluster
v5.5 performs a little slower than 3.12.15. As a result, the sanlock
reservations take too much time.
I have updated my setup and uncached (I had used LVM caching in writeback mode)
my data bricks and used
On Fri, Apr 5, 2019 at 10:48 AM Strahil Nikolov
wrote:
> Hi Simone,
>
> in a short mail chain on gluster-users, Amar confirmed my suspicion that
> Gluster v5.5 performs a little slower than 3.12.15.
> As a result, the sanlock reservations take too much time.
>
Thanks for the report!
> I
Hello,
can someone tell me if this is an expected behaviour:
1. I have created a data storage domain exported by nfs-ganesha via NFS
2. Stopped all VMs on the storage domain
3. Set to maintenance and detached (without wipe) the storage domain
3.2. All VMs are gone (which was expected)
4. Imported the
On Fri, Apr 5, 2019 at 9:56 AM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:
>
>
> Mind sharing the created ACLs? (I'm quite positive they will be the
> default ones, but I just have to be sure.) This can be done via "ovn-nbctl
> list acl". With that I can check the ACLs assigned to
On Fri, Apr 5, 2019 at 2:18 PM Strahil Nikolov
wrote:
> >This definitely helps, but in my experience the network speed is
> >really the determining factor here. Can you describe your network configuration?
> >A 10 Gbps net is definitely fine here.
> >A few bonded 1 Gbps NICs could work.
> >A single 1
On Thu, Apr 4, 2019 at 2:04 PM Gianluca Cecchi
wrote:
>
> On Thu, Apr 4, 2019 at 12:07 PM Miguel Duarte de Mora Barroso
> wrote:
>>
>>
>> > Questions:
>> > - what is the role of the "Network port security" option for an OVN
>> > network?
>>
>> It means that newly created ports under that
>This definitely helps, but in my experience the network speed is really
>the determining factor here. Can you describe your network configuration?
>A 10 Gbps net is definitely fine here.
>A few bonded 1 Gbps NICs could work.
>A single 1 Gbps NIC could be an issue.
I have a gigabit interface on my
Forward to the list.
Forwarded Message
Subject:Re: [ovirt-users] Re: VDI broker and oVirt
Date: Fri, 05 Apr 2019 04:52:13 -0400
From: a...@triadic.us
As far as official software, the best you'll find is the user portal.
There is also this...
Dear all
Upgrading from 3.5 to 4.0 or 4.3 is not supported. You need to upgrade to 3.6 first. We've done this and everything worked just fine. Always check out the release notes and upgrade instructions before upgrading.
From
Thanks for the info.
Here is the Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1696621
Andrej
On Fri, 5 Apr 2019 at 10:22, wrote:
> Hi Andrej,
>
> I missed to point a fact that is probably determining. Prior to noticing
> the error, we upgraded the Cluster & Data Center compatibility
On 4/4/19 7:03 AM, Dominik Holler wrote:
> On Sun, 10 Mar 2019 13:45:59 -0400
> John Florian wrote:
>
>> In my oVirt deployment at home, I'm trying to minimize the amount of
>> physical HW and its 24/7 power draw. As such I have the NFS server for
>> my domain virtualized. This is not used for
Also, I see in the notification drawer a message that says:
Storage domains with IDs [ed4d83f8-41a2-41bd-a0cd-6525d9649edb] could not
be synchronized. To synchronize them, please move them to maintenance and
then activate.
However, when I navigate to Compute > Data Centers > Default, the
I am in a severe pinch here. A while back I upgraded from 4.2.8 to 4.3.3
and only had one step remaining and that was to set the cluster compat
level to 4.3 (from 4.2). When I tried this it gave the usual warning that
each VM would have to be rebooted to complete, but then I got my first
unusual
Are you able to access your iSCSI via the /rhev/data-center/mnt... mount point ?
Best Regards,
Strahil Nikolov
On Apr 5, 2019 19:04, John Florian
wrote:
>
> I am in a severe pinch here. A while back I upgraded from 4.2.8 to 4.3.3 and
> only had one step remaining and that was to set the
Hi Simone,
> According to gluster administration guide:
> https://docs.gluster.org/en/latest/Administrator%20Guide/Network%20Configurations%20Techniques/
>
> in the "when to bond" section we can read:
> network throughput limit of client/server << storage throughput limit
>
> 1 GbE (almost
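The rule of thumb quoted above can be checked with back-of-the-envelope numbers. The figures below are illustrative assumptions, not measurements from this thread:

```sh
#!/bin/sh
# Is network throughput limit of client/server << storage throughput limit?
nic_mbit=1000                 # a single 1 GbE NIC
net_mb=$(( nic_mbit / 8 ))    # ~125 MB/s of raw wire speed
ssd_mb=500                    # one SATA SSD brick, assumed sequential rate

echo "network: ${net_mb} MB/s, storage: ${ssd_mb} MB/s"
if [ "$net_mb" -lt $(( ssd_mb / 2 )) ]; then
    echo "network is the bottleneck: bonding (or 10 GbE) should help"
fi
```

With a 10 GbE NIC the same arithmetic gives ~1250 MB/s, at which point a single SSD brick is no longer starved by the network.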
Hi.
I have a few issues after a recent upgrade from 4.3.1 to 4.3.2:
1) Power management is no longer working. I'm using Dell drac7. This
has always worked previously. When I click on the "Test" button, I get:
"Testing in progress. It will take a few seconds. Please wait" but then
it just
> What kind of storage are you using? local?
iSCSI
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
Doh! I am such an idiot!!!
First of all, I meant to say I upgraded to 4.3.2, not 4.3.3. I only installed
ovirt-release43.rpm on the engine. I've gotten so lazy using the upgrade
host feature in the GUI that I completely failed to think of doing this on each
of the hosts. Worse, I've
Hello, I just installed ovirt 4.3.2, in Self-Hosted mode, all the same as in
previous versions. It happens that when I want to create a disk with a user
that is not the admin I get the following error.
"Error while executing action Add Disk to VM: Internal Engine Error"
This happens to me with
What kind of storage are you using? local?
On 2019-04-05 12:26, John Florian wrote:
> Also, I see in the notification drawer a message that says:
>
> Storage domains with IDs [ed4d83f8-41a2-41bd-a0cd-6525d9649edb] could not be
> synchronized. To synchronize them, please move them to
Hi all:
I had an unplanned power outage (generator failed to start, power failure
lasted 3 min longer than UPS batteries). One node didn't survive the
unplanned power outage.
By that, I mean it kernel panic's on boot, and I haven't been able to
capture the KP or the first part of it (just the
I'd say yes, I blow away nodes and reinstall them often, as workarounds for
various upgrade failures.
If you spend more than 20 minutes troubleshooting, it's more time-efficient to
just start over.
On Fri, Apr 5, 2019, 4:42 PM Jim Kusznir wrote:
> Hi all:
>
> I had an unplanned power outage
Hi,
I have just extended the disk of one of my openSUSE VMs and I have noticed that
although the disk is only 140 GiB (in the UI), the VM sees it as 180 GiB.
I think this should not happen at all.
[root@ovirt1 ee8b1dce-c498-47ef-907f-8f1a6fe4e9be]# qemu-img info