On Wed, Mar 6, 2019 at 12:49 PM Benny Zlotnik wrote:
> Also, which driver are you planning on trying?
>
> And there are some known issues we fixed in the upcoming 4.3.2,
> like setting correct permissions to /usr/share/ovirt-engine/cinderlib
> it should be owned by the ovirt user
>
> We'll be happy to receive bug reports
So from the profile, it appears the XATTROPs and FINODELKs are way higher
than the number of WRITEs:
...
...
%-latency  Avg-latency  Min-Latency  Max-Latency  No. of calls  Fop
---------  -----------  -----------  -----------  ------------  ---
     0.43    384.83 us     51.00 us          ...            65  ...
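To eyeball which fops dominate in output like the above, a small awk one-liner over `gluster volume profile <vol> info` can help. This is only a sketch assuming the standard cumulative-stats column layout (last column is the fop name, second-to-last the call count); the sample values below are invented for illustration:

```shell
# two sample rows in the profile's column layout (values invented);
# last field is the fop, the field before it the call count
profile='      0.43     384.83 us      51.00 us     912.00 us        650     XATTROP
      0.10     120.00 us      30.00 us     300.00 us         65       WRITE'

# print "<fop> <calls>" so heavy fops like XATTROP/FINODELK stand out
echo "$profile" | awk 'NF >= 2 { print $NF, $(NF-1) }'
```

Sorting that output numerically on the second column gives a quick ranking of which fops a volume is spending its calls on.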
Hi,
Could you share the following pieces of information to begin with -
1. output of `gluster volume info $AFFECTED_VOLUME_NAME`
2. glusterfs version you're running
-Krutika
On Sat, Mar 2, 2019 at 3:38 AM Drew R wrote:
> Saw some people asking for profile info. So I had started a migration
I don't think you can achieve this with only 2 nodes, as you can't protect
yourself from split brain.
oVirt supports only GlusterFS with replica 3 arbiter 1. If you create your own
GlusterFS, you can use glusterd2 with a "remote arbiter" in another location.
That will give you protection from split brain.
Shane,
This may be possible; I'm sure others will chime in. I do think you'd save
yourself a lot of headaches if you were able to do a three-server
hyper-converged infrastructure setup with GlusterFS in either replica 3 or
replica 3 arbiter 1. You will get a very good HA solution out of a 3-node
hyper-converged setup.
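For reference, a replica 3 arbiter 1 volume is created with a single `gluster volume create` invocation. The helper below only assembles that command string for review rather than running it; the host names and brick paths are hypothetical examples, not values from this thread:

```shell
# build (not run) the volume-create command for a replica 3 arbiter 1
# layout across three hosts; volume name and brick paths are examples
make_create_cmd() {
  vol=$1; h1=$2; h2=$3; h3=$4
  printf 'gluster volume create %s replica 3 arbiter 1 %s:/gluster/brick1/%s %s:/gluster/brick1/%s %s:/gluster/arbiter/%s\n' \
    "$vol" "$h1" "$vol" "$h2" "$vol" "$h3" "$vol"
}

make_create_cmd vmstore host1 host2 host3
```

The third brick (the arbiter) stores only metadata, so it can live on a much smaller disk than the two data bricks while still breaking split-brain ties.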
Is it possible to have only two physical hosts with NFS and be able to do VM HA
/ Failover between these hosts?
Both hosts are identical with RAID Drive Arrays of 8TB.
If so, can anybody point me to any docs or examples on exactly how the Storage
setup is done so that NFS will replicate across
Hi,
looking for help...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-
Hi,
Is it possible to install oVirt Hypervisor using a PXE boot install?
I did a simple dhcpd+tftp+httpd setup and configured a test with CentOS
7, and it works for both legacy and EFI.
When I tried with the Hypervisor ISO, the image booted but I didn't find any
packages for installation; digging a l
Hi all,
in the past we have used customized ipxe (to allow boot over network
with 10G cards), now we have finally updated our hypervisors to the
latest ipxe-roms-qemu
Of course the checksum now differs, and during live migration libvirtd
throws this error:
Mar 4 11:37:14 hypevisor-01 libvirtd
On Wed, Mar 6, 2019 at 3:44 PM Bryan Sockel
wrote:
> Hi,
>
> I am looking to implement OVN within my setup, but could use some
> guidance on the implementation. As part of the implementation I am
> planning on moving from the engine being installed on a physical server to
> running as a VM within my environment.
On Wed, Mar 6, 2019 at 3:09 PM Strahil Nikolov
wrote:
> Hi Simone,
>
> thanks for your reply.
>
> >Are you really sure that the issue was on the ping?
> >On storage errors the broker restarts itself, and while the broker is
> >restarting the agent cannot ask the broker to trigger the gateway monitor
Hi,
I am looking to implement OVN within my setup, but could use some guidance
on the implementation. As part of the implementation I am planning on
moving from the engine being installed on a physical server to running as a
VM within my environment.
I do not need to retain any historical data
Thank you!
The ownership of the volume file had changed to root:root. I changed it back
to vdsm:kvm and the hosted engine started.
For anyone else who runs in to this, the file was in:
/rhev/data-center/mnt/glusterSD/ovirtnode-02:_vmstore/79376c46-b80c-4c44-bbb1-80c0714a4b52/images/48ee766b-18
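A quick way to verify the fix stuck is to scan the images directory for anything not owned by vdsm:kvm. This is a sketch; `check_ownership` is a hypothetical helper, and the directory argument would be your own storage-domain path (such as the truncated one above):

```shell
# list image files under a directory that are not owned by vdsm:kvm;
# check_ownership is a hypothetical helper, the path is yours to supply
check_ownership() {
  dir=$1
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    own=$(stat -c '%U:%G' "$f")          # GNU stat: print "user:group"
    [ "$own" = "vdsm:kvm" ] || echo "needs chown vdsm:kvm: $f ($own)"
  done
}
```

Run it against the images/<uuid> directory and chown anything it lists back to vdsm:kvm before retrying `hosted-engine --vm-start`.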
Hi,
On Tue, Mar 05, 2019 at 05:01:30PM +0100, jeanbaptiste.coup...@nfrance.com
wrote:
> Hello Victor,
>
> Thanks for answer.
>
> Attached is the output of remote-viewer with --debug --spice-debug. During this
> debug session I tried several times to paste content (via CTRL+SHIFT+V).
>
> Regards,
> Je
Not sure if this is the same bug I hit, but check ownership of the VM
images. There's a bug in the 4.3 upgrade that changes ownership to root and
causes VMs not to start until you change it back to vdsm.
On Wed, Mar 6, 2019 at 4:57 AM Shawn Southern
wrote:
> After running 'hosted-engine --vm-start', the
Also, which driver are you planning on trying?
And there are some known issues we fixed in the upcoming 4.3.2,
like setting correct permissions to /usr/share/ovirt-engine/cinderlib
it should be owned by the ovirt user
We'll be happy to receive bug reports
On Wed, Mar 6, 2019, 13:44 Benny Zlotnik
Hey Gianluca,
The process of adding a cinderlib DB is similar to the engine DB.
The cinderlib DB will be used by the cinderlib process, so you will not see
anything there until you use this feature.
The "Managed block storage" domain function will be available only if you
configured the cinderlib
Unfortunately we don't have proper packaging for cinderlib at the moment;
it needs to be installed via pip:
pip install cinderlib
You also need to enable the config value ManagedBlockDomainSupported.
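Pulling the steps from this thread together, here they are collected as a printable checklist rather than executed, since they must be run as root on the engine host (the `engine-config` flag name is as given above; the engine restart afterwards is an assumption, as engine-config changes normally take effect only after one):

```shell
# the cinderlib enablement steps described in this thread, gathered
# for review; run them on the engine host, not wherever this prints
steps='pip install cinderlib
engine-config -s ManagedBlockDomainSupported=true
systemctl restart ovirt-engine'
printf '%s\n' "$steps"
```

After the restart, the "Managed Block Storage" domain function should appear when adding a new storage domain.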
On Wed, Mar 6, 2019, 13:24 Gianluca Cecchi
wrote:
> Hello,
> I have updated an environment from
forgot the Gluster Versions:
Hypervisors:
glusterfs-3.12.15-1.el7.x86_64
glusterfs-api-3.12.15-1.el7.x86_64
glusterfs-cli-3.12.15-1.el7.x86_64
glusterfs-client-xlators-3.12.15-1.el7.x86_64
glusterfs-events-3.12.15-1.el7.x86_64
glusterfs-fuse-3.12.15-1.el7.x86_64
glusterfs-geo-replication-3.12.15
Hello,
I have updated an environment from 4.2.8 to 4.3.1.
During setup I selected:
--== PRODUCT OPTIONS ==--
Set up Cinderlib integration
(Currently in tech preview. For more info -
https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.
> On 5 Mar 2019, at 18:35, p.stanifo...@leedsbeckett.ac.uk wrote:
>
> Hello is there a newer version than 5 for virt-viewer for Centos?
That’s a CentOS question really.
But apparently not.
Any specific issue you’re trying to fix?
Thanks,
michal
>
> Thanks,
> Paul S.
I want to try to install oVirt 4.3.1, following the oVirt manual, but I
encountered this error:
[root@localhost ~]# sudo subscription-manager repos
--enable="rhel-7-server-rpms"
System certificates corrupted. Please reregister.
And I have no idea how to deal with it.
Thanks.
After running 'hosted-engine --vm-start', the status of the hosted engine VM is:
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirtnode-01
Host ID: 3
Engine status : {"reaso
Hello,
With my first Gluster storage created from within oVirt, I am getting some
annoying warnings.
On the Hypervisor(s) engine.log :
2019-03-05 13:07:45,281+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler5) [59957167] START,
GetGlust