Hi all,
I've set up an infrastructure with oVirt, using a self-hosted engine.
I use some ansible scripts from my Virtualization Host (the physical
machine), to bootstrap the hosted engine, and create a set of virtual
machines on which I deploy a k8s cluster.
The deployment goes well, and everything works fine, except that the VMs
don't get started after rebooting hosts.
> On Wed, Jan 29, 2020 at 9:28 AM Eugène Ngontang
> wrote:
>
>> I was looking if the "High Availability" option could be used for
>> automatic startup, but Ovirt documentation is pretty clear about it as
>> explained here
Hi,
I'm facing a virtual disk behavior I don't understand.
Currently my VMs are spun up with a boot disk of 25 GB and an additional
disk of 215/45/65 GB depending on the VM.
When logged in to the web UI I see the two disks, but when I ssh into the VM
only the primary boot disk is visible; the other one doesn't get a UUID
(
> https://www.digitalocean.com/community/tutorials/how-to-partition-and-format-storage-devices-in-linux
> - guide using parted I found from a quick google)
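In line with that guide: a freshly attached data disk has no partition table or filesystem yet, which is why blkid shows no UUID for it. A dry-run sketch of the usual steps (assuming the disk appears as /dev/sdb in the guest; check `lsblk` for the real name):

```python
# Hedged sketch: the usual steps to make a new oVirt data disk usable in the
# guest. /dev/sdb is an assumption, not a value from this thread.
DEVICE = "/dev/sdb"

# A brand-new virtual disk has no partition table or filesystem, so blkid
# shows no UUID until it is partitioned and formatted.
commands = [
    ["parted", "--script", DEVICE, "mklabel", "gpt"],
    ["parted", "--script", DEVICE, "mkpart", "primary", "ext4", "0%", "100%"],
    ["mkfs.ext4", DEVICE + "1"],
]

for cmd in commands:
    print(" ".join(cmd))  # dry run; swap for subprocess.run(cmd, check=True) on the VM
```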
>
> On 30/1/20 11:54 am, Eugène Ngontang wrote:
>
> Hi,
>
> I'm facing a virtual disk behavior I don't
/automated systems usually found in the big
> cloud providers like AWS.
>
> On 30 January 2020 6:57:34 pm Eugène Ngontang wrote:
>
>> Hi Joseph.
>>
>> Thanks for your answer. I know Linux disk management perfectly well, but I
>> thought it was up to oVirt to manage all attac
Hi,
I'm trying to find out whether there is an API or an oVirt CLI/SDK that would
let me interact with my oVirt VMs and associated resources.
In my architecture, I have an oVirt virtualization host, with a self-hosted
engine VM to manage VMs.
From the host I have the virsh command to list VMs sta
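For what it's worth, the engine exposes a documented REST API under /ovirt-engine/api (the same API that SDK4 wraps). A minimal sketch that only builds the request; the engine URL and credentials are placeholders, not values from this thread:

```python
import base64
import urllib.request

def build_vms_request(engine_url, user, password):
    # All oVirt collections live under /ovirt-engine/api; /vms lists VMs.
    req = urllib.request.Request(engine_url.rstrip("/") + "/ovirt-engine/api/vms")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/json")
    return req

req = build_vms_request("https://engine.example.com", "admin@internal", "secret")
print(req.full_url)
# urllib.request.urlopen(req) would perform the call against a live engine
```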
rit :
> On Thu, Feb 27, 2020 at 11:45 AM Eugène Ngontang
> wrote:
> >
> > Hi,
> >
> > I'm trying to find out there a sort of API or ovirt CLI/SDK in order to
> be able to interact with my ovirt VMS and associated resources.
> >
> > In my architecture,
020 at 7:22 AM Nathanaël Blanchet
> wrote:
>
>>
>> On 27/02/2020 at 11:00, Yedidyah Bar David wrote:
>>
>> On Thu, Feb 27, 2020 at 11:53 AM Eugène Ngontang
>> wrote:
>>
>> Yes, the Ansible ovirt_vms module is useful, I use it for
>> provisioning
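A minimal sketch of such provisioning with the oVirt Ansible modules (engine URL, credentials, cluster, and VM name below are placeholders; newer Ansible releases name the module ovirt_vm rather than ovirt_vms):

```yaml
- hosts: localhost
  tasks:
    - name: Log in to the engine API
      ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api
        username: admin@internal
        password: secret

    - name: Make sure the VM exists and is running
      ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: myvm
        cluster: Default
        state: running
```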
@Derek,
You're talking about a client that should be up-ported, but before having a
client, my question is: is there a documented API (server side) to interact
with through that client?
Eugene NG
On Thu, Feb 27, 2020 at 2:57 PM, Derek Atkins wrote:
> Eugene,
>
> On Thu, February 27, 2020 4
020 at 4:50 PM, Derek Atkins wrote:
> Yes. The devs call it "SDK4", which has been around for a few releases
> now.
> The CLI, however, uses SDK3, which was removed from Ovirt 4.4.
> Search for "ovirt-shell".
>
> -derek
>
> On Fri, February 28,
TV sometimes ;-).
>
> -derek
>
> On Fri, February 28, 2020 1:12 pm, Eugène Ngontang wrote:
> > Yes I know ovirt-shell.
> >
> > But if the interface (API) is well exposed, we could code an ad-hoc
> > client ourselves to interact with it, as we know how it's defined
Hi,
This is not really an oVirt matter, but I'll try my luck in case anyone here
has already had this kind of issue or could help.
I have two bare-metal servers in two different network subnets with the same
release version of RHVH/oVirt.
I use an automated script to deploy VMs on both, and provision a doc
MT+02:00, "Eugène Ngontang" <
> sympav...@gmail.com> wrote:
> >Hi,
> >
> >This is not really an OVirt matter but I'll try my chance if anyone
> >here
> >has already had this kind of issue or could help
> >
> >I've two bare metal serv
On Fri, Mar 6, 2020 at 7:34 AM, Strahil Nikolov
wrote:
> On March 6, 2020 12:26:24 AM GMT+02:00, "Eugène Ngontang" <
> sympav...@gmail.com> wrote:
> >Yes @Strahil, I also thought about mirrors list, but you see my
> >repolist
> >output above doesn'
Hi,
I'm facing a strange issue with an oVirt host.
I can't list node status with virsh; vdsmd seems to be in a bad state.
I can't even delete the hosted engine, nor ping it, but I can still access
my infra VM inside it 😉
My outputs :
> [root@moe ~]# /usr/sbin/ovirt-hosted-engine-cleanup
> This will de
Hi,
Our self-hosted engine has been accidentally shut down by a teammate and
now I'm trying hard to get it back up without success.
I've tried the --vm-start command but it says the VM is in WaitForLaunch
status.
I've set the global maintenance mode but it does nothing.
root@milhouse-main ~]# h
able as brick?
>
> gluster volume status engine
>
> Br
> Marcel
>
> Am 18. Mai 2021 23:37:38 MESZ schrieb Edward Berger :
>>
>> With all the other VMs paused, I would guess all the VM disk image
>> storage is offline or unreachable
>> from the hypervisor.
>&g
Hi,
I understood we have to put hosted-engine into maintenance mode and
shut down the VMs if we want to power off the host (otherwise it will
reboot). I'm setting up a process to
- put the HostedEngine in global maintenance mode (*hosted-engine
--set-maintenance --mode=global*)
- shut down the Host
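The steps above can be sketched as an ordered command list. This is a dry-run sketch: the flags are the real hosted-engine CLI options, but the exact sequence is my reading of the process described here.

```python
# Power-off sequence for a self-hosted-engine host, as ordered commands.
SHUTDOWN_STEPS = [
    ["hosted-engine", "--set-maintenance", "--mode=global"],  # keep HA agents from restarting the engine
    ["hosted-engine", "--vm-shutdown"],                       # cleanly stop the engine VM
    ["shutdown", "-h", "now"],                                # finally power off the host
]

for cmd in SHUTDOWN_STEPS:
    print(" ".join(cmd))  # dry run; use subprocess.run(cmd, check=True) on the host
```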
Hi,
The problem is solved: the option to make a VM start automatically is to mark
it as highly available. And yes, I solved the same issue with the infra VM
during my first days here.
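For automation, that same flag can be set with the ovirt_vm Ansible module; a hedged sketch (the VM name is a placeholder, and it assumes a prior ovirt_auth login):

```yaml
- name: Mark the VM highly available so the engine restarts it automatically
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: myvm
    high_availability: true
```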
Best regards.
On Fri, Jul 9, 2021 at 10:01 PM, Eugène Ngontang
wrote:
> Hi,
>
> I understood we have
Hi Didi,
Yes, I opened another thread where I indicated that I solved the issue by
marking BipIp HA.
Thx.
Eugène NG
On Sun, Jul 18, 2021 at 11:17 AM, Yedidyah Bar David
wrote:
> On Fri, Jul 9, 2021 at 6:45 PM Eugène Ngontang
> wrote:
> >
> > Hi,
> >
> >
-Darrell
>
> On Feb 1, 2022, at 8:34 AM, Eugène Ngontang wrote:
>
> Hi,
>
> I'm using an AWS EC2 bare-metal install to deploy RHV-M in order to create
> and test NVIDIA GPU VMs.
>
> I'm trying to deploy a self-hosted engine, version 4.4.
> I've se
Hi,
I've set up an *RHVM/oVirt* host on AWS using a bare-metal instance.
Everything is working, but now I would like to give VMs created inside this
host direct internet access. Currently those VMs reach the internet through
an SSH-forwarded Squid proxy.
I can’t find the way to set that direct int
heck the log recommended and do a simple test:
> sudo -u vdsm touch /rhev/...path/to/storage/somefile
>
> Usually many users set anonuid/anongid to 36 in the NFS export and force
> 'all_squash'.
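For illustration, an export entry along those lines (the path is hypothetical; 36 is the vdsm uid and kvm gid on oVirt hosts):

```
/exports/ovirt-storage  *(rw,sync,all_squash,anonuid=36,anongid=36)
```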
>
> Best Regards,
> Strahil Nikolov
>
> On Sun, Mar 13,
v.../.../.../somefile bs=4M count=1 ->
> will write a file inside the mountpoint of your storage , as the user vdsm
> (just like oVirt does)
>
> Best Regards,
> Strahil Nikolov
>
> On Tue, Mar 15, 2022 at 0:59, Eugène Ngontang
> wrote:
> __
also get into the path.
>
> Have you checked whether the storage is already mounted under /rhev/ ?
>
> Best Regards,
> Strahil Nikolov
>
> On Tue, Mar 15, 2022 at 12:23, Eugène Ngontang
> wrote:
> ___
> Users mailing li
sys,clientaddr=172.31.21.171,local_lock=none,addr=172.31.81.195)
Regards,
Eugène NG
On Tue, Mar 15, 2022 at 1:55 PM, Eugène Ngontang
wrote:
> This screenshot shows the output of the `mount -l` command.
>
> On Tue, Mar 15, 2022 at 1:52 PM, Eugène Ngontang
> wrote:
>
>> No @Strahil
hugetlbfs
> (rw,relatime,seclabel,pagesize=1024M)*
> *172.31.81.195:/home/ec2-user/export on
> /rhev/data-center/mnt/172.31.81.195:_home_ec2-user_export type nfs4
> (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,
r@ip-172-31-21-171 ~]$
So I'll clean up the deployment and run it again.
Let me know in the meanwhile if you have any other idea.
Eugène NG
On Tue, Mar 15, 2022 at 2:33 PM, Eugène Ngontang
wrote:
> I unmounted my home *export* folder, but I still have the same error:
>
>
>>
>
est Regards,
> Strahil Nikolov
>
> On Thu, Mar 17, 2022 at 20:47, Eugène Ngontang
> wrote: