Start is not an option.
It notes two bricks, but the command line denotes three bricks, all present.
[root@odin thorst.penguinpages.local:_vmstore]# gluster volume status data
Status of volume: data
Gluster process TCP Port RDMA Port Online Pid
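A minimal CLI sketch (assuming the volume name "data" from the status command above, and mirroring the "force start" advice given later in this thread):
[root@odin ~]# gluster volume start data force
[root@odin ~]# gluster volume status data
All three bricks should then report Online "Y".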
Agreed about an NVMe card being put under mpath control.
I have not even gotten to that volume / issue. My guess is something
weird in the CentOS 4.18.0-193.19.1.el8_2.x86_64 kernel with NVMe block
devices.
I will post once I cross the bridge of getting standard SSD volumes working.
On Mon, Sep 21,
Well... knowing how to do it with curl is helpful... but I think I did
[root@odin ~]# curl -s -k --user admin@internal:blahblah
https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ |grep
''
data
hosted_storage
ovirt-image-repository
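A slightly fuller sketch of the same query (the grep pattern here is an assumption; the API returns XML by default, so matching '<name>' pulls out the storage domain names shown above):
curl -s -k --user admin@internal:blahblah \
  https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ | grep '<name>'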
What I guess I did is
On 2020/9/21 14:09, Yedidyah Bar David wrote:
On Fri, Sep 18, 2020 at 3:50 AM Adam Xu wrote:
On 2020/9/17 17:42, Yedidyah Bar David wrote:
On Thu, Sep 17, 2020 at 11:57 AM Adam Xu wrote:
On 2020/9/17 16:38, Yedidyah Bar David wrote:
On Thu, Sep 17, 2020 at 11:29 AM Adam Xu wrote:
On 2020/9/17
When I activate the ovirt-engine cert, I can access the ovirt-engine web page.
I have checked the log files. I have 4 hosts, and all of them show "ERROR
[org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable
to process messages: Received fatal alert: certificate_expired"
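A hedged way to confirm which certificates actually expired (the paths below are the usual oVirt locations and may vary by version):
# on the engine
openssl x509 -enddate -noout -in /etc/pki/ovirt-engine/certs/engine.cer
# on each host
openssl x509 -enddate -noout -in /etc/pki/vdsm/certs/vdsmcert.pem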
Interestingly, I don't find anything recent, only this one:
https://devblogs.microsoft.com/oldnewthing/20120511-00/?p=7653
Can you check if anything in the OS was updated/changed recently?
Also check if the VM has nested virtualization enabled.
Best Regards,
Strahil Nikolov
On
Yes, I can. The host which does not host the HE could be reinstalled
successfully in the web UI. After this was done, nothing changed.
On 2020-09-22 03:08:18, "Strahil Nikolov" wrote:
>Can you put 1 host in maintenance and use the "Installation" -> "Reinstall"
>and enable the HE
Thanks for the reply. I read this late at night and assumed the "engine url" meant
the old KVM system... but it actually refers to the oVirt engine. I then followed your
helpful notes... but am likely missing some parameter.
#
# Install import client
dnf install
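A minimal sketch of the likely intent (the package name is inferred from the example-script path used later in this thread and is an assumption):
dnf install python3-ovirt-engine-sdk4
# the upload example then lives at:
# /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py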
You could try setting the host to maintenance (with the "stop gluster" option checked),
then re-activate the host, or try restarting the glusterd service on the host.
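A minimal sketch of the second option (run on the affected host):
systemctl restart glusterd
gluster peer status   # confirm the node rejoins the trusted pool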
On Mon, Sep 21, 2020 at 2:52 PM Jeremey Wise wrote:
>
> oVirt engine shows one of the gluster servers having an issue. I did a
> graceful shutdown of
On Mon, Sep 21, 2020 at 8:37 PM penguin pages wrote:
>
>
> I pasted old / file path not right example above.. But here is a cleaner
> version with error i am trying to root cause
>
> [root@odin vmstore]# python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url
>
I pasted an old / incorrect file path in the example above... but here is a cleaner
version with the error I am trying to root cause:
[root@odin vmstore]# python3
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url
https://ovirte01.penguinpages.local/ --username admin@internal
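For reference, a fuller hypothetical invocation; only --engine-url and --username are confirmed by the thread, while the remaining option names, paths, and the storage-domain name are assumptions and may differ by SDK version (run the script with -h to list the exact options):
python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
  --engine-url https://ovirte01.penguinpages.local/ \
  --username admin@internal \
  --password-file /root/engine-password.txt \
  --cafile /root/ovirte01-ca.pem \
  --sd-name data \
  /path/to/vm-disk.qcow2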
oVirt engine shows one of the gluster servers having an issue. I did a
graceful shutdown of all three nodes over the weekend, as I had to move around
some power connections in prep for a UPS.
They came back up... but
[image: image.png]
And this is reflected in 2 bricks online (should be three for
I tried to use AAA mapping, but I get this message:
Sep 21, 2020 2:12:05 PM org.ovirt.engine.exttool.aaa.AAAServiceImpl run
INFO: Iteration: 0
Sep 21, 2020 2:12:05 PM org.ovirt.engine.exttool.core.ExtensionsToolExecutor
main
SEVERE: Extension mapping could not be found
Sep 21, 2020 2:12:05 PM
On Mon, Sep 21, 2020 at 17:13 Gianluca Cecchi <
gianluca.cec...@gmail.com> wrote:
>
> On Thu, Sep 17, 2020 at 4:06 PM Lev Veyde wrote:
>
>> The oVirt project is excited to announce the general availability of
>> oVirt 4.4.2 , as of September 17th, 2020.
>>
>>
>>
> [snip]
>
>>
On Thu, Sep 17, 2020 at 4:06 PM Lev Veyde wrote:
> The oVirt project is excited to announce the general availability of oVirt
> 4.4.2 , as of September 17th, 2020.
>
>
>
[snip]
> oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only) will
> be released separately due to a blocker
Hi Everyone,
In a test environment I'm trying to deploy a single node self-hosted engine 4.4
on CentOS 8 from a 4.3 backup. The actual setup is:
- node1 with CentOS 7, oVirt 4.3 with a working SH engine. The data domain is a
local NFS;
- node2 with CentOS 8, where we are trying to deploy the
On Mon, Sep 21, 2020 at 5:12 PM Gianluca Cecchi
wrote:
>
> On Thu, Sep 17, 2020 at 4:06 PM Lev Veyde wrote:
>
>> The oVirt project is excited to announce the general availability of
>> oVirt 4.4.2 , as of September 17th, 2020.
>>
>>
>>
> [snip]
>
>> oVirt Node 4.4 based on CentOS Linux 8.2
On Mon, Sep 21, 2020 at 5:26 PM Sandro Bonazzola
wrote:
>
>
> On Mon, Sep 21, 2020 at 17:13 Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>>
>> On Thu, Sep 17, 2020 at 4:06 PM Lev Veyde wrote:
>>
>>> The oVirt project is excited to announce the general availability of
That's quite strange.
Any errors/clues in the Engine's logs ?
Best Regards,
Strahil Nikolov
On Monday, September 21, 2020, 05:58:35 GMT+3, ddqlo
wrote:
So strange! After I set global maintenance, powered off and started the HE, the CPU
of the HE became 'Westmere' (did not change
Can you put 1 host in maintenance and use the "Installation" -> "Reinstall" and
enable the HE deployment from one of the tabs?
Best Regards,
Strahil Nikolov
On Monday, September 21, 2020, 06:38:06 GMT+3, ddqlo
wrote:
So strange! After I set global maintenance, powered
What type of disks are you using? Any chance you use thin disks?
Best Regards,
Strahil Nikolov
On Monday, September 21, 2020, 07:20:23 GMT+3, Vinícius Ferrão via
Users wrote:
Hi, sorry to bump the thread.
But I'm still having this issue on the VM. These crashes are still
Hey Eyal,
it's really irritating that only ISOs can be imported as disks.
I had to:
1. Delete the snapshot (but I really wanted to keep it)
2. Detach all disks from the existing VM
3. Delete the VM
4. Import the VM from the data domain
5. Delete the snapshot, so the disks from the data domain are "in sync"
Usually gluster has a 10% reserve defined in the 'cluster.min-free-disk' volume
option.
You can power off the VM, then set cluster.min-free-disk
to 1% and immediately move any of the VM's disks to another storage domain.
Keep in mind that filling your bricks is bad, and if you eat that reserve,
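A minimal sketch of that sequence (the volume name is a placeholder):
gluster volume set <VOLNAME> cluster.min-free-disk 1%
# move the VM's disks to another storage domain, then restore the reserve
gluster volume set <VOLNAME> cluster.min-free-disk 10%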
Usually libvirt's log might provide hints (if not definitive clues) about any issues.
For example:
/var/log/libvirt/qemu/.log
Has anything changed recently (maybe the oVirt version was upgraded)?
Best Regards,
Strahil Nikolov
On Monday, September 21, 2020, 23:28:13 GMT+3, Vinícius Ferrão
Strahil, thank you man. We finally got some output:
2020-09-15T12:34:49.362238Z qemu-kvm: warning: CPU(s) not present in any NUMA
nodes: CPU 10 [socket-id: 10, core-id: 0, thread-id: 0], CPU 11 [socket-id: 11,
core-id: 0, thread-id: 0], CPU 12 [socket-id: 12, core-id: 0, thread-id: 0],
CPU 13
I was using the old config file.
Here's the new one explained here:
https://www.ovirt.org/documentation/administration_guide/
Example 21. Example authentication mapping configuration file
ovirt.engine.extension.name = example-http-mapping
ovirt.engine.extension.bindings.method = jbossmodule
Have you tried to upload your qcow2 disks via the UI?
Maybe you can create a blank VM (same size of disks) and then replace the
disk with your qcow2 from KVM (this works only for file-based storage like
Gluster/NFS).
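A very rough sketch of that disk swap, assuming a file-based (Gluster/NFS) domain; the path layout is the usual oVirt mount point but the placeholders are assumptions, and the replacement must match the format and size the engine recorded for the blank disk:
# with the blank VM powered off, locate its disk image under the storage domain mount
ls /rhev/data-center/mnt/<storage>/<sd-uuid>/images/<disk-uuid>/
# overwrite the blank image with the source qcow2, keeping the same file name
qemu-img convert -O qcow2 source-from-kvm.qcow2 <path-to-blank-image>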
Best Regards,
Strahil Nikolov
On Monday, September 21, 2020,
For some OS versions, oVirt's behavior is accurate, but for other
versions it's not.
I think it is more accurate to say that oVirt improperly calculates memory
for SLES 15/openSUSE 15.
I would open a bug at bugzilla.redhat.com .
Best Regards,
Strahil Nikolov
On
Just select the volume and press "Start". It will automatically mark "force
start" and will fix itself.
Best Regards,
Strahil Nikolov
On Monday, September 21, 2020, 20:53:15 GMT+3, Jeremey Wise
wrote:
oVirt engine shows one of the gluster servers having an issue. I
Why is your NVMe under multipath? That doesn't make sense at all.
I have modified my multipath.conf to block all local disks. Also, don't forget
the '# VDSM PRIVATE' line somewhere near the top of the file.
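A minimal multipath.conf sketch along those lines (the devnode regex is an assumption; a wwid-based blacklist of each local disk also works):
# VDSM PRIVATE
blacklist {
    devnode "^nvme[0-9]"
}
Run "multipathd reconfigure" after editing so the change takes effect.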
Best Regards,
Strahil Nikolov
On Monday, September 21, 2020, 09:04:28
On Mon, Sep 21, 2020 at 11:18 AM momokch--- via Users wrote:
>
> What you have done???
> i just regenerate the ovirt-engine cert according to the link below
> https://lists.ovirt.org/pipermail/users/2014-April/023402.html
>
>
> # cp -a /etc/pki/ovirt-engine "/etc/pki/ovirt-engine.$(date
>
Hi Stranhil,
Maybe those VMs has more disks on different data storage domains?
If so, those VMs will remain on the environment with the disks that are not
based on the detached storage-domain.
You can try to import the VM as partial, another option is to remove the VM
that remained in the
What have you done???
I just regenerated the ovirt-engine cert according to the link below:
https://lists.ovirt.org/pipermail/users/2014-April/023402.html
# cp -a /etc/pki/ovirt-engine "/etc/pki/ovirt-engine.$(date
"+%Y%m%d")"
# SUBJECT="$(openssl x509 -subject -noout -in
On Saturday, September 19, 2020 5:58:43 CEST Jeremey Wise wrote:
> [image: image.png]
>
> vdo: ERROR - Device /dev/sdc excluded by a filter
>
> [image: image.png]
When does this error happen? When you install oVirt HCI?
> Where is it getting this filter?
> I have done gdisk /dev/sdc (new 1TB drive) and
On Mon, Sep 21, 2020 at 2:45 AM momokch--- via Users wrote:
>
> hello everyone,
>
> I apologize for asking what is probably a very basic question.
> I have 4 hosts running on the ovirt-engine, but in the last week the alerts
> show that all the hosts' status has gone to non-responsive, like the
Hey!
Can you try editing the LVM cache filter and including the sdc multipath in
the filter?
I see that it is missing, hence the error that sdc is excluded.
Add "a|^/dev/sdc$|" to the LVM filter and try again.
Thanks
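A hypothetical sketch of what the resulting filter line in /etc/lvm/lvm.conf might look like (keep any existing accept entries; only the sdc entry comes from the advice above, the rest are placeholders):
filter = [ "a|^/dev/sdc$|", "a|^/dev/mapper/.*|", "r|.*|" ]
Depending on the setup, regenerating the initramfs (dracut -f) may be needed for the filter to apply at boot.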
On Mon, Sep 21, 2020 at 11:34 AM Jeremey Wise
wrote:
>
>
>
> [image:
I rebuilt my lab environment, and there are four or five VMs that would really
help if I did not have to rebuild them.
oVirt, as I am now finding, lays out the infrastructure it creates such
that I cannot just use the older means of placing .qcow2 files in folders and
.xml files in other folders.
[image: image.png]
vdo: ERROR - Device /dev/sdc excluded by a filter
[image: image.png]
Other server
vdo: ERROR - Device
/dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
excluded by a filter.
All systems when I go to create VDO
On Fri, Sep 18, 2020 at 3:50 AM Adam Xu wrote:
>
>
> On 2020/9/17 17:42, Yedidyah Bar David wrote:
> > On Thu, Sep 17, 2020 at 11:57 AM Adam Xu wrote:
> >>
> >> On 2020/9/17 16:38, Yedidyah Bar David wrote:
> >>> On Thu, Sep 17, 2020 at 11:29 AM Adam Xu wrote:
> On 2020/9/17 15:07, Yedidyah Bar
On Mon, Sep 21, 2020 at 9:02 AM Jeremey Wise wrote:
>
>
>
>
>
>
> vdo: ERROR - Device /dev/sdc excluded by a filter
>
>
>
>
>
> Other server
>
> vdo: ERROR - Device
> /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
> excluded by a
On Mon, Sep 21, 2020 at 9:11 AM Jeremey Wise wrote:
>
>
> I rebuilt my lab environment. And their are four or five VMs that really
> would help if I did not have to rebuild.
>
> oVirt as I am now finding when it creates infrastructure, sets it out such
> that I cannot just use older means of
Hello,
I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS storage domain brick on a dedicated filesystem serving
only one VM.
The VM filled all of the domain storage.
The Linux filesystem shows 4.1G available and 100% used; the mounted brick shows
0GB available and 100% used.
I can
On Sun, Sep 20, 2020 at 11:21 AM Gilboa Davara wrote:
> On Sat, Sep 19, 2020 at 7:44 PM Arik Hadas wrote:
> >
> >
> >
> > On Fri, Sep 18, 2020 at 8:27 AM Gilboa Davara wrote:
> >>
> >> Hello all (and happy new year),
> >>
> >> (Note: Also reported as
>
Hi,
I think I already checked that.
What I meant (from the beginning) was that oVirt reports memory usage in the GUI
the same way regardless of CentOS or SLES, in our case.
My main question is why oVirt reports the memory usage percentage based on
"free" memory and not actually based on "available"
Old system:
Three servers, CentOS 7 -> lay down VDO (dedup / compression), add those
VDO volumes as bricks to Gluster.
New cluster (remove boot drives and run a wipe of all data drives).
Goal: use the first 512GB drives to ignite the cluster, get things on their feet,
and stage infrastructure things. Then
On Mon, Sep 21, 2020 at 3:30 PM Jeremey Wise wrote:
>
>
> Old System
> Three servers.. Centos 7 -> Lay down VDO (dedup / compression) add those VDO
> volumes as bricks to gluster.
>
> New cluster (remove boot drives and run wipe of all data drives)
>
> Goal: use first 512GB Drives to ignite the
Ugh... this is bad.
On the hypervisor where the files are located...
My customers send me tar files with VMs all the time. And I send them.
This will make it much more difficult if I can't import xml / qcow2.
This cluster is my home cluster, and so... three servers... and they were