On February 15, 2020 12:25:25 AM GMT+02:00, eev...@digitaldatatechs.com wrote:
>The SDM is on an NFS share on a local disk instead of iSCSI LUNs. I
>could care less which is the SDM as long as my VM disks are on the
>iSCSI LUNs. That's all I want to accomplish.
>Reading the documentation, I
The SDM is on an NFS share on a local disk instead of iSCSI LUNs. I could care
less which is the SDM as long as my VM disks are on the iSCSI LUNs. That's all
I want to accomplish.
Reading the documentation, I understood that the SDM would hold all disks, VM
info, etc. That's why I wanted to
Hi Eric-
Glad you got through that part. I don’t use iSCSI-backed volumes for my gluster
storage, so I don’t have much advice for you there. I’ve cc’d the oVirt users
list back in; someone there may be able to help you further. It’s good practice
to reply to the list and specific people when
I originally set my SDM on a 4TB iSCSI LUN, but oVirt moved it to a RAID 5
internal disk on one of the hosts. How can I force oVirt to make the 4TB LUN
the storage domain master? That’s where I want the VMs to reside.
Eric Evans
Digital Data Services LLC.
304.660.9080
-----Original Message-----
Are there any known issues with running the Zabbix agent on either the Hosted
Engine (4.3.8) or oVirt Nodes (4.3.8)? I'd like to install the agent without
crashing my hosting environment.
___
Users mailing list -- users@ovirt.org
To unsubscribe
On February 14, 2020 6:53:47 PM GMT+02:00, Darrell Budic
wrote:
>You can add it in to a running oVirt cluster, it just isn’t as
>automatic. First you need to enable Gluster at the cluster settings
>level for a new or existing cluster. Then either install/reinstall your
>nodes, or install
On February 14, 2020 4:19:53 PM GMT+02:00, "Vrgotic, Marko"
wrote:
>Good answer Strahil,
>
>Thank you, I forgot.
>
>Libvirt logs are actually showing the reason why:
>
>2020-02-14T12:33:51.847970Z qemu-kvm: -drive
You can add it in to a running oVirt cluster, it just isn’t as automatic. First
you need to enable Gluster at the cluster settings level for a new or
existing cluster. Then either install/reinstall your nodes, or install gluster
manually and add the vdsm-gluster packages. You can create a stand
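The manual route mentioned above could look roughly like this (a hedged sketch, assuming CentOS/EL7 package names; treat it as an outline rather than a verified recipe):

```shell
# Install Gluster and the vdsm integration package on an existing EL7 host,
# then start the gluster daemon so oVirt/vdsm can manage gluster volumes.
yum install -y glusterfs-server vdsm-gluster
systemctl enable --now glusterd
```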
Thanks, Fredy, for your great help. Setting the Banner and PrintMotd options
on all 3 nodes let me succeed with the installation.
On Fri., Feb. 14, 2020 at 16:23, Fredy Sanchez <
fredy.sanc...@modmed.com> wrote:
> Banner none
> PrintMotd no
>
> # systemctl restart sshd
>
That should be
Banner none
PrintMotd no
# systemctl restart sshd
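A minimal sketch of applying that fix non-interactively (assuming a stock /etc/ssh/sshd_config; the sed patterns only rewrite lines that already mention Banner or PrintMotd, commented or not):

```shell
# Disable the SSH banner and MOTD that interfere with oVirt's host checks.
# Rewrites existing (possibly commented-out) Banner/PrintMotd lines in place.
sudo sed -i -e 's/^#\?Banner .*/Banner none/' \
            -e 's/^#\?PrintMotd .*/PrintMotd no/' /etc/ssh/sshd_config
sudo systemctl restart sshd
```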
If gluster installed successfully, you don't have to reinstall it.
Just run the hyperconverged install again from cockpit, and it will detect
the existing gluster install, and ask you if you want to re-use it;
re-using worked for me. Only thing
Good answer Strahil,
Thank you, I forgot.
Libvirt logs are actually showing the reason why:
2020-02-14T12:33:51.847970Z qemu-kvm: -drive
On Fri., Feb. 14, 2020 at 12:21, Fredy Sanchez <
fredy.sanc...@modmed.com> wrote:
> Hi Florian,
>
> In my case, Didi's suggestions got me thinking, and I ultimately traced
> this to the ssh banners; they must be disabled. You can do this in
> sshd_config. I do think that logging could be
I currently have 3 nodes, one engine node and 2 CentOS 7 hosts, and I
plan to add another CentOS 7 KVM host once I get all the VMs migrated. I have
SAN storage plus the RAID 5 internal disks. All OSes are installed on mirrored
SAS RAID 1. I want to use the RAID 5 VDs as exports, ISO and
On February 14, 2020 3:10:12 PM GMT+02:00, "Josep Manel Andrés Moscardó"
wrote:
>Hi,
>I have seen on the website that there are some companies offering
>support, but judging by how out of date some of their websites are, it
>doesn't look like they are still active.
>
>Does anyone know of companies providing
Hi,
I have seen on the website that there are some companies offering
support, but judging by how out of date some of their websites are, it
doesn't look like they are still active.
Does anyone know of companies providing professional support right now?
And also if someone has experience with Bobcare I
On February 14, 2020 2:47:04 PM GMT+02:00, "Vrgotic, Marko"
wrote:
>Dear oVirt,
>
>I have a problem migrating the HostedEngine, my only HA VM, to other HA
>nodes.
>
>Bit of background story:
>
> * We have oVirt SHE 4.3.5
> * Three Nodes act as HA pool for SHE
> * Node 3 is currently
Dear oVirt,
I have a problem migrating the HostedEngine, my only HA VM, to other HA nodes.
Bit of background story:
* We have oVirt SHE 4.3.5
* Three Nodes act as HA pool for SHE
* Node 3 is currently Hosting SHE
* Actions:
* Put Node1 in Maintenance mode, all VMs were
Hey folks,
thank you all for your replies. Turns out that in my use case creating a
gluster volume manually, moving the DC folder back into gluster and
importing the domain in oVirt Engine did the trick.
The subsequent import of the VMs was a breeze and worked seamlessly. I am
truly
Hi Florian,
In my case, Didi's suggestions got me thinking, and I ultimately traced
this to the ssh banners; they must be disabled. You can do this in
sshd_config. I do think that logging could be better for this issue, and
that the host up check should incorporate things other than ssh, even if
I'm also stuck on that issue.
I have
3x HP ProLiant DL360 G7
1x 1gbit => as control network
3x 1gbit => bond0 as Lan
2x 10gbit => bond1 as gluster network
I installed oVirt Node 4.3.8 on all 3 servers,
configured the networks using Cockpit, and
followed this guide for the gluster setup with
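For reference, a bonded network like the one in that layout could be created with nmcli roughly as follows (a hypothetical sketch: the NIC names ens1f0/ens1f1 are placeholders, and the bonding mode must match the switch configuration):

```shell
# Hypothetical sketch: build bond1 (the gluster network) from two 10GbE ports.
# ens1f0/ens1f1 are placeholder interface names; adjust to your hardware.
nmcli con add type bond con-name bond1 ifname bond1 bond.options "mode=802.3ad"
nmcli con add type ethernet con-name bond1-p1 ifname ens1f0 master bond1
nmcli con add type ethernet con-name bond1-p2 ifname ens1f1 master bond1
nmcli con up bond1
```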