Fixed that, thanks. It still fails.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
Hi,
Gluster will not set up and fails... can anyone see why?
/etc/hosts is set up for both the backend Gluster network and the front end; LAN DNS is also set up on the subnet for the front end.
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] **
task path:
No,
Is this the issue?
1.32 TiB  Inactive volume  ovirt-node-ng-4.3.6-0.20190926.0
___
Logical Volumes > Create new Logical Volume
1.35 TiB  Pool for Thin Volumes  pool00
1 GiB     ext4 File System       /dev/onn_ovirt1/home
1.32 TiB  Inactive volume        ovirt-node-ng-4.3.6-0.20190926.0
___
[root@ovirt1 ~]# lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda        8:0    0  1.5T  0 disk
├─sda1     8:1    0    1G  0 part /boot
Can I create that from the current SSD storage I have, or do I need to reinstall?
___
I have set up a 3 node system.
Gluster has its own backend network and I have tried entering the FQDN hosts
via ssh as follows...
gfs1.gluster.private  10.10.45.11
gfs2.gluster.private  10.10.45.12
gfs3.gluster.private  10.10.45.13
I entered these in /etc/hosts:
127.0.0.1 localhost
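For reference, /etc/hosts expects the IP address first, then the hostname(s). With the addresses from this thread the entries would look like the following (the short aliases are an assumption, not from the original message):

```
127.0.0.1    localhost
10.10.45.11  gfs1.gluster.private gfs1
10.10.45.12  gfs2.gluster.private gfs2
10.10.45.13  gfs3.gluster.private gfs3
```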
So using either the FQDN or the IP I get this:
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:4
fatal: [10.10.45.13]: UNREACHABLE! => {"changed": false, "msg": "Failed to
connect to the host via ssh: Permission denied
(publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
[root@ovirt3 ~]# nmcli general hostname
ovirt3.kvm.private
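The "Permission denied (publickey,...)" above usually means the deploy host's key has not been installed in the peers' authorized_keys. A minimal sketch, assuming root login over the backend network; the key path and hostnames are taken as examples, and the lines that contact real hosts are shown commented:

```shell
# Start clean so ssh-keygen does not prompt to overwrite.
rm -f /tmp/gluster_deploy_key /tmp/gluster_deploy_key.pub
# Generate a dedicated key pair for the deployment (no passphrase).
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/gluster_deploy_key -q
# Push the public key to each peer over the Gluster backend network:
# ssh-copy-id -i /tmp/gluster_deploy_key.pub root@gfs2.gluster.private
# ssh-copy-id -i /tmp/gluster_deploy_key.pub root@gfs3.gluster.private
# Then verify passwordless login works before re-running the wizard:
# ssh -i /tmp/gluster_deploy_key root@gfs3.gluster.private hostname
```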
___
Yes, see below...
I am still getting "FQDN is not added in known_hosts" on the Additional hosts screen...
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter
out any that are already installed
/usr/bin/ssh-copy-id:
OK, so I found that even though DNS was set correctly, having put IP addresses in Additional hosts and added them to /etc/hosts, deployment does not immediately fail...
However, it does fail...
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] **
task path:
I have 3 nodes set up with a 2 TB partition on each for the Gluster setup.
The drives are all SSDs; should I enable DM-VDO?
Has anyone got a simple pro/con list?
I understand responses are slower if enabled, and there is some CPU hit; both are acceptable.
Thanks
I have set up 3 nodes with a separate volume for Gluster. I have set up the two networks, DNS works fine, SSH has been set up for Gluster, and you can log in via ssh to the other two hosts from the host used for setup.
When going to Virtualisation > Setup Gluster and Hosted Engine, only a single
Hi, engine deployment fails here...
[ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
"[Unexpected exception]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
I have set up 3 new servers and, as you can see, Gluster is working well; however, the hosted-engine deployment fails.
Can anyone suggest a reason?
I have wiped and set up all three servers again and set up Gluster first.
This is the Gluster config I have used for the setup.
Please review the
Hi,
I believe I need to create a storage block, which I was unaware of, as I thought one would be able to use part of the free space on the disks automatically provisioned by the node installer. I believe this requires a reinstall and creation of a new volume, or reducing the current size of the
So...
I have got to the last step:
3 machines with Gluster storage configured; however, at the last screen, Deploying the Engine to Gluster, the wizard does not auto-fill the two fields
Hosted Engine Deployment
Storage Connection
and
Mount Options
I also had to expand /tmp as it was
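For a replica-3 Gluster engine volume those two fields are typically filled in as follows. The volume name `engine` and the hostnames (taken from earlier in this thread) are assumptions; adjust to the actual volume and backend hostnames:

```
Storage Connection: gfs1.gluster.private:/engine
Mount Options:      backup-volfile-servers=gfs2.gluster.private:gfs3.gluster.private
```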
I am using
4.3.6
I got deployment working but failed at the last step
I detailed that in this thread
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NL4HS6MIKWQAGI36NMSXGESBMB433SPL/
___
Sorry, not sure what happened there. I opened a new thread for the last issue I have above...
I am using 4.3.6
___
Nope, no filters there.
___
Full log of point of failure
TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] **
task path:
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:9
failed: [gfs1.gluster.private] (item={u'writepolicy': u'auto', u'name':
u'vdo_sdb',
Gluster fails with
vdo: ERROR - Device /dev/sdb excluded by a filter.\n",
However, I have run:
[root@ovirt1 ~]# vdo create --name=vdo1 --device=/dev/sdb --force
Creating VDO vdo1
Starting VDO vdo1
Starting compression on VDO vdo1
VDO instance 1 volume is ready at /dev/mapper/vdo1
[root@ovirt1 ~]#
So no filters in there...
Also, the Gluster / Engine wizard only shows setup for a single node.
I have 3 nodes in the Dashboard and have shared passwordless SSH keys between the host and the other two hosts via the backend Gluster network.
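For completeness, an LVM device filter lives in /etc/lvm/lvm.conf under the `devices` section. A line that would exclude /dev/sdb looks like the following; it is shown only as what to search for, and its absence matches what is reported above:

```
devices {
    # "r|...|" rejects matching devices, "a|.*|" accepts everything else.
    filter = [ "r|/dev/sdb|", "a|.*|" ]
}
```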
___
OK, so I found that the system was set to MBR...
There are no filters in lvm.conf.
However, disk creation still fails at VDO disk creation with the same error...
I have set the disk format to
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation
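Besides the partition table, "excluded by a filter" can also be caused by a leftover filesystem or partition signature on the device. Wiping signatures clears it; this is destructive on a real disk, so the sketch below demonstrates on a scratch image file and only shows the real-device form as a comment:

```shell
# Stand-in for the block device; on a node you would target /dev/sdb.
truncate -s 4M /tmp/scratch.img
# Remove any filesystem/partition-table signatures from the image.
# '|| true' keeps the sketch harmless in environments without wipefs.
wipefs --all /tmp/scratch.img || true
# Real-device equivalent (DESTROYS DATA on /dev/sdb):
# wipefs --all /dev/sdb
```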
I deployed GlusterFS by first creating manually, on each host, the VDO volume that Gluster had tried and failed to create:
vdo create --name=vdo_sdb --device=/dev/sdb --force
Re-deploying via the Gluster wizard then completed without error.
___
I have removed all deployment of the hosted engine by running the following
commands.
ovirt-hosted-engine-cleanup
vdsm-tool configure --force
systemctl restart libvirtd
systemctl restart vdsm
On my hosts I have the following; ovirt1 is the host I ran the hosted-engine setup from.
I have set the
I have set up 3 SuperMicros with oVirt Node and all is pretty sweet.
FQDN is set up for the LAN, and after setup I enabled a second NIC with an FQDN for a Gluster network.
The issue is that the second ports seem to be unavailable for network access by ping, or for login; if you log in as root on the system