Hey!
I think this should work -
gluster volume add-brick test-volume replica 3 newServer1:/brick1 newServer2:/brick1
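If the conversion goes through, it might be worth verifying the new layout
and kicking off a full heal afterwards, something like:

# confirm the volume now shows as replicate, then trigger a heal
gluster volume info test-volume
gluster volume heal test-volume full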
On Wed, Oct 28, 2020 at 4:07 AM marcel d'heureuse
wrote:
> Hi Strahil,
>
> where can I find some documents about the conversion to replica? Does this
> also work for the engine
Hi, Strahil,
Thank you for your reply.
I've tried setting the host to maintenance and the host rebooted immediately.
What does vdsm do when setting a host to maintenance? Thank you
Best Regards
Mark Lee
From: Strahil Nikolov via Users
Date: 2020-10-27 23:44
To: users; lifuqi...@sunyainfo.com
I think I figured this one out. Looks like the disk format changed when
moving to block storage. The VM template could not cope with this change.
I deleted the VM, and attached the disk to a new VM, and everything worked
fine.
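For anyone hitting the same thing: a quick way to see what format the disk
actually ended up in after the move is qemu-img (the path below is only a
placeholder for the storage domain / image / volume UUIDs on your system):

# inspect the image format (raw vs qcow2) after the storage move
qemu-img info /rhev/data-center/mnt/blockSD/SD_UUID/images/IMG_UUID/VOL_UUID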
On Tue, Oct 27, 2020 at 8:41 AM Wesley Stewart wrote:
> This is a
Hi Strahil,
Where can I find some documents about the conversion to replica? Does this
also work for the engine brick?
br
marcel
On 27 October 2020 16:40:59 CET, Strahil Nikolov via Users wrote:
>Hello Gobinda,
>
>I know that gluster can easily convert a distributed volume to a replica
>volume, so
Greetings,
After reverting the ovirt_disk module to ovirt_disk_28, I'm able to get
past that step; however, now I'm running into a new issue.
When it tries to start the VM after moving it from local storage to the hosted
storage, I get the following errors:
2020-10-27 21:42:17,334+
No, it is not a replica gluster volume; it is just one brick, one volume, on a
single storage server.
This is what I get:
# gluster volume set data group virt
volume set: failed: Cannot set cluster.shd-max-threads for a non-replicate
volume.
Since I don't have a replica volume, I cannot optimize it for
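One possible workaround, assuming the glusterfs packages ship the usual group
file under /var/lib/glusterd/groups/virt, is to apply the virt-group options
one by one and skip the replicate-only self-heal settings:

# apply each key=value from the virt group, skipping the self-heal
# options that fail on a non-replicate volume (volume name: data)
grep -v -e shd -e self-heal /var/lib/glusterd/groups/virt | \
while IFS='=' read -r opt val; do
    gluster volume set data "$opt" "$val"
done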
I was left with the impression that you got 2 bricks (2 raids) on a single
server.
As I understand it, you cannot get a 'replica 3 arbiter 1' or a 'replica 3'
volume, right?
When you create your volume, you need to use the UI's dropdown "Optimize for
virt" or the command 'gluster volume
The default value is the one found via "gluster volume set help | grep -A 3
cluster.min-free-disk". In my case it is 10%.
The best option is to use a single brick on either a software or hardware RAID
of disks equal in performance and size. The sizes should be the same, or
gluster will write
Well, each volume has only one brick. Each volume has one dedicated server.
Each server's storage has the disks in RAID 10.
I made the gluster volume like this:
gluster volume create data transport tcp gfs.domain.com:/home/brick1
And then created a data domain in oVirt: gfs.domain.com:/data
So, in oVirt, if I want a single storage, without high availability, what is
the best solution?
Can I use gluster replica 1 for a single storage?
By the way, cluster.min-free-disk is 1%, not 10%, by default. And I still
cannot remove the disk.
I think I will destroy the volume and
Asaf,
That worked. Now on to new problems. :)
Thanks,
Lee
Nope,
officially oVirt supports only replica 3 (replica 3 arbiter 1) or replica 1
(which is actually a single-brick distributed) volumes.
If you have issues related to the Gluster volume - like this case - the
community support will be "best effort".
Best Regards,
Strahil Nikolov
On
Why don't you use the devices of the 2 bricks in a single striped LV or raid0
md device?
Distributed volumes spread the files among the bricks and the performance is
limited to the brick's device speed.
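A rough sketch of the striped-LV variant (device names below are only
examples, and this wipes whatever is currently on the brick devices):

# combine both brick devices into one VG
pvcreate /dev/sdb /dev/sdc
vgcreate gluster_vg /dev/sdb /dev/sdc
# -i 2 stripes the LV across both PVs
lvcreate --type striped -i 2 -I 256k -l 100%FREE -n brick1 gluster_vg
mkfs.xfs -i size=512 /dev/gluster_vg/brick1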
Best Regards,
Strahil Nikolov
On Tuesday, 27 October 2020, 18:09:47 GMT+2,
But there are also distributed volume types, right?
Replicated is for when you want high availability, which is not the case here.
Thanks
José
De: "Strahil Nikolov"
Para: supo...@logicworks.pt
Cc: "users"
Enviadas: Terça-feira, 27 De Outubro de 2020 15:49:22
Assunto: Re: [ovirt-users] Re:
Yes, the VM's disks are on the distributed volume.
They were created like this:
gluster volume create data transport tcp gfs2.domain.com:/home/brick1
De: "Strahil Nikolov"
Para: supo...@logicworks.pt
Cc: "users"
Enviadas: Terça-feira, 27 De Outubro de 2020 15:53:57
Assunto: Re:
Are the VM's disks located on the distributed volume?
Best Regards,
Strahil Nikolov
On Tuesday, 27 October 2020, 17:21:49 GMT+2, supo...@logicworks.pt
wrote:
The engine version is 4.3.9.4-1.el7
I have 2 simple glusterfs setups: a volume with one brick on gfs1 with an old version of
You have exactly 90% used space.
Gluster's default protection value is exactly 10%:
Option: cluster.min-free-disk
Default Value: 10%
Description: Percentage/Size of disk space, after which the process starts
balancing out the cluster, and logs will appear in log files
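To check what a given volume is actually using, and to temporarily lower the
threshold so the removal can proceed (volume name assumed to be "data"):

# show the effective value, then lower it just long enough to free space
gluster volume get data cluster.min-free-disk
gluster volume set data cluster.min-free-disk 5%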
I would recommend you
Hello Gobinda,
I know that gluster can easily convert a distributed volume to a replica
volume, so why is it not possible to first convert to replica and then add the
nodes as HCI?
Best Regards,
Strahil Nikolov
On Tuesday, 27 October 2020, 08:20:56 GMT+2, Gobinda Das
wrote:
The engine version is 4.3.9.4-1.el7
I have 2 simple glusterfs setups: a volume with one brick on gfs1 with an old
version of gluster (3.7.6), and a volume with one brick on gfs2 with version 6.8.
2 hosts: node3 is up to date, node2 is not up to date:
RHEL - 7 – 7.1908.0.el7.centos
CentOS Linux 7 (Core)
This is a new one.
I migrated from an NFS share to an iSCSI share on a small single-node oVirt
system (currently running 4.3.10).
After migrating a disk (Virtual Machine -> Disk -> Move), I was unable to
boot from it. The console tells me "No bootable device". This is a CentOS 7
guest.
I booted
Hi everyone:
I have met the following problem:
Description of problem:
When executing the "reboot" or "shutdown -h 0" command on a vdsm server, the
server takes more than 30 minutes to reboot or shut down. The screen shows
'[FAILED] Failed unmounting /rhev/data-center/mnt/172.18.81.41:_home_nfs_data'.
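A manual workaround along these lines might help until the root cause is
found (service names are the standard vdsm ones; the mount path is taken from
the error above). The supported way is to put the host into maintenance from
the engine first:

# stop vdsm so it releases the mount, then lazy-unmount and reboot
systemctl stop vdsmd supervdsmd
umount -l /rhev/data-center/mnt/172.18.81.41:_home_nfs_data
reboot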
Morning,
I have started to generate the physical bricks and created an ISO brick which
is also available on the single node.
I created the ISO brick as redundant, and I would like to move all data from
the single-node ISO to the redundant ISO. Then I will delete the single-node
ISO and make a new one and add
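One way to copy the ISO files over, assuming both volumes can be mounted side
by side (mount points and volume names below are only placeholders):

# mount old and new ISO volumes, then copy the contents across
mount -t glusterfs server:/iso_single /mnt/iso_old
mount -t glusterfs server:/iso_redundant /mnt/iso_new
rsync -av /mnt/iso_old/ /mnt/iso_new/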
Hello,
I have updated an external engine from 4.3 to 4.4 and the OVN configuration
seems to have been retained:
[root@ovmgr1 ovirt-engine]# ovn-nbctl show
switch fc2fc4e8-ff71-4ec3-ba03-536a870cd483
(ovirt-ovn192-1e252228-ade7-47c8-acda-5209be358fcf)
switch 101d686d-7930-4176-b41a-b306d7c30a1a
The xyz.co.za domain is pointing (a redirect) to the server IP; maybe that is
the problem.
journalctl -u cockpit
-- Logs begin at Tue 2020-10-27 11:59:03 SAST, end at Tue 2020-10-27 12:06:33
SAST. --
Oct 27 12:01:03 node01.xyz.co.za systemd[1]: Starting Cockpit Web Service...
Oct 27 12:01:03
Hi team,
I have a 4-node cluster where on each node I configured 2 dedicated 10-gig
NICs, each with a dedicated subnet (NIC 1 = 10.10.10.0/24, NIC 2 =
10.10.20.0/24), and on the array side I configured 2 targets on 10.10.10.0/24
and another 2 targets on the 10.10.20.0/24 subnet, without any errors.
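For reference, the discovery and session check I would expect on such a setup
looks roughly like this (portal IPs are examples based on the subnets above):

# discover targets over each dedicated subnet
iscsiadm -m discovery -t sendtargets -p 10.10.10.1:3260
iscsiadm -m discovery -t sendtargets -p 10.10.20.1:3260
# verify one session per path
iscsiadm -m session -P 3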
This is a simple installation with one storage and 3 hosts. One volume with 2
bricks; the second brick was added to get more space in order to try to remove
the disk, but without success.
]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md127 50G 2.5G 48G 5% /
devtmpfs 7.7G 0 7.7G 0% /dev
tmpfs 7.7G
I am installing oVirt as a self-hosted engine using the Cockpit web interface,
and AGAIN Cockpit starts with plain HTTP, without the default self-signed cert
(HTTPS).
Any suggestions? I think this is my first problem.
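In case it helps the debugging: Cockpit normally picks its certificate up
from /etc/cockpit/ws-certs.d/, and (assuming a standard Cockpit install)
these commands show what it is actually using:

# show the certificate cockpit-ws has selected, and what is on disk
remotectl certificate
ls -l /etc/cockpit/ws-certs.d/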
Yours Sincerely,
Henni
From: Yedidyah Bar David
Sent: Monday, 26
Hi Gianluca,
happy to hear that your issue was fixed!
Just please be aware that iptables support for hosts has been deprecated
and it's completely unsupported for cluster levels 4.4 and up. So unless
you switch your cluster to firewalld, you will not be able to upgrade your
cluster to 4.4
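If it is easier than the UI, the firewall type can also be flipped per
cluster over the REST API - a sketch, with the engine FQDN, credentials and
cluster id as placeholders:

# switch the cluster's firewall type to firewalld
curl -k -u admin@internal:PASSWORD -X PUT \
  -H 'Content-Type: application/xml' \
  -d '<cluster><firewall_type>firewalld</firewall_type></cluster>' \
  https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID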
Morning Sandro,
we are running oVirt 4.4.2.6-1.el8 with iSCSI storage on CentOS 8. We have 20
VMs on the system. We put all of them into hibernation and only one gave us
this issue.
However, over the weekend we had to reboot this VM, which had the metadata and
memory dump disks, and when we
Hello Marcel,
As a note, you can't expand your single gluster node cluster to 3 nodes;
you can only add compute nodes.
If you want to add compute nodes, you do not need any glusterfs
packages installed. The oVirt packages alone are enough to add a host as a
compute node.
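For completeness, a compute-only host can also be added through the REST API -
a sketch with placeholder names, address, and passwords:

# register a new host into an existing cluster
curl -k -u admin@internal:PASSWORD -X POST \
  -H 'Content-Type: application/xml' \
  -d '<host><name>node2</name><address>node2.example.com</address><root_password>HOSTPASS</root_password><cluster><name>Default</name></cluster></host>' \
  https://engine.example.com/ovirt-engine/api/hosts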
On Tue, Oct 27, 2020