> And of course I want Gluster to switch between single node, replication and dispersion seamlessly and on the fly, as well as much better diagnostic tools.
Actually Gluster can switch from distributed to
replicated/distributed-replicated on the fly.
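For reference, a hedged sketch of what that on-the-fly conversion usually looks like from the Gluster CLI, converting a distributed volume to replicated by raising the replica count while adding matching bricks. The volume name "data" and the brick paths are placeholders, not taken from this thread:

```shell
# Check the current volume layout first
gluster volume info data

# Add one new brick per existing brick while raising the replica count;
# Gluster heals the new bricks in the background
gluster volume add-brick data replica 2 node2:/bricks/data/brick1

# Watch the self-heal catch the new brick up
gluster volume heal data info
```

The same add-brick/replica mechanism is what makes the distributed-to-replicated switch possible without downtime.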
Best Regards,
Strahil Nikolov
Thanks a lot for this feedback!
I've never had any practical experience with Ceph, MooseFS, BeeGFS or Lustre:
GlusterFS to me mostly had the charm of running on 1/2/3 nodes, and then
anything beyond that with balanced benefits in terms of resilience vs.
performance... in theory, of course.
And on
Thank you very much for your story!
It has very much confirmed a few suspicions that have been gathering over the
last... O my God! Has it been two years already?
1. Don't expect plug-and-play, unless you're on SAN or NFS (even HCI doesn't
seem to be in the heart of the oVirt team)
2. Don't expect
Yes, we manage a number of Distributed Storage systems including
MooseFS, Ceph, DRBD and of course Gluster (since 3.3). Each has a
specific use.
For small customer-specific VM host clusters, which is the majority of
what we do, Gluster is by far the safest and easiest to
deploy/understand
On 2020-10-01 04:33, Jeremey Wise wrote:
>
> I have for many years used gluster because..well. 3 nodes.. and so
> long as I can pull a drive out.. I can get my data.. and with three
> copies.. I have much higher chance of getting it.
>
> Downsides to gluster: Slower (its my home..meh... and I
Ceph through an iSCSI gateway is very, very slow.
- Original Message -
From: "Matthew Stier"
To: "Jeremey Wise" , "users"
Sent: Wednesday, September 30, 2020 10:03:34 PM
Subject: [ovirt-users] Re: CEPH - Opinions and ROI
If you can’t go direct, how about round about, with an iSCSI gateway.
Thanks for the response.
Seems a bit too far into "bleeding edge"... such that I should kick the tires
virtually vs. committing plugins to oVirt + Gluster, where upgrades and other
issues may happen. Seems like an alpha stage (no thin provisioning, issues
with deleting volumes, no export / import ..
CEPH requires at least 4 nodes to be "good".
I know that Gluster is not the "favourite child" for most vendors, yet it is
still optimal for HCI.
You can check
https://www.ovirt.org/develop/release-management/features/storage/cinder-integration.html
for cinder integration.
Best Regards,
These are all storage rich servers.
Drives:
- USB 3 64GB: boot / OS
- 512GB SSD: Gluster HCI volumes "engine", "data", "vmstore", "iso" (I added
the last one to... well, to learn if I could extend with LVM ;)
- 1TB SSD: VDO+Gluster manual build, due to brick and fqdn issues in oVirt. It
did import
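Since extending a brick with LVM came up, here is a minimal sketch, assuming the brick sits on an XFS-formatted logical volume with free extents left in its volume group. All names here ("gluster_vg", "gluster_lv", the mountpoint) are placeholder assumptions, not from this thread:

```shell
# Grow the logical volume holding the brick by 100 GiB
lvextend -L +100G /dev/gluster_vg/gluster_lv

# Grow the XFS filesystem to fill the LV (XFS grows online, while mounted)
xfs_growfs /bricks/data
```

XFS can only grow, not shrink, so sizing the initial LV conservatively and extending later, as above, is the usual approach.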
If you can’t go direct, how about round about, with an iSCSI gateway.
From: Jeremey Wise
Sent: Wednesday, September 30, 2020 11:33 PM
To: users
Subject: [ovirt-users] CEPH - Opinions and ROI
I have for many years used gluster because..well. 3 nodes.. and so long as I
can pull a drive out..