Hello,
I wanted to post this as a question to the group before we go launch it in a
test environment. Will Gluster handle enabling sharding on an existing
distributed-replicated environment, and is it safe to do?
The environment in question is a VM image storage cluster with some disk files
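For reference, sharding is a per-volume toggle, and one point worth knowing before testing: enabling it only affects files created (or copied in) afterwards; existing VM images are not split retroactively. A minimal sketch, with an illustrative volume name:

```shell
# illustrative volume name; run on any node in the trusted pool
gluster volume set vmstore features.shard on
# optional: shard size (default 64MB); applies per file at creation time
gluster volume set vmstore features.shard-block-size 64MB
```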
Can you check whether you are hitting
https://bugzilla.redhat.com/show_bug.cgi?id=1512437? Note that the fix is
not backported to the 3.13 branch, but is available in 4.0; see
https://bugzilla.redhat.com/1512437.
On Tue, Apr 3, 2018 at 11:13 PM, Artem Russakovskii
wrote:
>
On Thu, Apr 5, 2018 at 7:33 AM, Ian Halliday wrote:
> Hello,
>
> I wanted to post this as a question to the group before we go launch it in
> a test environment. Will Gluster handle enabling sharding on an existing
> distributed-replicated environment, and is it safe to do?
After updating the test server to 4.0.1, I can indeed confirm that so far
the disappearing directories bug is gone.
On Wed, Apr 4, 2018, 8:29 PM Raghavendra Gowdappa
wrote:
> Can you check whether you are hitting
> https://bugzilla.redhat.com/show_bug.cgi?id=1512437? Note
Hi,
I noticed that when I run gluster volume heal data info, the following
message shows up in the log, along with other stuff:
[dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal
failed: Unable to form layout for directory /
I'm seeing it on Gluster 4.0.1 and 3.13.2.
Here's
On Thu, Apr 5, 2018 at 10:48 AM, Artem Russakovskii
wrote:
> Hi,
>
> I noticed that when I run gluster volume heal data info, the following
> message shows up in the log, along with other stuff:
>
> [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal
Hi,
I'm currently facing the same behaviour.
Today, one of my users tried to delete a folder. It failed, saying the
directory wasn't empty. ls -lah showed an empty folder but on the bricks I
found some files. Renaming the directory caused it to reappear.
We're running gluster 3.12.7-1 on
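When a directory looks empty on the mount but still has files on the bricks, a common first diagnostic on this list is to inspect the DHT layout xattrs on each brick's copy of the directory. A sketch; the brick path below is hypothetical:

```shell
# run on each brick server; the path under the brick root is hypothetical
getfattr -m . -d -e hex /data/brick1/vol/problem-dir
# compare trusted.glusterfs.dht across the bricks, and look for stray
# trusted.glusterfs.dht.linkto entries on the files inside the directory
```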
Hello there,
I want to know if there is a feature that allows a user to lock a file while
their app is modifying it, so that other apps can use the file once it is
unlocked.
Thanks
Lei
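Gluster's FUSE mounts honour standard POSIX advisory locks, so ordinary flock(1)/flock(2) works; the applications just have to cooperate by taking the lock. A minimal local sketch (the lock path is illustrative; the same calls behave the same way on a Gluster mount):

```shell
# advisory locking demo with flock(1); the lock file path is illustrative
lock=$(mktemp)

exec 9>"$lock"        # open the lock file on fd 9
flock -x 9            # take an exclusive advisory lock on that description

# a second, non-blocking attempt (a fresh open of the same file) is refused
state_while_held=$(flock -n "$lock" -c true && echo unlocked || echo locked)
echo "$state_while_held"

exec 9>&-             # closing the descriptor releases the lock
state_after=$(flock -n "$lock" -c true && echo unlocked || echo locked)
echo "$state_after"
```

Note these are advisory locks: a process that never calls flock can still write to the file regardless.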
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
This sounds like it may be a different issue. Can you file a bug for this
([1]) and provide all the logs/information you have on this (dir name,
files on bricks, mount logs etc)?
Thanks,
Nithya
[1] https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
On 4 April 2018 at 19:03, Gudrun
Gluster 4.0!
At long last, Gluster 4.0 is released! Read more at:
https://www.gluster.org/announcing-gluster-4-0/
Other updates about Gluster 4.0:
https://www.gluster.org/more-about-gluster-d2/
https://www.gluster.org/gluster-4-0-kubernetes/
Want to give us feedback about 4.0? We’ve got our
Based on your message, it sounds like your total usable capacity
requirement is under 1TB. With a modern SSD, you'll get something like
40k theoretical IOPS at a 4k I/O size.
You don't mention a budget; what is it? You also mention "4MB
operations": where is that requirement coming from?
Thanks for your reply.
Yes, the VMs are very small and each provides only a single service. I would
prefer a total of 2TB, but 1TB to start is sufficient. Ideally I want a
scheme that is easy to expand by dropping an extra disk into each node; when
all slots are full, add another node.
our current setup
Hi,
Trying to make the most of a limited budget. I need fast I/O for
operations under 4MB, and high availability of VMs in an oVirt cluster.
I have 3 nodes running oVirt and want to rebuild them with hardware for
converged storage.
Should I use 2 960GB SSDs in RAID1 in each node, replica 3?
We currently have a 3 node gluster setup each has a 100TB brick (total
300TB, usable 100TB due to replica factor 3)
We would like to expand the existing volume by adding another 3 nodes, but
each will only have a 50TB brick. I think this is possible, but will it
affect gluster performance and if
Hi,
Yes, this is possible. Make sure you have cluster.weighted-rebalance enabled
for the volume, and run the rebalance with the start force option.
Which version of gluster are you running (we fixed a bug around this a
while ago)?
Regards,
Nithya
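For reference, the two steps above might look like this on the CLI (volume name and brick paths are illustrative; cluster.weighted-rebalance makes DHT place files in proportion to brick size, which matters when bricks are unequal):

```shell
# illustrative volume name and brick paths
gluster volume set bigvol cluster.weighted-rebalance on
# after adding the three new 50TB bricks as a new replica set:
gluster volume add-brick bigvol node4:/brick node5:/brick node6:/brick
gluster volume rebalance bigvol start force
```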
On 4 April 2018 at 11:36, Anh Vo
We are using 3.8.15, I believe (the one that comes with Ubuntu 16.04). Do
you know when this was fixed?
Thanks
On Tue, Apr 3, 2018 at 11:24 PM Nithya Balachandran
wrote:
> Hi,
>
> Yes this is possible. Make sure you have cluster.weighted-rebalance
> enabled for the volume and