Hi,
Parallel-readdir is an experimental feature in 3.10. Could you disable the
performance.parallel-readdir option and check whether the files become visible? Does an
unmount and remount help?
Also, if you want to use parallel-readdir in production, please use 3.11 or
later.
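For reference, disabling and re-checking the option is a single volume-set call
(a sketch; "myvol" is a placeholder for your volume name):

  # Turn the experimental translator off for the volume
  gluster volume set myvol performance.parallel-readdir off

  # Confirm the current value
  gluster volume get myvol performance.parallel-readdir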
Regards,
Poornima
On 11 October 2017 at 22:21, wrote:
> > corruption happens only in these cases:
> >
> > - volume with shard enabled
> > AND
> > - rebalance operation
> >
>
> I believe so
>
> > So, what if I have to replace a failed brick/disk? Will this trigger
> > a rebalance and then
Hi everyone,
Consider a situation where the network segment changes; in the simplest
case, a box (a brick host) is given a new, faster network interface. The
boxes then have two NICs, and the bricks are introduced to them via
gluster peer probe $_newIPs.
Ideally, from a developer: how does Gluster handle
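As a rough sketch of that introduction step (the hostnames/IPs below are made
up; my understanding is that probing an already-known peer by a second address
simply records it as an additional address for that peer):

  # Probe the peer over its new, faster interface
  gluster peer probe 10.0.1.11

  # Check which addresses glusterd now knows for each peer
  gluster pool list
  gluster peer status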
> corruption happens only in these cases:
>
> - volume with shard enabled
> AND
> - rebalance operation
>
I believe so
> So, what if I have to replace a failed brick/disk? Will this trigger
> a rebalance and then corruption?
>
> rebalance is only needed when you have to expand a volume, i.e.
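To illustrate the distinction (a sketch; volume name and brick paths are
placeholders): replacing a failed brick is a replace-brick operation, which on
a replicated volume is repaired by self-heal, whereas expanding a volume is
add-brick followed by an explicit rebalance:

  # Replace a failed brick; data is healed onto the new brick, no rebalance
  gluster volume replace-brick myvol server2:/bricks/old server2:/bricks/new commit force

  # Expand the volume, then redistribute data
  gluster volume add-brick myvol server3:/bricks/brick1
  gluster volume rebalance myvol start
  gluster volume rebalance myvol status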
2017-10-11 15:37 GMT+02:00 ML :
> After some extra reading about LVM snapshots & Gluster, I think I can
> conclude it may be a bad idea to use it on big storage bricks.
>
> I understood that the maximum LVM metadata size, used to store the snapshot
> data, is about 16 GB.
>
We had a quick meeting today, with 2 main topics.
We have a new community issue tracker [1], which will be used to track
community initiatives. Amye will be sharing more information about
this in another email.
To co-ordinate people travelling to the Gluster Community Summit
better, a
Just to clarify, as I'm planning to put Gluster in production (after
fixing some issues, but for this I need the community's help):
corruption happens only in these cases:
- volume with shard enabled
AND
- rebalance operation
In any other case, corruption should not happen (or at least is not
known to
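For anyone wanting to check whether a volume falls into that combination, a
quick sketch ("myvol" is a placeholder):

  # Is sharding enabled on this volume?
  gluster volume get myvol features.shard

  # Is a rebalance currently running or recently run?
  gluster volume rebalance myvol status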
LVM is also good if you want to add ssd cache. It is more flexible and
easier to manage and expand than bcache.
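For example, attaching an SSD-backed cache to an existing brick LV with
lvmcache looks roughly like this (a sketch; VG/LV names and the SSD device are
placeholders):

  # Create a cache pool on the SSD and attach it to the brick LV
  lvcreate --type cache-pool -L 100G -n brickcache myvg /dev/nvme0n1
  lvconvert --type cache --cachepool myvg/brickcache myvg/brick1

  # The cache can later be detached without touching the data LV
  lvconvert --splitcache myvg/brick1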
On 11 October 2017 at 04:00, Mohammed Rafi K C wrote:
>
> Volumes are aggregations of bricks, so I would consider bricks as a
> unique entity here rather than
Hi,
I have feedback only on point 4/5.
Despite using
http://docs.ansible.com/ansible/latest/gluster_volume_module.html for
Gluster management,
I find the operation of replacing a server with new hardware while keeping
the same IP address poorly documented.
Maybe I was just unlucky in my search
After some extra reading about LVM snapshots & Gluster, I think I can
conclude it may be a bad idea to use it on big storage bricks.
I understood that the LVM maximum metadata, used to store the snapshots
data, is about 16GB.
So if I have a brick with a volume of around 10 TB (for example),
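If the metadata limit is the worry, the thin pool's metadata LV can at least
be sized explicitly when the pool is created, up to that cap (a sketch; names
and sizes are placeholders):

  # Thin pool for the brick, with metadata sized near the ~16 GiB maximum
  lvcreate --type thin-pool -L 10T --poolmetadatasize 15G -n brickpool myvg

  # Thin LV for the brick, carved out of that pool
  lvcreate -n brick1 -V 10T --thinpool myvg/brickpool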
I'm testing iozone inside a VM booted from a gluster volume.
By looking at network traffic on the host (the one connected to the
gluster storage) I can
see that a simple
iozone -w -c -e -i 0 -+n -C -r 64k -s 1g -t 1 -F /tmp/gluster.ioz
generates about 1200 Mbit/s on a bonded dual-gigabit NIC
Hi,
As part of our initiative to improve Gluster usability, we would like
feedback on the current Gluster CLI. Gluster 4.0 upstream development is
currently in progress and it is an ideal time to consider CLI changes.
Answers to the following would be appreciated:
1. How often do you use the
Volumes are aggregations of bricks, so I would consider bricks as the
unique entity here rather than volumes. Taking the constraints from the
blog [1]:
* All bricks should be carved out of an independent thinly provisioned
logical volume (LV). In other words, no two bricks should share a common
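A minimal sketch of that layout for a single brick (all names and sizes are
placeholders), with one dedicated thin pool and thin LV per brick:

  # Independent thin pool and thin LV for this brick only
  lvcreate --type thin-pool -L 40T -n pool_brick1 vg_bricks
  lvcreate -n brick1 -V 40T --thinpool vg_bricks/pool_brick1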
Thanks Rafi, that's understood now :)
I'm considering deploying Gluster on 4 x 40 TB bricks. Do you think
it would be better to make one LVM partition for each volume I need, or to
make one big LVM partition and start multiple volumes on it?
We'll store mostly big files (videos) on this
On 10/11/2017 09:50 AM, ML wrote:
Hi everyone,
I've read in the Gluster & Red Hat documentation that it seems recommended to
use XFS over LVM before creating & using Gluster volumes.
Sources :
Just had an answer here, for those interested:
https://github.com/gluster/glusterdocs/issues/218
On 11/10/2017 at 08:50, ML wrote:
Hi everyone,
I've read in the Gluster & Red Hat documentation that it seems
recommended to use XFS over LVM before creating & using Gluster volumes.
On 10/11/2017 12:20 PM, ML wrote:
> Hi everyone,
>
> I've read in the Gluster & Red Hat documentation that it seems
> recommended to use XFS over LVM before creating & using Gluster volumes.
>
> Sources :
>
Hi everyone,
I've read in the Gluster & Red Hat documentation that it seems
recommended to use XFS over LVM before creating & using Gluster volumes.
Sources :
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Formatting_and_Mounting_Bricks.html
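For reference, the formatting step those documents describe boils down to
something like this (a sketch; device path and mount point are placeholders;
the 512-byte inode size is what the Red Hat guide recommends so Gluster's
extended attributes fit in the inode):

  # Format the brick LV with XFS and a 512-byte inode size
  mkfs.xfs -i size=512 /dev/vg_bricks/brick1

  # Mount it and use a subdirectory of the mount as the actual brick
  mkdir -p /bricks/brick1
  mount /dev/vg_bricks/brick1 /bricks/brick1
  mkdir -p /bricks/brick1/data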