Yes, it is ext4. But what is the impact of this?
On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
> Yes
Yes
On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL wrote:
> You mean the fs where this brick has been created?
You mean the fs where this brick has been created?
On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" wrote:
> Is your backend filesystem ext4?
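For reference, one way to check which filesystem backs a brick, assuming the brick path quoted later in the thread is /opt/lvmdir/c2/brick:

# df -T /opt/lvmdir/c2/brick
# stat -f -c %T /opt/lvmdir/c2/brick

Either command prints the filesystem type of the device the brick directory lives on.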
Is your backend filesystem ext4?
On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL wrote:
> No, we are not using sharding.
No, we are not using sharding.
On Apr 12, 2017 7:29 PM, "Alessandro Briosi" wrote:
> You are pr
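A quick way to confirm that sharding is disabled, with VOLNAME standing in for the actual volume name (not given in the thread):

# gluster volume get VOLNAME features.shard

This prints the current value of the shard option; "off" means the volume is not sharded.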
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-04-12-e536bea0
I have done more investigation and found that the brick dir size is
equivalent to the gluster mount point, but .glusterfs shows a huge difference.
opt/lvmdir/c2/brick
# du -sch *
96K     RNC_Exceptions
36K     configuration
63M     java
176K
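Note that "du -sch *" does not match dotfiles, so .glusterfs is excluded from the listing above; also, most of .glusterfs consists of hard links to the same inodes as the user-visible files. A rough way to separate the two, assuming the brick path is /opt/lvmdir/c2/brick and GNU coreutils du/find:

# du -sh /opt/lvmdir/c2/brick
(counts every inode once, hard links included)
# find /opt/lvmdir/c2/brick/.glusterfs -type f -links 1 -print0 | du -ch --files0-from=- | tail -n1
(space used only by .glusterfs entries that are not hard links of user data)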
On 04/12/2017 01:57 PM, Mateusz Slupny wrote:
Hi,
I'm observing strange behavior when accessing a glusterfs 3.10.0 volume
through a FUSE mount: during self-healing, stat() on a file that I know
has non-zero size and is being appended to succeeds (return code 0) but
reports st_size as 0.
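A minimal way to watch for this from the mount side, with a placeholder mount point and file name:

# while true; do stat -c '%s %n' /mnt/glustervol/growing.log; sleep 1; done

If the client-reported size drops to 0 for a file that is known to be growing while a heal is in progress, that reproduces the observation.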
Next week I'm planning to find a minim