Re: [Gluster-users] [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /

2018-04-04 Thread Raghavendra Gowdappa
On Thu, Apr 5, 2018 at 10:48 AM, Artem Russakovskii 
wrote:

> Hi,
>
> I noticed when I run gluster volume heal data info, the following message
> shows up in the log, along with other stuff:
>
> [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory
>> selfheal failed: Unable to form layout for directory /
>
>
> I'm seeing it on Gluster 4.0.1 and 3.13.2.
>

This msg is harmless. You can ignore it for now. There is a fix in the works at
https://review.gluster.org/19727
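
In the meantime, if you want to confirm that the warning only shows up when
heal info runs, a quick check like the one below can help. This is just a
sketch: it assumes the default log location and the volume name "data", and
the exact log file name may differ across versions/distributions.

  # look for recent occurrences of the message in the heal-info client log
  grep 'dht_selfheal_directory' /var/log/glusterfs/glfsheal-data.log | tail -5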


> Here's the full log after running heal info:
> https://gist.github.com/fa19201d064490ce34a512a8f5cb82cc.
>
> Any idea why this could be?
>
> Thanks.
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police , APK Mirror
> , Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii
>  | @ArtemR
> 
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] cluster.readdir-optimize and disappearing files/dirs bug

2018-04-04 Thread Artem Russakovskii
After updating the test server to 4.0.1, I can indeed confirm that so far
the disappearing directories bug is gone.
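
For anyone re-testing this on their own setup, the option in question can be
toggled per volume; a minimal example (the volume name "data" is just a
placeholder):

  gluster volume set data cluster.readdir-optimize on
  # and back to the default while comparing behaviour:
  gluster volume set data cluster.readdir-optimize off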

On Wed, Apr 4, 2018, 8:29 PM Raghavendra Gowdappa 
wrote:

> Can you check whether you are hitting
> https://bugzilla.redhat.com/show_bug.cgi?id=1512437? Note that the fix is
> not backported to the 3.13 branch, but is available on 4.0 through
> https://bugzilla.redhat.com/1512437.
>
> On Tue, Apr 3, 2018 at 11:13 PM, Artem Russakovskii 
> wrote:
>
>> Hi,
>>
>> As many of you know, gluster suffers from pretty bad performance issues
>> when there are lots of files. One way to at least attempt to improve
>> performance is setting cluster.readdir-optimize to on.
>>
>> However, based on my recent tests (using Gluster 3.13.2), as well as
>> tests of many others (like
>> http://lists.gluster.org/pipermail/gluster-devel/2016-November/051417.html),
>> there's a bug that frequently makes the listings of entire dirs disappear.
>> Though the data is still there, it's a very scary bug for production
>> systems which may think all data is gone.
>>
>> I know Gluster 4 was recently released, but it makes no mention
>> of cluster.readdir-optimize in the notes.
>>
>> What's the status of diagnosing this bug and fixing it? We're
>> experiencing major performance issues with gluster in production, and I'd
>> love to do everything possible to get them resolved.
>>
>> Thank you.
>>
>> Sincerely,
>> Artem
>>
>> --
>> Founder, Android Police , APK Mirror
>> , Illogical Robot LLC
>> beerpla.net | +ArtemRussakovskii
>>  | @ArtemR
>> 
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] [dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal failed: Unable to form layout for directory /

2018-04-04 Thread Artem Russakovskii
Hi,

I noticed when I run gluster volume heal data info, the following message
shows up in the log, along with other stuff:

[dht-selfheal.c:2328:dht_selfheal_directory] 0-data-dht: Directory selfheal
> failed: Unable to form layout for directory /


I'm seeing it on Gluster 4.0.1 and 3.13.2.

Here's the full log after running heal info:
https://gist.github.com/fa19201d064490ce34a512a8f5cb82cc.

Any idea why this could be?

Thanks.

Sincerely,
Artem

--
Founder, Android Police , APK Mirror
, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii
 | @ArtemR

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Enable sharding on active volume

2018-04-04 Thread Krutika Dhananjay
On Thu, Apr 5, 2018 at 7:33 AM, Ian Halliday  wrote:

> Hello,
>
> I wanted to post this as a question to the group before we go launch it in
> a test environment. Will Gluster handle enabling sharding on an existing
> distributed-replicated environment, and is it safe to do?
>
Yes, it's safe, but it would mean that only the files created after sharding
was enabled would be sharded, not the existing files.
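
For reference, sharding is controlled by a volume option; a minimal example of
turning it on is below (the 64MB block size is only an illustrative value, not
a recommendation from this thread):

  gluster volume set <VOLNAME> features.shard on
  gluster volume set <VOLNAME> features.shard-block-size 64MB
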
If you want to shard the existing files, there are a couple of things you
can do:

1. Move the existing file off your glusterfs volume to a local fs and then
move it back into the volume.
2. Copy the existing file into a temporary file on the same volume and then
rename the temporary file back to the original name.

You could try both on two test VMs and go with the faster of the two
approaches. Either way, you could do this one VM at a time.
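
As a rough sketch of the second approach for a single image file, run against
the fuse mount (the paths here are just placeholders, and the VM using the
disk should be powered off while its image is rewritten):

  # assumes features.shard is already enabled on the volume
  cd /mnt/glustervol/images              # hypothetical mount point
  cp --sparse=always vm01.img vm01.img.tmp
  mv vm01.img.tmp vm01.img               # the renamed copy is now sharded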

-Krutika

> The environment in question is a VM image storage cluster with some disk
> files starting to grow beyond the size of some of the smaller bricks.
>
> -- Ian
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] cluster.readdir-optimize and disappearing files/dirs bug

2018-04-04 Thread Raghavendra Gowdappa
Can you check whether you are hitting
https://bugzilla.redhat.com/show_bug.cgi?id=1512437? Note that the fix is
not backported to the 3.13 branch, but is available on 4.0 through
https://bugzilla.redhat.com/1512437.

On Tue, Apr 3, 2018 at 11:13 PM, Artem Russakovskii 
wrote:

> Hi,
>
> As many of you know, gluster suffers from pretty bad performance issues
> when there are lots of files. One way to at least attempt to improve
> performance is setting cluster.readdir-optimize to on.
>
> However, based on my recent tests (using Gluster 3.13.2), as well as tests
> of many others (like
> http://lists.gluster.org/pipermail/gluster-devel/2016-November/051417.html),
> there's a bug that
> frequently makes the listings of entire dirs disappear. Though the data is
> still there, it's a very scary bug for production systems which may think
> all data is gone.
>
> I know Gluster 4 was recently released, but it makes no mention
> of cluster.readdir-optimize in the notes.
>
> What's the status of diagnosing this bug and fixing it? We're experiencing
> major performance issues with gluster in production, and I'd love to do
> everything possible to get them resolved.
>
> Thank you.
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police , APK Mirror
> , Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii
>  | @ArtemR
> 
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Enable sharding on active volume

2018-04-04 Thread Ian Halliday
Hello,

I wanted to post this as a question to the group before we go launch it in a 
test environment. Will Gluster handle enabling sharding on an existing 
distributed-replicated environment, and is it safe to do?

The environment in question is a VM image storage cluster with some disk files 
starting to grow beyond the size of some of the smaller bricks.

-- Ian
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] JBOD / ZFS / Flash backed

2018-04-04 Thread Vincent Royer
Thanks for your reply,

Yes, the VMs are very small and provide only a single service. I would
prefer a total of 2TB, but 1TB to start is sufficient. Ideally I want a
scheme that is easy to expand by dropping an extra disk into each node; when
all slots are full, add another node.

Our current setup accesses a storage share via NFS; most read/write
operations under load are under 4MB. There isn't any long sequential I/O.

Currently we have 2 nodes; I am spec'ing the 3rd and adding the necessary
components to the existing ones. The budget is around $20k for the upgrade.


On Wed, Apr 4, 2018 at 12:49 PM, Alex Chekholko  wrote:

> Based on your message, it sounds like your total usable capacity
> requirement is around <1TB.  With a modern SSD, you'll get something like
> 40k theoretical IOPs for 4k I/O size.
>
> You don't mention budget.  What is your budget?  You mention "4MB
> operations", where is that requirement coming from?
>
> On Wed, Apr 4, 2018 at 12:41 PM, Vincent Royer 
> wrote:
>
>> Hi,
>>
>> Trying to make the most of a limited budget.  I need fast I/O for
>> operations under 4MB, and high availability of VMs in an Ovirt cluster.
>>
>> I have 3 nodes running Ovirt and want to rebuild them with hardware for
>> converging storage.
>>
>> Should I use 2 960GB SSDs in RAID1 in each node, replica 3?
>>
>> Or can I get away with 1 larger SSD per node, JBOD, replica 3?
>>
>> Is a flash-backed Raid required for JBOD, and should it be 1gb, 2, or 4gb
>> flash?
>>
>> Storage network will be 10gbe.
>>
>> Enterprise SSDs and Flash-backed raid are very expensive, so I want to
>> ensure the investment will provide the best value in terms of capacity,
>> performance, and availability.
>>
>> Thanks,
>>
>> Vincent
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] JBOD / ZFS / Flash backed

2018-04-04 Thread Alex Chekholko
Based on your message, it sounds like your total usable capacity
requirement is around <1TB.  With a modern SSD, you'll get something like
40k theoretical IOPs for 4k I/O size.

You don't mention budget.  What is your budget?  You mention "4MB
operations", where is that requirement coming from?

On Wed, Apr 4, 2018 at 12:41 PM, Vincent Royer 
wrote:

> Hi,
>
> Trying to make the most of a limited budget.  I need fast I/O for
> operations under 4MB, and high availability of VMs in an Ovirt cluster.
>
> I have 3 nodes running Ovirt and want to rebuild them with hardware for
> converging storage.
>
> Should I use 2 960GB SSDs in RAID1 in each node, replica 3?
>
> Or can I get away with 1 larger SSD per node, JBOD, replica 3?
>
> Is a flash-backed Raid required for JBOD, and should it be 1gb, 2, or 4gb
> flash?
>
> Storage network will be 10gbe.
>
> Enterprise SSDs and Flash-backed raid are very expensive, so I want to
> ensure the investment will provide the best value in terms of capacity,
> performance, and availability.
>
> Thanks,
>
> Vincent
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] JBOD / ZFS / Flash backed

2018-04-04 Thread Vincent Royer
Hi,

Trying to make the most of a limited budget.  I need fast I/O for
operations under 4MB, and high availability of VMs in an Ovirt cluster.

I have 3 nodes running Ovirt and want to rebuild them with hardware for
converging storage.

Should I use 2 960GB SSDs in RAID1 in each node, replica 3?

Or can I get away with 1 larger SSD per node, JBOD, replica 3?

Is a flash-backed Raid required for JBOD, and should it be 1gb, 2, or 4gb
flash?

Storage network will be 10gbe.

Enterprise SSDs and Flash-backed raid are very expensive, so I want to
ensure the investment will provide the best value in terms of capacity,
performance, and availability.

Thanks,

Vincent
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Invisible files and directories

2018-04-04 Thread Nithya Balachandran
This sounds like it may be a different issue. Can you file a bug for this
([1]) and provide all the logs/information you have on this (dir name,
files on bricks, mount logs etc)?

Thanks,
Nithya

[1] https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
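
When you file it, it also helps to capture the readdir-related options the
volume is currently running with; something like the line below (the volume
name is a placeholder) keeps that information with the bug report:

  gluster volume get $VOLUMENAME all | grep -E 'readdir|parallel'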

On 4 April 2018 at 19:03, Gudrun Mareike Amedick 
wrote:

> Hi,
>
> I'm currently facing the same behaviour.
>
> Today, one of my users tried to delete a folder. It failed, saying the
> directory wasn't empty. ls -lah showed an empty folder but on the bricks I
> found some files. Renaming the directory caused it to reappear.
>
> We're running gluster 3.12.7-1 on Debian 9 from the repositories provided
> by gluster.org, upgraded from 3.8 a while ago. The volume is mounted via
> the
> fuse client. Our settings are:
> > gluster volume info $VOLUMENAME
> >
> > Volume Name: $VOLUMENAME
> > Type: Distribute
> > Volume ID: 0d210c70-e44f-46f1-862c-ef260514c9f1
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 23
> > Transport-type: tcp
> > Bricks:
> > Brick1: gluster02:/srv/glusterfs/bricks/DATA201/data
> > Brick2: gluster02:/srv/glusterfs/bricks/DATA202/data
> > Brick3: gluster02:/srv/glusterfs/bricks/DATA203/data
> > Brick4: gluster02:/srv/glusterfs/bricks/DATA204/data
> > Brick5: gluster02:/srv/glusterfs/bricks/DATA205/data
> > Brick6: gluster02:/srv/glusterfs/bricks/DATA206/data
> > Brick7: gluster02:/srv/glusterfs/bricks/DATA207/data
> > Brick8: gluster02:/srv/glusterfs/bricks/DATA208/data
> > Brick9: gluster01:/srv/glusterfs/bricks/DATA110/data
> > Brick10: gluster01:/srv/glusterfs/bricks/DATA111/data
> > Brick11: gluster01:/srv/glusterfs/bricks/DATA112/data
> > Brick12: gluster01:/srv/glusterfs/bricks/DATA113/data
> > Brick13: gluster01:/srv/glusterfs/bricks/DATA114/data
> > Brick14: gluster02:/srv/glusterfs/bricks/DATA209/data
> > Brick15: gluster01:/srv/glusterfs/bricks/DATA101/data
> > Brick16: gluster01:/srv/glusterfs/bricks/DATA102/data
> > Brick17: gluster01:/srv/glusterfs/bricks/DATA103/data
> > Brick18: gluster01:/srv/glusterfs/bricks/DATA104/data
> > Brick19: gluster01:/srv/glusterfs/bricks/DATA105/data
> > Brick20: gluster01:/srv/glusterfs/bricks/DATA106/data
> > Brick21: gluster01:/srv/glusterfs/bricks/DATA107/data
> > Brick22: gluster01:/srv/glusterfs/bricks/DATA108/data
> > Brick23: gluster01:/srv/glusterfs/bricks/DATA109/data
> > Options Reconfigured:
> > nfs.addr-namelookup: off
> > transport.address-family: inet
> > nfs.disable: on
> > diagnostics.brick-log-level: ERROR
> > performance.readdir-ahead: on
> > auth.allow: $IP RANGE
> > features.quota: on
> > features.inode-quota: on
> > features.quota-deem-statfs: on
>
> We had a scheduled reboot yesterday.
>
> Kind regards
>
> Gudrun Amedick
>
>
> Am Mittwoch, den 04.04.2018, 01:33 -0400 schrieb Serg Gulko:
> > Right now the volume is running with
> >
> > readdir-optimize off
> > parallel-readdir off
> >
> > On Wed, Apr 4, 2018 at 1:29 AM, Nithya Balachandran 
> wrote:
> > > Hi Serg,
> > >
> > > Do you mean that turning off readdir-optimize did not work? Or did you
> > > mean turning off parallel-readdir did not work?
> > >
> > >
> > >
> > > On 4 April 2018 at 10:48, Serg Gulko  wrote:
> > > > Hello!
> > > >
> > > > Unfortunately no.
> > > > The directory is still not listed using ls -la, but I can cd into it.
> > > > I can rename it and it becomes available; when I rename it back to
> > > > the original name, it disappears again.
> > > >
> > > > On Wed, Apr 4, 2018 at 12:56 AM, Raghavendra Gowdappa <
> rgowd...@redhat.com> wrote:
> > > > >
> > > > >
> > > > > On Wed, Apr 4, 2018 at 4:13 AM, Serg Gulko 
> wrote:
> > > > > > Hello!
> > > > > >
> > > > > > We are running distributed volume that contains 7 bricks.
> > > > > > Volume is mounted using native fuse client.
> > > > > >
> > > > > > After an unexpected system reboot, some files disappeared from the
> > > > > > fuse mount point but are still available on the bricks.
> > > > > >
> > > > > > The way it disappeared confuses me a lot. I can't see certain
> > > > > > directories using ls -la but, at the same time, I can cd into the
> > > > > > missing directory. I can rename the invisible directory and it becomes
> > > > > > accessible. When I rename it back to the original name, it becomes
> > > > > > invisible.
> > > > > >
> > > > > > I also tried to mount the same volume into another location and
> > > > > > run ls hoping that selfheal will fix the problem. Unfortunately, it did
> > > > > > not.
> > > > > >
> > > > > > Is there a way to bring our storage to normal?
> > > > > >
> > > > > Can you check whether turning off option performance.readdir-ahead
> > > > > helps?
> > > > >
> > > > > >
> > > > > > glusterfs 3.8.8 built on Jan 11 2017 16:33:17
> > > > > >
> > > > > > Serg Gulko
> > > > > >
> > > > > > ___
> > > > > > Gluster-users mailing list
> > > > > > Gluster-users@gluster.org
> > > > > > http://lists.gluster.org/mailman/listinfo/gluster-users
> > > > > >

[Gluster-users] Gluster Monthly Newsletter, March 2018

2018-04-04 Thread Amye Scavarda
Gluster 4.0!
At long last, Gluster 4.0 is released! Read more at:
https://www.gluster.org/announcing-gluster-4-0/

Other updates about Gluster 4.0:
https://www.gluster.org/more-about-gluster-d2/
https://www.gluster.org/gluster-4-0-kubernetes/

Want to give us feedback about 4.0? We’ve got our retrospective open from
now until April 11.
https://www.gluster.org/4-0-retrospective/

Welcome piragua! https://github.com/gluster/piragua - Piragua emulates the
Heketi API to provide directories on a single volume as Kubernetes dynamic
volumes, and comes to Gluster from Comcast. Thank you!

Want swag for your meetup? https://www.gluster.org/events/ has a contact
form for you to let us know about your Gluster meetup! We’d love to hear
about Gluster presentations coming up, conference talks and gatherings. Let
us know!


Top Contributing Companies:  Red Hat, Comcast, DataLab, Gentoo Linux,
Facebook, Samsung
Top Contributors in March: N Balachandran, Atin Mukherjee, Vitalii
Koriakov, Amar Tumballi, Poornima G

Noteworthy threads:
[Gluster-users] Glustered 2018 schedule - a short report on how Glustered
went:
http://lists.gluster.org/pipermail/gluster-users/2018-March/033774.html
[Gluster-users] Request For Opinions: what to do about the synthetic
statfvs "tweak"?
http://lists.gluster.org/pipermail/gluster-users/2018-March/033775.html
[Gluster-devel] GlusterFS project contribution
http://lists.gluster.org/pipermail/gluster-devel/2018-March/054535.html
[Gluster-devel] Removal of use-compound-fops option in afr
http://lists.gluster.org/pipermail/gluster-devel/2018-March/054542.html
[Gluster-devel] Proposal to change the version numbers of Gluster project
http://lists.gluster.org/pipermail/gluster-devel/2018-March/054584.html
[Gluster-devel] Release 4.1: LTM release targeted for end of May
http://lists.gluster.org/pipermail/gluster-devel/2018-March/054589.html
[Gluster-devel] Branching out Gluster docs
http://lists.gluster.org/pipermail/gluster-devel/2018-March/054637.html

Upcoming CFPs:
Open Source Summit North America - Sunday, April 29, 2018
https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/program/cfp/


http://www.gluster.org/gluster-monthly-newsletter-march-2018/

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Any feature allow to add lock on a file between different apps?

2018-04-04 Thread Lei Gong
Hello there,

I want to know if there is a feature that allows a user to lock a file while
their app is modifying it, so that other apps can use the file only once it's
unlocked.
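
For context, here is an illustration of what I mean: if advisory locks are
supported on the fuse mount, two cooperating apps could do something like the
following (the path and commands are placeholders):

  # app A holds the lock while it rewrites the file
  flock /mnt/glustervol/shared.dat -c './update_shared_data'
  # app B blocks here until the lock is released, then reads
  flock /mnt/glustervol/shared.dat -c './read_shared_data'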

Thanks

Lei
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Invisible files and directories

2018-04-04 Thread Gudrun Mareike Amedick
Hi,

I'm currently facing the same behaviour. 

Today, one of my users tried to delete a folder. It failed, saying the 
directory wasn't empty. ls -lah showed an empty folder but on the bricks I
found some files. Renaming the directory caused it to reappear.

We're running gluster 3.12.7-1 on Debian 9 from the repositories provided by 
gluster.org, upgraded from 3.8 a while ago. The volume is mounted via the
fuse client. Our settings are:
> gluster volume info $VOLUMENAME
>  
> Volume Name: $VOLUMENAME
> Type: Distribute
> Volume ID: 0d210c70-e44f-46f1-862c-ef260514c9f1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 23
> Transport-type: tcp
> Bricks:
> Brick1: gluster02:/srv/glusterfs/bricks/DATA201/data
> Brick2: gluster02:/srv/glusterfs/bricks/DATA202/data
> Brick3: gluster02:/srv/glusterfs/bricks/DATA203/data
> Brick4: gluster02:/srv/glusterfs/bricks/DATA204/data
> Brick5: gluster02:/srv/glusterfs/bricks/DATA205/data
> Brick6: gluster02:/srv/glusterfs/bricks/DATA206/data
> Brick7: gluster02:/srv/glusterfs/bricks/DATA207/data
> Brick8: gluster02:/srv/glusterfs/bricks/DATA208/data
> Brick9: gluster01:/srv/glusterfs/bricks/DATA110/data
> Brick10: gluster01:/srv/glusterfs/bricks/DATA111/data
> Brick11: gluster01:/srv/glusterfs/bricks/DATA112/data
> Brick12: gluster01:/srv/glusterfs/bricks/DATA113/data
> Brick13: gluster01:/srv/glusterfs/bricks/DATA114/data
> Brick14: gluster02:/srv/glusterfs/bricks/DATA209/data
> Brick15: gluster01:/srv/glusterfs/bricks/DATA101/data
> Brick16: gluster01:/srv/glusterfs/bricks/DATA102/data
> Brick17: gluster01:/srv/glusterfs/bricks/DATA103/data
> Brick18: gluster01:/srv/glusterfs/bricks/DATA104/data
> Brick19: gluster01:/srv/glusterfs/bricks/DATA105/data
> Brick20: gluster01:/srv/glusterfs/bricks/DATA106/data
> Brick21: gluster01:/srv/glusterfs/bricks/DATA107/data
> Brick22: gluster01:/srv/glusterfs/bricks/DATA108/data
> Brick23: gluster01:/srv/glusterfs/bricks/DATA109/data
> Options Reconfigured:
> nfs.addr-namelookup: off
> transport.address-family: inet
> nfs.disable: on
> diagnostics.brick-log-level: ERROR
> performance.readdir-ahead: on
> auth.allow: $IP RANGE
> features.quota: on
> features.inode-quota: on
> features.quota-deem-statfs: on

We had a scheduled reboot yesterday.

Kind regards

Gudrun Amedick


Am Mittwoch, den 04.04.2018, 01:33 -0400 schrieb Serg Gulko:
> Right now the volume is running with
> 
> readdir-optimize off
> parallel-readdir off
> 
> On Wed, Apr 4, 2018 at 1:29 AM, Nithya Balachandran  
> wrote:
> > Hi Serg,
> > 
> > Do you mean that turning off readdir-optimize did not work? Or did you mean 
> > turning off parallel-readdir did not work?
> > 
> > 
> > 
> > On 4 April 2018 at 10:48, Serg Gulko  wrote:
> > > Hello! 
> > > 
> > > Unfortunately no. 
> > > The directory is still not listed using ls -la, but I can cd into it.
> > > I can rename it and it becomes available; when I rename it back to the
> > > original name, it disappears again.
> > > 
> > > On Wed, Apr 4, 2018 at 12:56 AM, Raghavendra Gowdappa 
> > >  wrote:
> > > > 
> > > > 
> > > > On Wed, Apr 4, 2018 at 4:13 AM, Serg Gulko  wrote:
> > > > > Hello! 
> > > > > 
> > > > > We are running distributed volume that contains 7 bricks. 
> > > > > Volume is mounted using native fuse client. 
> > > > > 
> > > > > After an unexpected system reboot, some files disappeared from the
> > > > > fuse mount point but are still available on the bricks.
> > > > >
> > > > > The way it disappeared confuses me a lot. I can't see certain
> > > > > directories using ls -la but, at the same time, I can cd into the
> > > > > missing directory. I can rename the invisible directory and it becomes
> > > > > accessible. When I rename it back to the original name, it becomes
> > > > > invisible.
> > > > > 
> > > > > I also tried to mount the same volume into another location and run 
> > > > > ls hoping that selfheal will fix the problem. Unfortunately, it did
> > > > > not. 
> > > > > 
> > > > > Is there a way to bring our storage to normal?
> > > > > 
> > > > Can you check whether turning off option performance.readdir-ahead 
> > > > helps?
> > > > 
> > > > > 
> > > > > glusterfs 3.8.8 built on Jan 11 2017 16:33:17
> > > > > 
> > > > > Serg Gulko 
> > > > > 
> > > > > ___
> > > > > Gluster-users mailing list
> > > > > Gluster-users@gluster.org
> > > > > http://lists.gluster.org/mailman/listinfo/gluster-users
> > > > > 
> > > > 
> > > 
> > > ___
> > > Gluster-users mailing list
> > > Gluster-users@gluster.org
> > > http://lists.gluster.org/mailman/listinfo/gluster-users
> > 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Expand distributed replicated volume with new set of smaller bricks

2018-04-04 Thread Anh Vo
We are using 3.8.15, I believe (the one that comes with Ubuntu 16.04). Do you
know when this was fixed?

Thanks

On Tue, Apr 3, 2018 at 11:24 PM Nithya Balachandran 
wrote:

> Hi,
>
> Yes this is possible. Make sure you have cluster.weighted-rebalance
> enabled for the volume and run rebalance with the start force option.
> Which version of gluster are you running (we fixed a bug around this a
> while ago)?
>
> Regards,
> Nithya
>
> On 4 April 2018 at 11:36, Anh Vo  wrote:
>
>> We currently have a 3-node gluster setup, each node with a 100TB brick
>> (total 300TB, usable 100TB due to replica factor 3).
>> We would like to expand the existing volume by adding another 3 nodes,
>> but each will only have a 50TB brick. I think this is possible, but will it
>> affect gluster performance and if so, by how much. Assuming we run a
>> rebalance with force option, will this distribute the existing data
>> proportionally? I.e., if the current 100TB volume has 60 TB, will it
>> distribute 20TB to the new set of servers?
>>
>> Thanks
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Expand distributed replicated volume with new set of smaller bricks

2018-04-04 Thread Nithya Balachandran
Hi,

Yes this is possible. Make sure you have cluster.weighted-rebalance enabled
for the volume and run rebalance with the start force option.
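
For the archive, a minimal sketch of those two steps once the new bricks have
been added (the volume name is a placeholder):

  gluster volume set <VOLNAME> cluster.weighted-rebalance on
  gluster volume rebalance <VOLNAME> start force
  gluster volume rebalance <VOLNAME> status    # watch the data being moved
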
Which version of gluster are you running (we fixed a bug around this a
while ago)?

Regards,
Nithya

On 4 April 2018 at 11:36, Anh Vo  wrote:

> We currently have a 3-node gluster setup, each node with a 100TB brick
> (total 300TB, usable 100TB due to replica factor 3).
> We would like to expand the existing volume by adding another 3 nodes, but
> each will only have a 50TB brick. I think this is possible, but will it
> affect gluster performance and if so, by how much. Assuming we run a
> rebalance with force option, will this distribute the existing data
> proportionally? I.e., if the current 100TB volume has 60 TB, will it
> distribute 20TB to the new set of servers?
>
> Thanks
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Expand distributed replicated volume with new set of smaller bricks

2018-04-04 Thread Anh Vo
We currently have a 3-node gluster setup, each node with a 100TB brick (total
300TB, usable 100TB due to replica factor 3).
We would like to expand the existing volume by adding another 3 nodes, but
each will only have a 50TB brick. I think this is possible, but will it
affect gluster performance and if so, by how much. Assuming we run a
rebalance with force option, will this distribute the existing data
proportionally? I.e., if the current 100TB volume has 60 TB, will it
distribute 20TB to the new set of servers?

Thanks
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users