Re: [Gluster-users] Geo replication stuck (rsync: link_stat "(unreachable)")

2017-04-16 Thread Kotresh Hiremath Ravishankar
Answers inline.

Thanks and Regards,
Kotresh H R

- Original Message -
> From: "mabi" 
> To: "Kotresh Hiremath Ravishankar" 
> Cc: "Gluster Users" 
> Sent: Thursday, April 13, 2017 8:51:29 PM
> Subject: Re: [Gluster-users] Geo replication stuck (rsync: link_stat 
> "(unreachable)")
> 
> Hi Kotresh,
> 
> Thanks for your feedback.
> 
> So do you mean I can simply log in to the geo-replication slave node, mount
> the volume with FUSE, delete the problematic directory, and finally
> restart geo-replication?
> 
   Trying to delete the problematic directory on the slave through the mount
   might still fail with the same ENOTEMPTY error. Try that first; if it does
   not work, the directory needs to be deleted directly from the backend bricks
   on all the slave nodes.
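
   For example, a minimal cleanup sketch on the slave (the mount point and the
   directory path below are hypothetical placeholders; "myvol-private-geo" is
   the slave volume name seen in the logs later in this thread):

   # First try through a temporary FUSE mount of the slave volume:
   mount -t glusterfs slave-node:/myvol-private-geo /mnt/slave
   rm -rf "/mnt/slave/path/to/problem-directory"
   umount /mnt/slave

   # If that still fails with ENOTEMPTY, remove the directory directly
   # from the brick path on every slave node (brick path is hypothetical):
   rm -rf "/bricks/myvol-private-geo/brick/path/to/problem-directory"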

> I am planning to migrate to 3.8 as soon as I have a backup (geo-replication).
> Is this issue with DHT fixed in the latest 3.8.x release?
>
   Most of the issues are addressed.

> Regards,
> M.
> 
>  Original Message 
> Subject: Re: [Gluster-users] Geo replication stuck (rsync: link_stat
> "(unreachable)")
> Local Time: April 13, 2017 7:57 AM
> UTC Time: April 13, 2017 5:57 AM
> From: khire...@redhat.com
> To: mabi 
> Gluster Users 
> 
> Hi,
> 
> I think the directory Workhours_2017 has been deleted on the master, and
> the delete is failing on the slave because there might be stale linkto files
> on the backend. These issues are fixed in DHT in the latest versions.
> Upgrading to the latest version would solve these issues.
> 
> To work around the issue, you might need to clean up the problematic
> directory on the slave from the backend.
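
For reference, one hedged way to spot such stale DHT linkto files on a slave
brick (the brick path below is a placeholder; linkto files are zero-byte,
carry the sticky bit and the trusted.glusterfs.dht.linkto xattr):

find /bricks/myvol-private-geo/brick -type f -perm -1000 -size 0 \
  -exec getfattr -n trusted.glusterfs.dht.linkto -e text {} \; 2>/dev/null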
> 
> Thanks and Regards,
> Kotresh H R
> 
> - Original Message -
> > From: "mabi" 
> > To: "Kotresh Hiremath Ravishankar" 
> > Cc: "Gluster Users" 
> > Sent: Thursday, April 13, 2017 12:28:50 AM
> > Subject: Re: [Gluster-users] Geo replication stuck (rsync: link_stat
> > "(unreachable)")
> >
> > Hi Kotresh,
> >
> > Thanks for your hint. Adding the "--ignore-missing-args" option to rsync and
> > restarting geo-replication worked, but it only managed to sync approximately
> > 1/3 of the data before it put the geo-replication in status "Failed" this
> > time. Now I have a different type of error, as you can see below from the
> > log extract on my geo-replication slave node:
> >
> > [2017-04-12 18:01:55.268923] I [MSGID: 109066]
> > [dht-rename.c:1574:dht_rename]
> > 0-myvol-private-geo-dht: renaming
> > /.gfid/1678ff37-f708-4197-bed0-3ecd87ae1314/Workhours_2017
> > empty.xls.ocTransferId2118183895.part
> > (hash=myvol-private-geo-client-0/cache=myvol-private-geo-client-0) =>
> > /.gfid/1678ff37-f708-4197-bed0-3ecd87ae1314/Workhours_2017 empty.xls
> > (hash=myvol-private-geo-client-0/cache=myvol-private-geo-client-0)
> > [2017-04-12 18:01:55.269842] W [fuse-bridge.c:1787:fuse_rename_cbk]
> > 0-glusterfs-fuse: 4786:
> > /.gfid/1678ff37-f708-4197-bed0-3ecd87ae1314/Workhours_2017
> > empty.xls.ocTransferId2118183895.part ->
> > /.gfid/1678ff37-f708-4197-bed0-3ecd87ae1314/Workhours_2017 empty.xls => -1
> > (Directory not empty)
> > [2017-04-12 18:01:55.314062] I [fuse-bridge.c:5016:fuse_thread_proc]
> > 0-fuse:
> > unmounting /tmp/gsyncd-aux-mount-PNSR8s
> > [2017-04-12 18:01:55.314311] W [glusterfsd.c:1251:cleanup_and_exit]
> > (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064) [0x7f97d3129064]
> > -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f97d438a725]
> > -->/usr/sbin/glusterfs(cleanup_and_exit+0x57) [0x7f97d438a5a7] ) 0-:
> > received signum (15), shutting down
> > [2017-04-12 18:01:55.314335] I [fuse-bridge.c:5720:fini] 0-fuse: Unmounting
> > '/tmp/gsyncd-aux-mount-PNSR8s'.
> >
> > How can I now fix this issue and have geo-replication continue
> > synchronising again?
> >
> > Best regards,
> > M.
> >
> >  Original Message 
> > Subject: Re: [Gluster-users] Geo replication stuck (rsync: link_stat
> > "(unreachable)")
> > Local Time: April 11, 2017 9:18 AM
> > UTC Time: April 11, 2017 7:18 AM
> > From: khire...@redhat.com
> > To: mabi 
> > Gluster Users 
> >
> > Hi,
> >
> > Then please set the following rsync config and let us know if it helps.
> >
> > gluster vol geo-rep  :: config
> > rsync-options
> > "--ignore-missing-args"
> >
> > Thanks and Regards,
> > Kotresh H R
> >
> > - Original Message -
> > > From: "mabi" 
> > > To: "Kotresh Hiremath Ravishankar" 
> > > Cc: "Gluster Users" 
> > > Sent: Tuesday, April 11, 2017 2:15:54 AM
> > > Subject: Re: [Gluster-users] Geo replication stuck (rsync: link_stat
> > > "(unreachable)")
> > >
> > > Hi Kotresh,
> > >
> > > I am using the official Debian 8 (jessie) package which has rsync version
> > > 3.1.1.
> > >
> > > Regards,
> > > M.
> > >
> > >  

[Gluster-users] Release 3.10.2: Scheduled for the 30th of April

2017-04-16 Thread Raghavendra Talur
Hi,

It's time to prepare the 3.10.2 release, which is scheduled for the 30th of
the month, and hence falls on April 30th, 2017 this time around.

This mail is to call out the following,

1) Are there any pending *blocker* bugs that need to be tracked for
3.10.2? If so mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail

2) Pending reviews in the 3.10 dashboard will be part of the release,
*iff* they pass regressions and have the review votes, so use the
dashboard [2] to check on the status of your patches to 3.10 and get
these going

3) I have checked what went into 3.8 after the 3.10 release and whether
those fixes are included in the 3.10 branch; the status on this is *green*,
as all fixes ported to 3.8 have also been ported to 3.10

4) Empty release notes are posted here [3]; if there are any specific
call-outs for 3.10 beyond bugs, please update the review, or leave a
comment in the review, for me to pick up

Thanks,
Raghavendra Talur

[1] Release bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.10.2

[2] 3.10 review dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-10-dashboard

[3] Release notes WIP: https://review.gluster.org/#/c/17063/


Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-16 Thread ABHISHEK PALIWAL
There is no need, but it could happen accidentally, and I think it should be
protected against or simply not be permitted.



On Mon, Apr 17, 2017 at 8:36 AM, Atin Mukherjee  wrote:

>
>
> On Mon, 17 Apr 2017 at 08:23, ABHISHEK PALIWAL 
> wrote:
>
>> Hi All,
>>
>> Here we have below steps to reproduce the issue
>>
>> Reproduction steps:
>>
>>
>>
>> root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force
>> - create the gluster volume
>>
>> volume create: brick: success: please start the volume to access data
>>
>> root@128:~# gluster volume set brick nfs.disable true
>>
>> volume set: success
>>
>> root@128:~# gluster volume start brick
>>
>> volume start: brick: success
>>
>> root@128:~# gluster volume info
>>
>> Volume Name: brick
>>
>> Type: Distribute
>>
>> Volume ID: a59b479a-2b21-426d-962a-79d6d294fee3
>>
>> Status: Started
>>
>> Number of Bricks: 1
>>
>> Transport-type: tcp
>>
>> Bricks:
>>
>> Brick1: 128.224.95.140:/tmp/brick
>>
>> Options Reconfigured:
>>
>> nfs.disable: true
>>
>> performance.readdir-ahead: on
>>
>> root@128:~# gluster volume status
>>
>> Status of volume: brick
>>
>> Gluster process TCP Port RDMA Port Online Pid
>>
>> 
>> --
>>
>> Brick 128.224.95.140:/tmp/brick 49155 0 Y 768
>>
>>
>>
>> Task Status of Volume brick
>>
>> 
>> --
>>
>> There are no active volume tasks
>>
>>
>>
>> root@128:~# mount -t glusterfs 128.224.95.140:/brick gluster/
>>
>> root@128:~# cd gluster/
>>
>> root@128:~/gluster# du -sh
>>
>> 0 .
>>
>> root@128:~/gluster# mkdir -p test/
>>
>> root@128:~/gluster# cp ~/tmp.file gluster/
>>
>> root@128:~/gluster# cp tmp.file test
>>
>> root@128:~/gluster# cd /tmp/brick
>>
>> root@128:/tmp/brick# du -sh *
>>
>> 768K test
>>
>> 768K tmp.file
>>
>> root@128:/tmp/brick# rm -rf test - delete the test directory and
>> data in the server side, not reasonable
>>
>> root@128:/tmp/brick# ls
>>
>> tmp.file
>>
>> root@128:/tmp/brick# du -sh *
>>
>> 768K tmp.file
>>
>> *root@128:/tmp/brick# du -sh (brick dir)*
>>
>> *1.6M .*
>>
>> root@128:/tmp/brick# cd .glusterfs/
>>
>> root@128:/tmp/brick/.glusterfs# du -sh *
>>
>> 0 00
>>
>> 0 2a
>>
>> 0 bb
>>
>> 768K c8
>>
>> 0 c9
>>
>> 0 changelogs
>>
>> 768K d0
>>
>> 4.0K health_check
>>
>> 0 indices
>>
>> 0 landfill
>>
>> *root@128:/tmp/brick/.glusterfs# du -sh (.glusterfs dir)*
>>
>> *1.6M .*
>>
>> root@128:/tmp/brick# cd ~/gluster
>>
>> root@128:~/gluster# ls
>>
>> tmp.file
>>
>> *root@128:~/gluster# du -sh * (Mount dir)*
>>
>> *768K tmp.file*
>>
>>
>>
>> In the reproduction steps, we delete the test directory on the server side,
>> not on the client side. I think this delete operation is not reasonable.
>> Please ask the customer to check whether they performed this unreasonable
>> operation.
>>
>
> What's the need of deleting data from backend (i.e bricks) directly?
>
>
>> *It seems that while deleting the data directly from the BRICK, the metadata
>> is not deleted from the .glusterfs directory.*
>>
>>
>> *I don't know whether this is a bug or a limitation; please let us know
>> about this.*
>>
>>
>> Regards,
>>
>> Abhishek
>>
>>
>> On Thu, Apr 13, 2017 at 2:29 PM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>>> On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 yes it is ext4. but what is the impact of this.

>>>
>>> Did you have a lot of data before that you then deleted? If I remember
>>> correctly, ext4 doesn't decrease the size of a directory once it has
>>> expanded. So in ext4, if you create lots and lots of files inside a
>>> directory and then delete them all, the directory size increases at the
>>> time of creation but won't decrease after deletion. I don't have any system
>>> with ext4 at the moment to test it now. This is something we faced 5-6
>>> years back, but I'm not sure if it has been fixed in ext4 in the latest releases.
>>>
>>>

 On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <
 pkara...@redhat.com> wrote:

> Yes
>
> On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>> Means the fs where this brick has been created?
>> On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" <
>> pkara...@redhat.com> wrote:
>>
>>> Is your backend filesystem ext4?
>>>
>>> On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 No, we are not using sharding
 On Apr 12, 2017 7:29 PM, "Alessandro Briosi" 
 wrote:

> On 12/04/2017 14:16, ABHISHEK PALIWAL wrote:
>
> I have done more investigation and found out that the brick dir size is
> equivalent to the gluster mount point, but .glusterfs shows too much
> difference
>
>
> You are probably 

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-16 Thread Atin Mukherjee
On Mon, 17 Apr 2017 at 08:23, ABHISHEK PALIWAL 
wrote:

> Hi All,
>
> Here we have below steps to reproduce the issue
>
> Reproduction steps:
>
>
>
> root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force
> - create the gluster volume
>
> volume create: brick: success: please start the volume to access data
>
> root@128:~# gluster volume set brick nfs.disable true
>
> volume set: success
>
> root@128:~# gluster volume start brick
>
> volume start: brick: success
>
> root@128:~# gluster volume info
>
> Volume Name: brick
>
> Type: Distribute
>
> Volume ID: a59b479a-2b21-426d-962a-79d6d294fee3
>
> Status: Started
>
> Number of Bricks: 1
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: 128.224.95.140:/tmp/brick
>
> Options Reconfigured:
>
> nfs.disable: true
>
> performance.readdir-ahead: on
>
> root@128:~# gluster volume status
>
> Status of volume: brick
>
> Gluster process TCP Port RDMA Port Online Pid
>
> 
> --
>
> Brick 128.224.95.140:/tmp/brick 49155 0 Y 768
>
>
>
> Task Status of Volume brick
>
> 
> --
>
> There are no active volume tasks
>
>
>
> root@128:~# mount -t glusterfs 128.224.95.140:/brick gluster/
>
> root@128:~# cd gluster/
>
> root@128:~/gluster# du -sh
>
> 0 .
>
> root@128:~/gluster# mkdir -p test/
>
> root@128:~/gluster# cp ~/tmp.file gluster/
>
> root@128:~/gluster# cp tmp.file test
>
> root@128:~/gluster# cd /tmp/brick
>
> root@128:/tmp/brick# du -sh *
>
> 768K test
>
> 768K tmp.file
>
> root@128:/tmp/brick# rm -rf test - delete the test directory and
> data in the server side, not reasonable
>
> root@128:/tmp/brick# ls
>
> tmp.file
>
> root@128:/tmp/brick# du -sh *
>
> 768K tmp.file
>
> *root@128:/tmp/brick# du -sh (brick dir)*
>
> *1.6M .*
>
> root@128:/tmp/brick# cd .glusterfs/
>
> root@128:/tmp/brick/.glusterfs# du -sh *
>
> 0 00
>
> 0 2a
>
> 0 bb
>
> 768K c8
>
> 0 c9
>
> 0 changelogs
>
> 768K d0
>
> 4.0K health_check
>
> 0 indices
>
> 0 landfill
>
> *root@128:/tmp/brick/.glusterfs# du -sh (.glusterfs dir)*
>
> *1.6M .*
>
> root@128:/tmp/brick# cd ~/gluster
>
> root@128:~/gluster# ls
>
> tmp.file
>
> *root@128:~/gluster# du -sh * (Mount dir)*
>
> *768K tmp.file*
>
>
>
> In the reproduction steps, we delete the test directory on the server side,
> not on the client side. I think this delete operation is not reasonable.
> Please ask the customer to check whether they performed this unreasonable
> operation.
>

What's the need of deleting data from backend (i.e bricks) directly?


> *It seems that while deleting the data directly from the BRICK, the metadata
> is not deleted from the .glusterfs directory.*
>
>
> *I don't know whether this is a bug or a limitation; please let us know
> about this.*
>
>
> Regards,
>
> Abhishek
>
>
> On Thu, Apr 13, 2017 at 2:29 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> yes it is ext4. but what is the impact of this.
>>>
>>
>> Did you have a lot of data before that you then deleted? If I remember
>> correctly, ext4 doesn't decrease the size of a directory once it has
>> expanded. So in ext4, if you create lots and lots of files inside a
>> directory and then delete them all, the directory size increases at the
>> time of creation but won't decrease after deletion. I don't have any system
>> with ext4 at the moment to test it now. This is something we faced 5-6
>> years back, but I'm not sure if it has been fixed in ext4 in the latest releases.
>>
>>
>>>
>>> On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <
>>> pkara...@redhat.com> wrote:
>>>
 Yes

 On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL <
 abhishpali...@gmail.com> wrote:

> Means the fs where this brick has been created?
> On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" <
> pkara...@redhat.com> wrote:
>
>> Is your backend filesystem ext4?
>>
>> On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> No, we are not using sharding
>>> On Apr 12, 2017 7:29 PM, "Alessandro Briosi" 
>>> wrote:
>>>
 On 12/04/2017 14:16, ABHISHEK PALIWAL wrote:

 I have done more investigation and found out that the brick dir size is
 equivalent to the gluster mount point, but .glusterfs shows too much
 difference


 You are probably using sharding?


 Buon lavoro.
 *Alessandro Briosi*

 *METAL.it Nord S.r.l.*
 Via Maioliche 57/C - 38068 Rovereto (TN)
 Tel.+39.0464.430130 - Fax +39.0464.437393
 www.metalit.com



>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> 

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-16 Thread ABHISHEK PALIWAL
Hi All,

Here we have below steps to reproduce the issue

Reproduction steps:



root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force
- create the gluster volume

volume create: brick: success: please start the volume to access data

root@128:~# gluster volume set brick nfs.disable true

volume set: success

root@128:~# gluster volume start brick

volume start: brick: success

root@128:~# gluster volume info

Volume Name: brick

Type: Distribute

Volume ID: a59b479a-2b21-426d-962a-79d6d294fee3

Status: Started

Number of Bricks: 1

Transport-type: tcp

Bricks:

Brick1: 128.224.95.140:/tmp/brick

Options Reconfigured:

nfs.disable: true

performance.readdir-ahead: on

root@128:~# gluster volume status

Status of volume: brick

Gluster process TCP Port RDMA Port Online Pid


--

Brick 128.224.95.140:/tmp/brick 49155 0 Y 768



Task Status of Volume brick


--

There are no active volume tasks



root@128:~# mount -t glusterfs 128.224.95.140:/brick gluster/

root@128:~# cd gluster/

root@128:~/gluster# du -sh

0 .

root@128:~/gluster# mkdir -p test/

root@128:~/gluster# cp ~/tmp.file gluster/

root@128:~/gluster# cp tmp.file test

root@128:~/gluster# cd /tmp/brick

root@128:/tmp/brick# du -sh *

768K test

768K tmp.file

root@128:/tmp/brick# rm -rf test - delete the test directory and
data in the server side, not reasonable

root@128:/tmp/brick# ls

tmp.file

root@128:/tmp/brick# du -sh *

768K tmp.file

*root@128:/tmp/brick# du -sh (brick dir)*

*1.6M .*

root@128:/tmp/brick# cd .glusterfs/

root@128:/tmp/brick/.glusterfs# du -sh *

0 00

0 2a

0 bb

768K c8

0 c9

0 changelogs

768K d0

4.0K health_check

0 indices

0 landfill

*root@128:/tmp/brick/.glusterfs# du -sh (.glusterfs dir)*

*1.6M .*

root@128:/tmp/brick# cd ~/gluster

root@128:~/gluster# ls

tmp.file

*root@128:~/gluster# du -sh * (Mount dir)*

*768K tmp.file*



In the reproduction steps, we delete the test directory on the server side,
not on the client side. I think this delete operation is not reasonable.
Please ask the customer to check whether they performed this unreasonable
operation.


*It seems that while deleting the data directly from the BRICK, the metadata
is not deleted from the .glusterfs directory.*


*I don't know whether this is a bug or a limitation; please let us know about
this.*
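
For context, every regular file on a brick has a GFID-named hard link under
.glusterfs, so removing a file only from the brick directory leaves that hard
link (and its space) behind. A rough way to see this with the paths from the
steps above (the orphan check is only an illustration, not an official tool):

# Map a file on the brick to its GFID entry under .glusterfs
# (the first two hex bytes of the GFID pick the .glusterfs/xx/yy/ subdirectory):
getfattr -n trusted.gfid -e hex /tmp/brick/tmp.file

# After "rm -rf test" on the brick, the GFID hard links of the removed files
# are left behind with a link count of 1; list such orphans:
find /tmp/brick/.glusterfs -type f -links 1 \
  ! -path '*/indices/*' ! -name health_check -ls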


Regards,

Abhishek


On Thu, Apr 13, 2017 at 2:29 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>> yes it is ext4. but what is the impact of this.
>>
>
> Did you have a lot of data before that you then deleted? If I remember
> correctly, ext4 doesn't decrease the size of a directory once it has
> expanded. So in ext4, if you create lots and lots of files inside a directory
> and then delete them all, the directory size increases at the time of
> creation but won't decrease after deletion. I don't have any system with ext4
> at the moment to test it now. This is something we faced 5-6 years back, but
> I'm not sure if it has been fixed in ext4 in the latest releases.
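
As an aside, the behaviour described above is easy to check on any ext4 mount;
the path below is a hypothetical test location:

mkdir /mnt/ext4test/dirsize-test
ls -ld /mnt/ext4test/dirsize-test   # typically 4.0K while empty
for i in $(seq 1 100000); do touch "/mnt/ext4test/dirsize-test/f$i"; done
ls -ld /mnt/ext4test/dirsize-test   # the directory inode has grown
find /mnt/ext4test/dirsize-test -type f -delete
ls -ld /mnt/ext4test/dirsize-test   # on ext4 the size usually does not shrink back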
>
>
>>
>> On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>> Yes
>>>
>>> On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 Means the fs where this brick has been created?
 On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" 
 wrote:

> Is your backend filesystem ext4?
>
> On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>> No, we are not using sharding
>> On Apr 12, 2017 7:29 PM, "Alessandro Briosi"  wrote:
>>
>>> On 12/04/2017 14:16, ABHISHEK PALIWAL wrote:
>>>
>>> I have done more investigation and found out that the brick dir size is
>>> equivalent to the gluster mount point, but .glusterfs shows too much
>>> difference
>>>
>>>
>>> You are probably using sharding?
>>>
>>>
>>> Buon lavoro.
>>> *Alessandro Briosi*
>>>
>>> *METAL.it Nord S.r.l.*
>>> Via Maioliche 57/C - 38068 Rovereto (TN)
>>> Tel.+39.0464.430130 - Fax +39.0464.437393
>>> www.metalit.com
>>>
>>>
>>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
> Pranith
>

>>>
>>>
>>> --
>>> Pranith
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>
>
>
> --
> Pranith
>



-- 




Regards
Abhishek Paliwal

Re: [Gluster-users] Questions concerning tiering

2017-04-16 Thread Vijay Bellur
On Tue, Apr 11, 2017 at 4:45 AM, David Spisla 
wrote:

> Dear Gluster Community,
>
>
>
> at the moment I am playing around with the Gluster tiering feature. It seems
> that there are always exactly 2 tiers: hot and cold.
>
> Is there a way to have more than 2 tiers? I think it's not possible…
>

You are correct. Only one tier can be attached to a volume with the current
implementation.


>
>
> If I write some data, e.g. big video files, into which tier will it be
> written first (hot or cold tier)?
>
>
>

By default, all writes land on the hot tier initially.  Some more details
about tiering are available at [1].
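
For reference, the single hot tier is attached and detached with the tier CLI;
a hedged sketch with a hypothetical volume "myvol" and two SSD bricks (command
form as documented for 3.7 and later):

gluster volume tier myvol attach replica 2 \
  ssd-node1:/bricks/hot/brick ssd-node2:/bricks/hot/brick
gluster volume tier myvol status

# and later, to remove the hot tier:
gluster volume tier myvol detach start
gluster volume tier myvol detach commit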

Regards,
Vijay

[1]
https://docs.google.com/document/d/1cjFLzRQ4T1AomdDGk-yM7WkPNhAL345DwLJbK3ynk7I/edit#heading=h.3bfy3tyowln9

Re: [Gluster-users] Sudden performance drop in gluster

2017-04-16 Thread Vijay Bellur
On Fri, Apr 14, 2017 at 3:35 PM, Pat Haley  wrote:

>
> This seems to have cleared itself.  For future reference though, what
> kinds of things should I look at to diagnose an issue like this?
>


Turning on gluster volume profile [1] and sampling the output of profile
info at periodic intervals would help. In addition you could also strace
the glusterfsd process and/or use `perf record` to determine what the
process is doing.
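
For example, a sketch using the volume name and brick PIDs quoted below
(adjust to your setup):

gluster volume profile data-volume start
gluster volume profile data-volume info    # sample periodically and compare
gluster volume profile data-volume stop

# On the brick server, attach to a brick process
# (PIDs come from "gluster volume status", e.g. 5021 below):
strace -f -tt -p 5021 -o /tmp/glusterfsd-brick1.strace
perf record -g -p 5021 -- sleep 30 && perf report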

HTH,
Vijay

[1]
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/



> Thanks
>
>
>
> On 04/14/2017 01:16 PM, Pat Haley wrote:
>
>>
>> Hi,
>>
>> Today we suddenly experienced a performance drop in gluster:  e.g. doing
>> an "ls" of a directory with about 20 files takes about 5 minutes.  This is
>> way beyond (and seems separate from) some previous concerns we had.
>>
>> Our gluster filesystem is two bricks hosted on a single server. Logging
>> onto that server and doing "top" shows a load average of ~30.  In general,
>> no process is showing significant CPU usage except an occasional flash a
>> 3300% from glusterfsd.  The rest of our system is not doing any exceptional
>> data demands on the file system (i.e. we aren't suddenly running more jobs
>> than we were yesterday).
>>
>> Any thoughts on how we can proceed with debugging this will be greatly
>> appreciated.
>>
>> Some additional information:
>>
>> glusterfs 3.7.11 built on Apr 27 2016 14:09:22
>> CentOS release 6.8 (Final)
>>
>>
>> [root@mseas-data2 ~]# gluster volume status data-volume
>> Status of volume: data-volume
>> Gluster process TCP Port  RDMA Port Online
>> Pid
>> --
>>
>> Brick mseas-data2:/mnt/brick1   49154 0 Y   5021
>> Brick mseas-data2:/mnt/brick2   49155 0 Y   5026
>>
>> Task Status of Volume data-volume
>> --
>>
>> Task : Rebalance
>> ID   : 892d9e3a-b38c-4971-b96a-8e4a496685ba
>> Status   : completed
>>
>>
>> [root@mseas-data2 ~]# gluster volume info data-volume
>>
>> Volume Name: data-volume
>> Type: Distribute
>> Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
>> Status: Started
>> Number of Bricks: 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: mseas-data2:/mnt/brick1
>> Brick2: mseas-data2:/mnt/brick2
>> Options Reconfigured:
>> diagnostics.brick-sys-log-level: WARNING
>> performance.readdir-ahead: on
>> nfs.disable: on
>> nfs.export-volumes: off
>>
>>
>>
> --
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Pat Haley  Email:  pha...@mit.edu
> Center for Ocean Engineering   Phone:  (617) 253-6824
> Dept. of Mechanical EngineeringFax:(617) 253-8125
> MIT, Room 5-213http://web.mit.edu/phaley/www/
> 77 Massachusetts Avenue
> Cambridge, MA  02139-4301
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>