Re: [Gluster-users] [Gluster-devel] Support for Scaling Tiered volume, i.e. Add/Remove brick and Rebalance

2016-06-21 Thread Hari Gowtham
- Original Message - > From: "Atin Mukherjee" > To: "Mohammed Rafi K C" , "Gluster Devel" > > Sent: Tuesday, June 21, 2016 12:34:12 PM > Subject: Re: [Gluster-devel] Support for Scaling Tiered volume, i.e. Add/Remove

Re: [Gluster-users] [Gluster-devel] New commands for supporting add/remove brick and rebalance on tiered volume

2016-10-04 Thread Hari Gowtham
a separate command instead of the args. If I have been missing any pros of having the args, let me know. - Original Message - > From: "Atin Mukherjee" <amukh...@redhat.com> > To: "Hari Gowtham" <hgowt...@redhat.com> > Cc: "gluster-deve

Re: [Gluster-users] [Gluster-devel] New commands for supporting add/remove brick and rebalance on tiered volume

2016-10-04 Thread Hari Gowtham
Yes, this sounds better than having two separate commands for each tier. If I don't get any better solution, I will go with this one. Thanks Atin. - Original Message - > From: "Atin Mukherjee" <amukh...@redhat.com> > To: "Hari Gowtham" <hgowt.

[Gluster-users] New commands for supporting add/remove brick and rebalance on tiered volume

2016-10-04 Thread Hari Gowtham
Hi, The current add and remove brick commands aren't sufficient to support add/remove brick on tiered volumes. So the commands need minor changes, like mentioning which tier the operation is performed on. So in order to specify the tier on which we are performing the changes, I thought of using
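For illustration only, a hypothetical sketch of the two approaches under discussion; neither tiered form below is an actual GlusterFS command, and the hot/cold keyword and its placement are assumptions made purely to contrast "extra argument" against "separate command":

   # existing non-tiered form (real syntax)
   gluster volume add-brick <volname> <new-brick>...
   # approach A (hypothetical): name the target tier as an extra argument
   gluster volume add-brick <volname> hot <new-brick>...
   # approach B (hypothetical): a dedicated tier sub-command per operation
   gluster volume tier <volname> add-brick hot <new-brick>...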

Re: [Gluster-users] [Gluster-devel] New commands for supporting add/remove brick and rebalance on tiered volume

2016-10-25 Thread Hari Gowtham
nd approach we can add arguments later if a specific function needs a specific argument. - Original Message - > From: "Dan Lambright" <dlamb...@redhat.com> > To: "Hari Gowtham" <hgowt...@redhat.com> > Cc: "Atin Mukherjee" <amukh...@redha

Re: [Gluster-users] [Gluster-devel] New commands for supporting add/remove brick and rebalance on tiered volume

2016-10-21 Thread Hari Gowtham
ret = do_cli_cmd_volume_add_coldbr_tier (state, word, words, wordcount-1); goto out; } it gets differentiated here and is sent to the respective function, and the parsing remains the same. Let me know which one is the better one to follow.

Re: [Gluster-users] Hot Tier

2017-07-31 Thread Hari Gowtham
For the tier daemon to migrate the files for read, a few performance translators have to be turned off. By default the performance.quick-read and performance.io-cache translators are turned on. You can turn them off so that the files will be migrated for read. On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham <hg
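A minimal sketch of the commands involved, assuming a volume named v1 (the name is only a placeholder):

   # check the current values
   gluster volume get v1 performance.quick-read
   gluster volume get v1 performance.io-cache
   # turn them off so reads reach the bricks and can trigger promotion
   gluster volume set v1 performance.quick-read off
   gluster volume set v1 performance.io-cache off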

Re: [Gluster-users] Hot Tier

2017-07-31 Thread Hari Gowtham
, and the output for: gluster v info gluster v get v1 performance.io-cache gluster v get v1 performance.quick-read Do send us this and then we will let you know what should be done, as reads should also cause promotion On Mon, Jul 31, 2017 at 2:21 PM, Hari Gowtham <hgowt...@redhat.com> wrote: > For

Re: [Gluster-users] Hot Tier

2017-08-02 Thread Hari Gowtham
: Tier migration ID : c4c33b04-2a1e-4e53-b1f5-a96ec6d9d851 Status : in progress No errors reported in 'voldata3-tier-.log' file. I'll keep monitoring it for a few days. I expect to see some 'cooled' data moving to the 'cold tier'. Thank you. On Tue, Aug 1, 2017 at 1:32 A
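To keep an eye on that migration, the tier status sub-command can be polled; a sketch, assuming the volume behind 'voldata3-tier-.log' is named voldata3 and that the tier sub-command is available in this release:

   gluster volume tier voldata3 status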

Re: [Gluster-users] Some bricks are offline after restart, how to bring them online gracefully?

2017-06-30 Thread Hari Gowtham
the affected server? I > don’t want to use “gluster volume stop/start” since affected bricks are > online on other server and there is no reason to completely turn it off. gluster volume start force will not bring down the bricks that are already up and running

Re: [Gluster-users] Some bricks are offline after restart, how to bring them online gracefully?

2017-06-30 Thread Hari Gowtham
brick is offline. >> >> Force start command? >> sudo gluster volume start MyVolume force >> >> That works! Thank you. >> >> If I have this issue too often then I can create simple script that greps >> all bricks on the local server and force start when it’
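A minimal sketch of such a script, relying on the point above that 'start force' leaves already-running bricks untouched (volume names are taken from 'gluster volume list'; adjust as needed):

   #!/bin/bash
   # Re-issue 'start force' for every volume; bricks that are already up
   # are left alone, only missing brick processes get spawned again.
   for VOL in $(gluster volume list); do
       gluster volume start "$VOL" force
   done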

Re: [Gluster-users] Hot Tier

2017-07-31 Thread Hari Gowtham
ing is moved to the hot tier >> bricks. >> >> Thank you.

Re: [Gluster-users] Hot Tier

2017-07-31 Thread Hari Gowtham
-count: 16 > performance.strict-o-direct: on > network.ping-timeout: 30 > network.remote-dio: disable > user.cifs: off > features.quota: on > features.inode-quota: on > features.quota-deem-statfs: on > > ~]# gluster v get home performance.io-cache > performance.io-c

Re: [Gluster-users] Upgrading from Gluster 3.8 to 3.12

2017-12-19 Thread Hari Gowtham
>> >> I have a cluster of 10 servers all running Fedora 24 along with >> >> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 >> >> with Gluster 3.12. I saw the documentation and did some testing but >> >> I would like to run my plan throu

Re: [Gluster-users] Fwd: Ignore failed connection messages during copying files with tiering

2017-11-09 Thread Hari Gowtham
. > > The GlusterFS version is 3.11.0. Does anyone know what's the problem? Is it > related to tiering? > > Thanks, > Paul

Re: [Gluster-users] Turn off replication

2018-04-27 Thread Hari Gowtham
k Subrahmanya <ksubr...@redhat.com> >>> wrote: >>> >>> Hi Jose, >>> >>> By switching to a pure distribute volume you will lose availability if >>> something goes bad. >>> >>> I am guessing you have a nX2 volume. >>> If you want to preserve one copy of the data in all the distr

Re: [Gluster-users] Turn off replication

2018-05-02 Thread Hari Gowtham
move-brick operation. > If you have any inconsistencies, heal them first using the "gluster volume > heal " command and wait till the > "gluster volume heal info" output becomes zero, before removing > the bricks, so that you will have the correct data. > If you do not want to preserve the data then you can direc
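A hedged sketch of that sequence for an imaginary 2x2 volume named myvol being reduced to plain distribute (brick paths and host names are placeholders):

   # heal first and wait until 'heal info' reports no pending entries
   gluster volume heal myvol
   gluster volume heal myvol info
   # then drop one brick from each replica pair in a single remove-brick
   gluster volume remove-brick myvol replica 1 \
       server2:/bricks/b1 server2:/bricks/b2 force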

Re: [Gluster-users] Blocking IO when hot tier promotion daemon runs

2018-01-19 Thread Hari Gowtham
62144 > cluster.watermark-hi: 95 > auto-delete: enable > > It will take some time to get the logs together, I need to strip out > potentially sensitive info, will update with them when I have them. > > Any theories as to why the promotions / demotions only take place on one box &g

Re: [Gluster-users] Blocking IO when hot tier promotion daemon runs

2018-01-09 Thread Hari Gowtham
2 > cluster.read-freq-threshold 5 > > # gluster volume get gv0 all | grep watermark > cluster.watermark-hi 92 > cluster.watermark-low 75 > > ___ > Gluster-users m
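A small sketch of inspecting and adjusting the tiering thresholds mentioned here (gv0 and the values are only examples; cluster.watermark-hi must stay above cluster.watermark-low):

   gluster volume get gv0 all | grep -E 'watermark|freq-threshold'
   gluster volume set gv0 cluster.watermark-hi 90
   gluster volume set gv0 cluster.watermark-low 70
   gluster volume set gv0 cluster.read-freq-threshold 5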

Re: [Gluster-users] Failed to get quota limits

2018-02-12 Thread Hari Gowtham
es of that volume. > > Thanks in advance. > > Regards, > M. -- Regards, Hari Gowtham.

Re: [Gluster-users] Failed to get quota limits

2018-02-13 Thread Hari Gowtham
Were you able to set new limits after seeing this error? On Tue, Feb 13, 2018 at 4:19 PM, Hari Gowtham <hgowt...@redhat.com> wrote: > Yes, I need the log files in that duration; the rotated log files after > hitting the issue aren't necessary, but the ones before hitting the is

Re: [Gluster-users] Failed to get quota limits

2018-02-13 Thread Hari Gowtham
> "quota myvolume list" returns simply nothing. > > In order to lookup the directories should I run a "stat" on them? and if yes > should I do that on a client through the fuse mount? > > > Original Message > On February 13, 2018 10:58 A

Re: [Gluster-users] Failed to get quota limits

2018-02-13 Thread Hari Gowtham
9] E > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get > quota limits for 16ac4cde-a5d4-451f-adcc-422a542fea24 > [2018-02-13 08:16:14.092980] I [input.c:31:cli_batch] 0-: Exiting with: 0 > > > *** /var/log/glusterfs/bricks/data-myvolume-brick.log *** &

Re: [Gluster-users] Tiering Volumns

2018-02-11 Thread Hari Gowtham
> performance.client-io-threads: off > [root@Glus1 ~]# > > Thank you all for your help! -- Regards, Hari Gowtham.

Re: [Gluster-users] Failed to get quota limits

2018-02-22 Thread Hari Gowtham
er. > > I will send you tomorrow all the other logfiles as requested. > > > Original Message > On February 13, 2018 12:20 PM, Hari Gowtham <hgowt...@redhat.com> wrote: > >>Were you able to set new limits after seeing this error? >> >> O

Re: [Gluster-users] gluster 3.12 memory leak

2018-08-03 Thread Hari Gowtham
? >>> This issue is hitting us hard with several production installations. >>> >>> Thanx, >>> Alex

Re: [Gluster-users] gluster 3.12 memory leak

2018-08-07 Thread Hari Gowtham
: > > Thank you Hari. > Hope we get a fix soon to put us out of our misery :) > > Alex > > On Fri, Aug 3, 2018 at 4:58 PM, Hari Gowtham wrote: >> >> Hi, >> >> It is a known issue. >> This bug will give more insight on the memory leak. >> https://b

Re: [Gluster-users] Gluster release 3.12.13 (Long Term Maintenance) Canceled for 10th of August, 2018

2018-08-14 Thread Hari Gowtham
> >>>> Regards, >>>> >>>> Jiffin -- Regards, Hari Gowtham.

Re: [Gluster-users] Gluster 3.10.5: used disk size reported by quota and du mismatch

2018-09-10 Thread Hari Gowtham
you again for the detailed explanation. > Regards, > Mauro > > On 10 Sep 2018, at 09:17, Hari Gowtham wrote: > > Hi Mauro, > > The problem might be somewhere else, so setting the xattr and > doing the lookup might not have fixed the issue.

Re: [Gluster-users] Gluster 3.10.5: used disk size reported by quota and du mismatch

2018-09-10 Thread Hari Gowtham
Hi Mauro, The problem might be somewhere else, so setting the xattr and doing the lookup might not have fixed the issue. To resolve this we need to read the log file reported by the fsck script. In this log file we need to look for the size reported by the xattr (the value "SIZE:" in the
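For context, the size quota accounts for is kept in an extended attribute on the brick-side directory; a hedged sketch of inspecting it directly on a brick (the path is a placeholder and the exact quota xattr name can differ between versions):

   getfattr -d -m . -e hex /bricks/brick1/somedir | grep quota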

Re: [Gluster-users] Gluster 3.10.5: used disk size reported by quota and du mismatch

2018-09-10 Thread Hari Gowtham
to run the script after all the files are deleted (or other major modifications are done), so that we can fix it once at the end. If the fix-issue argument of the script doesn't work on the directory/subdirectory where you find the mismatch, then you can send the whole file. I will check the log and let you know

Re: [Gluster-users] Gluster 3.10.5: used disk size reported by quota and du mismatch

2018-09-10 Thread Hari Gowtham
ipt everything went smoothly, but this time it seems > to be more difficult. > > Attached you can find the new log files. > > Thank you, > Mauro > > > On 10 Sep 2018, at 12:27, Hari Gowtham wrote: > > On Mon, Sep 10, 2018 at 3:13 PM Mauro Trid

Re: [Gluster-users] Gluster 3.10.5: used disk size reported by quota and du mismatch

2018-09-11 Thread Hari Gowtham
ou very much for your support. > I will do everything you suggested and I will contact you as soon as all the > steps are completed. > > Thank you, > Mauro > > On 10 Sep 2018, at 16:02, Hari Gowtham wrote: > > Hi Mauro, > > I went through

Re: [Gluster-users] Gluster 3.10.5: used disk size reported by quota and du mismatch

2018-07-10 Thread Hari Gowtham
put here the output. > > ./quota_fsck_new.py --full-logs --sub-dir /gluster/mnt{1..12} > > Thank you again for your support. > Regards, > Mauro > > On 10 Jul 2018, at 11:02, Hari Gowtham wrote: > > Hi, > > There is no explicit command t

Re: [Gluster-users] Gluster 3.10.5: used disk size reported by quota and du mismatch

2018-07-10 Thread Hari Gowtham
-limit exceeded? >>> ------- >>> /CSP/ans004 1.0TB 99%(1013.8GB) 3.9TB >>> 0Bytes Yes Yes >>> >>> [root@s01 ~]# du -hs /tier2/CSP/ans004/ >>> 295G /tier2/CSP/ans004/ -- Regards, Hari Gowtham.

Re: [Gluster-users] Gluster 3.10.5: used disk size reported by quota and du mismatch

2018-07-11 Thread Hari Gowtham
ter_lvg 9,0T 5,7T 3,4T 64% /gluster/mnt5 > /dev/mapper/gluster_vgi-gluster_lvi 9,0T 5,7T 3,4T 63% /gluster/mnt7 > /dev/mapper/gluster_vgk-gluster_lvk 9,0T 5,8T 3,3T 65% /gluster/mnt9 > > I will execute the following command and I will put here the output. > > ./quota_f

Re: [Gluster-users] Blocking IO when hot tier promotion daemon runs

2018-01-18 Thread Hari Gowtham
> 3186 >> Brick pod-sjc1-gluster2:/data/ >> brick2/gv0 49153 0 Y >> 4829 >> Brick pod-sjc1-gluster1:/data/ >> brick3/gv0 49154 0 Y >> 3194 >> Brick pod-sjc1-gluster

Re: [Gluster-users] tiering

2018-03-04 Thread Hari Gowtham
e tier labgreenbin detach force > > > > but what does that mean? Will the content of tier get lost? > > > > How to solve this situation? > > /Curt -- Regards, Hari Gowtham.

Re: [Gluster-users] Why files go to hot tier and cold tier at the same time

2018-03-05 Thread Hari Gowtham

Re: [Gluster-users] tiering

2018-03-05 Thread Hari Gowtham
luster volume start labgreenbin force" but that did not reinitialize > a new brick. > > /C > > On 2018-03-05, 07:31, "Hari Gowtham" <hgowt...@redhat.com> wrote: > > Hi Curt, > > gluster volume tier labgreenbin detach force will convert the v
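For reference, a sketch of the staged detach flow (in contrast to the 'detach force' quoted above): 'start' migrates data off the hot tier, and 'commit' removes it only once migration finishes.

   gluster volume tier labgreenbin detach start
   gluster volume tier labgreenbin detach status
   gluster volume tier labgreenbin detach commit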

Re: [Gluster-users] Failed to get quota limits

2018-02-27 Thread Hari Gowtham
> defined them. That did the trick! > > Now do you know if this bug is already corrected in a new release of > GlusterFS? If not, do you know when it will be fixed? > > Again many thanks for your help here! > > Best regards, > M. > > ‐‐‐ Original Message ‐‐‐ &

Re: [Gluster-users] On sharded tiered volume, only first shard of new file goes on hot tier.

2018-02-27 Thread Hari Gowtham
> cluster.tier-mode: cache > features.ctr-enabled: on > features.shard: on > features.shard-block-size: 64MB > server.allow-insecure: on > performance.quick-read: off > performance.stat-prefetch: off > nfs.disable: on > nfs.addr-namelookup

Re: [Gluster-users] Failed to get quota limits

2018-02-27 Thread Hari Gowtham
; > > ‐‐‐ Original Message ‐‐‐ > > On February 27, 2018 9:38 AM, Hari Gowtham <hgowt...@redhat.com> wrote: > >> >> >> Hi Mabi, >> >> The bug is fixed from 3.11 onward. For 3.10 it is yet to be backported and >> >> made available. >> >&

Re: [Gluster-users] directory quotas non existing directory failing

2018-09-13 Thread Hari Gowtham
; > Based on this I hoped it would be possible to create some sort of wildcard > quota like: > > gluster volume quota homes limit-usage /rhomes/* 20GB
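Quota limits apply to directories that already exist on the volume, so a hedged sketch of the usual per-directory flow would be (assuming the homes volume is FUSE-mounted at /mnt/homes; the path and size are placeholders):

   mkdir -p /mnt/homes/rhomes/user1
   gluster volume quota homes limit-usage /rhomes/user1 20GB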

Re: [Gluster-users] [External] Re: directory quotas non existing directory failing

2018-09-14 Thread Hari Gowtham
Davide Obbi wrote: > Here: > > https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Directory%20Quota/ > > Kr > Davide > > On Fri, Sep 14, 2018 at 7:12 AM Hari Gowtham wrote: > >> Hi, >> >> Can you point to the right place in doc where it

Re: [Gluster-users] Hot Tier exceeding watermark-hi

2018-09-30 Thread Hari Gowtham
ick2 >> Brick3: Glus1:/data/glusterfs/FFPrimary/brick1 >> Cold Tier: >> Cold Tier Type : Distributed-Replicate >> Number of Bricks: 2 x 3 = 6 >> Brick4: Glus1:/data/glusterfs/FFPrimary/brick5 >> Brick5: Glus2:/data/glusterfs/FFPrimary/brick6 >> Brick6: Glus3:/d

Re: [Gluster-users] Hot Tier exceeding watermark-hi

2018-09-29 Thread Hari Gowtham
0:45 > Glus3 02075 in progress > 5151:30:47 > Tiering Migration Functionality: FFPrimary: success > [root@Glus1 ~]# > > What can cause GlusterFS to stop demoting files and allow it to completely > fill the Hot Tier? > > Than