>> nfs.mount-rmtab testdir
>>>>
>>>>
>>>> Does this mean the option value is not set properly in the script? Need
>>>> your help in debugging this.
>>>>
>>>> @Nigel
>>>> I noticed that test is timing out.
>>>>
>>>> *20:28:39* ./tests/basic/mount-nfs-auth.t timed out after 200 seconds
>>>>
>>>> Could this be an infra issue where NFS was taking too much time to mount?
>>>>
>>>> [1] https://build.gluster.org/job/centos7-regression/316/console
>>>>
>>>> regards,
>>>> Raghavendra
>>>>
>>>
>>>
>>>
>>> --
>>> nigelb
>>>
>>
>>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
--
Raghavendra G
Create a client container as per the document above, and mount the
>>>>> above
>>>>> volume and create 1 file, 1 directory and a file within that directory.
>>>>>
>>>>> Now we start the upgrade process (as laid out for 3.13 here
>>>>> http://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_3.13/ ):
>>>>> - killall glusterfs glusterfsd glusterd
>>>>> - yum install
>>>>> http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
>>>>> - yum upgrade --enablerepo=centos-gluster40-test glusterfs-server
>>>>>
>>>>> < Go back to the client and edit the contents of one of the files and
>>>>> change the permissions of a directory, so that there are things to heal
>>>>> when we bring up the newly upgraded server>
>>>>>
>>>>> - gluster --version
>>>>> - glusterd
>>>>> - gluster v status
>>>>> - gluster v heal patchy
>>>>>
>>>>> The above starts failing as follows,
>>>>> [root@centos-glfs-server1 /]# gluster v heal patchy
>>>>> Launching heal operation to perform index self heal on volume patchy
>>>>> has
>>>>> been unsuccessful:
>>>>> Commit failed on centos-glfs-server2.glfstest20. Please check log file
>>>>> for details.
>>>>> Commit failed on centos-glfs-server3. Please check log file for
>>>>> details.
>>>>>
>>>>> From here, if further files or directories are created from the client,
>>>>> they just get added to the heal backlog, and heal does not catch up.
>>>>>
>>>>> As is obvious, I cannot proceed, as the upgrade procedure is broken. The
>>>>> issue itself may not be the self-heal daemon but something around
>>>>> connections; as the process fails here, I'm looking to you to unblock
>>>>> this as soon as possible, as we are already running a day's slip in the
>>>>> release.
>>>>>
>>>>> Thanks,
>>>>> Shyam
>>>>>
>>>>
>>>>
>>>
>> ___
>> maintainers mailing list
>> maintain...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/maintainers
>>
On Fri, Feb 23, 2018 at 6:33 AM, J. Bruce Fields
wrote:
> On Thu, Feb 22, 2018 at 01:17:58PM +0530, Raghavendra G wrote:
> > On Wed, Oct 11, 2017 at 7:32 PM, J. Bruce Fields
> > wrote:
> >
> > > On Wed, Oct 11, 2017 at 04:11:51PM +0530, Raghavendra G wrote:
On Thu, Feb 22, 2018 at 1:17 PM, Raghavendra G
wrote:
>
>
> On Wed, Oct 11, 2017 at 7:32 PM, J. Bruce Fields
> wrote:
>
>> On Wed, Oct 11, 2017 at 04:11:51PM +0530, Raghavendra G wrote:
>> > On Thu, Mar 31, 2016 at 1:22 AM, J. Bruce Fields
>> > wrote:
ier, live in Buenos Aires, Argentina, and work in the
> >>>> Network Operations Center of an Internet Service Provider as a Linux
> >>>> Sysadmin. I would like to contribute on the GlusterFS project if
> there is
> >>>> something where I can be useful.
> >>>>
> >>>> Thanks for your kind attention.
> >>>>
> >>>> Best Regards,
> >>>>
> >>>>
> >>>> Javier Romero
> >>>> E-mail: xavi...@gmail.com
> >>>> Skype: xavinux
> >>>>
> >>>>
> > --
> > Amar Tumballi (amarts)
On Wed, Oct 11, 2017 at 7:32 PM, J. Bruce Fields
wrote:
> On Wed, Oct 11, 2017 at 04:11:51PM +0530, Raghavendra G wrote:
> > On Thu, Mar 31, 2016 at 1:22 AM, J. Bruce Fields
> > wrote:
> >
> > > On Mon, Mar 28, 2016 at 04:21:00PM -0400, Vijay Bellur wr
> > From: "Pranith Kumar Karampuri"
> > To: "Raghavendra G"
> > Cc: "Gluster Devel"
> > Sent: Friday, February 9, 2018 2:30:59 PM
> > Subject: Re: [Gluster-devel] Glusterfs and Structured data
> >
> >
> >
> > On Thu,
>> regards,
>> Raghavendra
//fosdem.org/2018/schedule/track/software_defined_storage/
e it in this list, shout out!
> >
> > >
> > > Only exception could be: https://review.gluster.org/#/c/19223/
> > >
> > > Thanks,
> > > Shyam
>>>> [1] github issues marked for 4.0:
>>>> https://github.com/gluster/glusterfs/milestone/3
>>>>
>>>> [2] Review focus for features planned to land in 4.0:
>>>> https://review.gluster.org/#/q/owner:srangana%2540redhat.com+is:starred
>>>>
>>> [3] Releas
On Mon, Jan 22, 2018 at 11:32 AM, Kaushal M wrote:
> Did you run autogen.sh after installing libxml2-devel?
>
I hadn't. I did it now and configure succeeds. Thanks Kaushal.
> On Mon, Jan 22, 2018 at 11:10 AM, Raghavendra G
> wrote:
> > All,
> >
> > # ./
> --
> Pranith
>
issue is very similar to one filed in:
https://bugzilla.redhat.com/show_bug.cgi?id=64134
Has anyone encountered this? How did you work around it?
regards,
--
Raghavendra G
ments here?
>>
>
> When stat is "zero filled", the understanding is that the higher-layer
> protocol doesn't send the stat value to the kernel and a separate lookup is
> sent by the kernel to get the latest stat value. In which protocol are you
> seeing this issue? Fuse/NFS/SMB?
>
entify
this.
On Tue, Sep 12, 2017 at 11:31 AM, Raghavendra G
wrote:
> Update. Two more days to go for the deadline. So far, no open issues have
> been identified against this patch.
>
> On Fri, Sep 8, 2017 at 6:54 AM, Raghavendra Gowdappa
> wrote:
>
>>
>>
>>
ob/centos6-regression/7281/console
>>> [2] review.gluster.org/18681
>>>
>>> regards,
>>> Raghavendra
>>>
+Brian Foster
On Wed, Oct 11, 2017 at 4:11 PM, Raghavendra G
wrote:
> We ran into a regression [2][3]. Hence reviving this thread.
>
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1500269
> [3] https://review.gluster.org/18463
>
> On Thu, Mar 31, 2016 at 1:22 AM, J. Bru
e return, but if we return
> ESTALE at least the problem should be more obvious to the
> person debugging.
> - for ESTALE-aware applications, the ESTALE/ENOENT distinction
> is useful.
>
Another place to not convert is for those cases where the kernel retries the
operation on se
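A minimal sketch of what an ESTALE-aware application gains from that distinction: on ESTALE it can redo the path lookup and retry, while ENOENT is final. This is a pure simulation with invented helper names, not GlusterFS or kernel code:

```python
import errno

def estale_aware_call(op, resolve, attempts=2):
    """Run op(handle); on ESTALE, redo the path lookup once and retry.

    ENOENT is deliberately not retried: the object really is gone, so a
    fresh lookup cannot help. Collapsing ESTALE into ENOENT would hide
    the retry opportunity from applications like this one.
    """
    handle = resolve()
    for attempt in range(attempts):
        try:
            return op(handle)
        except OSError as e:
            if e.errno == errno.ESTALE and attempt + 1 < attempts:
                handle = resolve()  # replace the stale cached handle
            else:
                raise

# Simulated server: handle 1 has gone stale; a fresh lookup returns handle 2.
def fake_op(handle):
    if handle == 1:
        raise OSError(errno.ESTALE, "stale file handle")
    return "data"

handles = iter([1, 2])
result = estale_aware_call(fake_op, lambda: next(handles))
```

An application that only ever sees ENOENT has no way to distinguish "gone" from "my cached handle is outdated", which is the point being argued above.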
It shows the correct value for directories on which the
> xattr fops were executed successfully.
>
> Note: The patch will resolve xattr healing problem only for fuse mount
> not for nfs mount.
>
> BUG: 1371806
> Signed-off-by: Mohit Agrawal
>
>
"Raghavendra Gowdappa"
> > Cc: "Raghavendra G" , "Nithya Balachandran" <
> nbala...@redhat.com>, anoo...@redhat.com,
> > "Gluster Devel" , "Raghavendra Bhat" <
> raghaven...@redhat.com>
> > Sent: Thursday, September 7, 2017
vior of other filesystems would be a concern.
I've not really thought through this suggestion of tuning /proc/sys/vm
tunables, and I am not an expert on what tunables are at our disposal. I
just wanted to bring this idea to the notice of a wider audience.
> Csaba
>
> On Wed, Sep 6, 2
A gentle reminder. We are midway through the proposed timeline.
On Wed, Aug 30, 2017 at 10:28 AM, Raghavendra Gowdappa
wrote:
>
>
> - Original Message -
> > From: "Raghavendra Gowdappa"
> > To: "Raghavendra G"
> > Cc: "Nithya
On Wed, Sep 6, 2017 at 10:27 AM, Raghavendra G
wrote:
> Also we've an article for sysadmins which has a section:
>
>
>
> With GlusterFS, many users with a lot of storage and many small files
> easily end up using a lot of RAM on the server side
>
>
I think this ar
inodes
> > > >>>>> >
> > > >>>>> > Hi,
> > > >>>>> >
> > > >>>>> > One of the reasons for the memory consumption in gluster fuse
> > > >>>>> > mounts
> > > >>>>> is the
> > > >>>>> > number of inodes in the table which are never kicked out.
> > > >>>>> >
> > > >>>>> > Is there any way to default to an entry-timeout and
> > > >>>>> attribute-timeout value
> > > >>>>> > while mounting Gluster using Fuse? Say 60s each so those
> entries
> > > >>>>> will be
> > > >>>>> > purged periodically?
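For reference, such timeouts can already be passed per mount as FUSE mount options; a sketch only, with the server name, volume name, and the 60s values purely illustrative:

```shell
mount -t glusterfs -o entry-timeout=60,attribute-timeout=60 server:/volname /mnt/gluster
```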
> > > >>>>>
> > > >>>>> Once the entry timeouts, inodes won't be purged. Kernel sends a
> > > >>>>> lookup
> > > >>>>> to revalidate the mapping of path to inode. AFAIK, reverse
> > > >>>>> invalidation
> > > >>>>> (see inode_invalidate) is the only way to make kernel forget
> > > >>>>> inodes/attributes.
> > > >>>>>
> > > >>>>> Is that something that can be done from the Fuse mount ? Or is
> this
> > > >>>> something that needs to be added to Fuse?
> > > >>>>
> > > >>>>> >
> > > >>>>> > Regards,
> > > >>>>> > Nithya
> > > >>>>> >
> > > >>>>>
> > > >>>>
> > > >>>>
> > > >>>
> > > >>
> > > >
> > >
> >
; Soumya
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1463191
>
a reasonable
time.
regards,
Raghavendra
> On 24 August 2017 at 10:13, Raghavendra G
> wrote:
>
>> Note that we need to consider xlators on brick stack too. I've added
>> maintainers/peers of xlators on brick stack. Please explicitly ack/nack
>> whether this patch
n dht
> > > * Whether EC works fine if a non-lookup fop (like open(dir), stat,
> chmod
> > > etc) hits EC without a single lookup performed on file/inode
> > >
> > > Can you please comment on the patch? I'll take care of dht part.
> >
ifc semantic or specific behavior --, then
> I don't see much point in force-fitting this kind of tuning into the
> fadvise
> syscall. We can just as well operate then via xattrs.
>
> Csaba
narios are explicitly handled by setting uid/gid to 0 while doing
these operations (like linkto file creation etc). Even if we run into bugs
after removing this, explicit setting of credentials should be preferred.
>
>> Regards,
>> Amar
>>
>> Thanks,
>> Raghavendra Talur
On Tue, Jul 25, 2017 at 10:39 AM, Amar Tumballi wrote:
>
>
> On Tue, Jul 25, 2017 at 9:33 AM, Raghavendra Gowdappa > wrote:
>
>>
>>
>> - Original Message -
>> > From: "Pranith Kumar Karampuri"
>> > To: "Raghavendra G"
+Milind
On Mon, Jul 24, 2017 at 5:11 PM, Raghavendra G
wrote:
>
>
> On Fri, Jul 21, 2017 at 6:39 PM, Vijay Bellur wrote:
>
>>
>> On Fri, Jul 21, 2017 at 3:26 AM, Raghavendra Gowdappa <
>> rgowd...@redhat.com> wrote:
>>
>>> Hi all,
>>>
whose size is
greater than the size eligible for caching by quick-read. IOW, read-ahead gets
disabled if the file size is less than 64KB. Thanks for the suggestion.
>
> Thanks,
> Vijay
>
I'll update the thread as and when I can think of a valid suspect.
[1] https://review.gluster.org/17391
>
> Xavi
>
>
>
>> -Krutika
+gluster-users
On Mon, May 29, 2017 at 8:46 AM, Raghavendra G
wrote:
> Replying to all queries here:
>
> * Is it a bug or performance enhancement?
> It's a performance enhancement. No functionality is broken if this patch
> is not taken in.
>
> * Are there performance nu
>> only be supported until 3.12 (or 4.0) gets released, which is approx.
>> 3
>> months from now according to our schedule.
>>
>> Niels
>>
poller thread reads a ping
packet, invokes its actor, and hands the response msg to the transport queue.
Change-Id: I526268c10bdd5ef93f322a4f95385137550a6a49
<https://review.gluster.org/#/q/I526268c10bdd5ef93f322a4f95385137550a6a49>
Signed-off-by: Raghavendra G
BUG: 1421938 <https://bugzi
iew.gluster.org/#/c/15036/
>
> regards,
> Raghavendra
On Thu, May 11, 2017 at 3:58 AM, Vijay Bellur wrote:
>
>
> On Wed, May 10, 2017 at 9:01 AM, Jeff Darcy wrote:
>
>>
>>
>>
>> On Wed, May 10, 2017, at 06:30 AM, Raghavendra G wrote:
>>
>> marking it bad won't help as even bad tests are run by
17 at 3:49 PM, Raghavendra G
wrote:
> The same test is causing failures for [1] too. On checking it failed on
> master too. I've submitted a patch [2] to mark it as failure
>
> [1] https://review.gluster.org/#/c/15036
> [2] https://review.gluster.org/#/c/17234/
>
> On
On Fri, Apr 21, 2017 at 11:43 AM, Raghavendra G
wrote:
> Summing up various discussions I had on this,
>
> 1. Current ping frame work should measure just the responsiveness of
> network and rpc layer. This means poller threads shouldn't be winding the
> individual fops at
th K-9 Mail. Please excuse my brevity.
>
level options as
>> documented in tcp(7).
>>
>> Please share your thoughts about the risks or effectiveness of the
>> decoupling.
>>
traffic ON the WIRE and through the tcp/ip stack). While I
don't have strong objections to it, I feel that it's a partial solution and
might be inconsequential (just a hunch, no data). However, I can accept
the patch if we feel it helps.
> Regards,
> Vijay
>
On Thu, Jan 19, 2017 at 3:59 PM, Raghavendra G
wrote:
> The more relevant question would be with TCP_KEEPALIVE and
> TCP_USER_TIMEOUT on sockets, do we really need ping-pong framework in
> Clients? We might need that in transport/rdma setups, but my question is
> concentrating on tr
, Raghavendra G
wrote:
>
>
> On Thu, Jan 19, 2017 at 1:50 PM, Mohammed Rafi K C
> wrote:
>
>> Hi,
>>
>> The patch for priority based ping packets [1] are ready to review. As
>> Shyam mentioned in the comment on patch set 12, it doesn't solve the
>> pro
tream is closed. If you
need to be sure that the data is physically stored use fsync(2). (It will
depend on the disk hardware at this point.)
If not, do you have any comments on this inconvenience?
>
>
> Best Regards,
> George
Regards
>
> Rafi KC
>
On Thu, Jan 12, 2017 at 4:44 PM, Raghavendra G
wrote:
>
>
> On Thu, Jan 12, 2017 at 4:35 PM, Raghavendra G
> wrote:
>
>>
>>
>> On Thu, Jan 12, 2017 at 9:20 AM, B.K.Raghuram wrote:
>>
>>> cc'ing devel as well for some developer insight..
On Thu, Jan 12, 2017 at 4:35 PM, Raghavendra G
wrote:
>
>
> On Thu, Jan 12, 2017 at 9:20 AM, B.K.Raghuram wrote:
>
>> cc'ing devel as well for some developer insight..
>>
>> -- Forwarded message --
>> To: gluster-users
>>
>
assured that you are
> building a nice piece of software, at least IMHO.
>
> Keep-up the good work and Merry Christmas.
>
> Ivan Rossi
boost readdir performance on a directory containing subdirectories. For
files it has no effect.
On a similar note, I think we can also skip linkto files in readdirp (on
brick) as dht_readdirp picks the dentry from the subvol containing the data file.
> Regards,
> Vijay
. I am not sure though how often users would hit
> this - updating from 3.6 to the latest versions. From 3.7 to latest, it's fine,
> this has nothing to do with this patch.
>
> On Nov 10, 2016 8:03 PM, "Pranith Kumar Karampuri"
> wrote:
>
>>
>>
>> On Thu, Nov
On Thu, Nov 10, 2016 at 8:03 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
>
> On Thu, Nov 10, 2016 at 7:43 PM, Raghavendra G
> wrote:
>
>>
>>
>> On Thu, Nov 10, 2016 at 2:14 PM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wr
d any other
>>>>> crucial patch please let us know.
>>>>>
>>>>> Will make the release as soon as this patch is merged.
>>>>>
>>>>> --
>>>>> Pranith & Aravinda
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> ~ Atin (atinm)
like a huge improvement.
>
> From 13 minutes to 23 seconds, not from 13 seconds :)
>
Yeah. That was one confused reply :). Sorry about that.
contain all the directories and we might miss those
>> directories in the listing.
>>
>> Your feedback is important for us and will help us to prioritize and
>> improve things.
>>
>> [1] https://www.gluster.org/pipermail/gluster-users/2016-October
>> /028703.html
>>
>> regards,
on this.
> >
> > regards,
> > Raghavendra
> >
> >>
> >>
> >>
> >>
> >>
> >>
> > ___
> > Gluster-users mailing list
> > gluster-us...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
On Thu, Nov 3, 2016 at 10:27 AM, Raghavendra G
wrote:
>
>
> On Thu, Nov 3, 2016 at 7:16 AM, Lian, George (Nokia - CN/Hangzhou) <
> george.l...@nokia.com> wrote:
>
>> >Yes. I was assuming that the previous results were tested with:
>> >1. write-behind on wit
However, it's
a POSIX requirement.
>
> 发自网易邮箱大师
> On 11/03/2016 12:52, Raghavendra G wrote:
>
>
>
> On Wed, Nov 2, 2016 at 9:38 AM, Raghavendra Gowdappa
> wrote:
>
>>
>>
>> - Original Message -
>> > From: "Keiviw"
syslog >/dev/null
> tail: syslog: file truncated
> tail: syslog: file truncated
>
> FYI,
>
Thanks George. I'll take a look.
>
> Best Regards,
> George
>
>
> -Original Message-
> From: Raghavendra Gowdappa [mailto:rgowd...@redhat.com]
> Sen
port this can be a good
> feature.
>
> +gluster-users to get an opinion on this.
>
> regards,
> Raghavendra
>
> >
> >
> >
> >
> >
> >
{
>> > > > error (0, 0, _("%s: file truncated"), quotef (name));
>> > > > /* Assume the file was truncated to 0,
>> > > > and therefore output all "new" data. */
>> > > > xlseek (fd, 0, SEEK_SET, name);
>> > > > f[i].size = 0;
>> > > > }
>> > > > When stats.st_size < f[i].size, it means the size reported by fstat
>> > > > is less than what "tail" had already read, which leads to "file
>> > > > truncated". We also used the "strace" tool to trace the tail
>> > > > application; the related strace log is below:
>> > > > nanosleep({1, 0}, NULL) = 0
>> > > > fstat(3, {st_mode=S_IFREG|0644, st_size=192543105, ...}) = 0
>> > > > nanosleep({1, 0}, NULL) = 0
>> > > > fstat(3, {st_mode=S_IFREG|0644, st_size=192543105, ...}) = 0
>> > > > nanosleep({1, 0}, NULL) = 0
>> > > > fstat(3, {st_mode=S_IFREG|0644, st_size=192543105, ...}) = 0
>> > > > nanosleep({1, 0}, NULL) = 0
>> > > > fstat(3, {st_mode=S_IFREG|0644, st_size=192544549, ...}) = 0
>> > > > read(3, " Data … -"..., 8192) = 1444
>> > > > read(3, " Data.. "..., 8192) = 720
>> > > > read(3, "", 8192) = 0
>> > > > fstat(3, {st_mode=S_IFREG|0644, st_size=192544789, ...}) = 0
>> > > > write(1, “DATA…..” ) = 2164
>> > > > write(2, "tail: ", 6tail: ) = 6
>> > > > write(2, "/mnt/log/master/syslog: file tru"...,
>> 38/mnt/log/master/syslog:
>> > > > file truncated) = 38
>> > > > As the above strace log shows, tail has read 1444+720=2164 bytes,
>> > > > but fstat tells "tail" 192544789 - 192543105 = 1684, which is less
>> > > > than 2164, so it leads to the "tail" application reporting "file
>> > > > truncated".
>> > > > And if we turn off the "write-behind" feature, the issue is not
>> > > > reproduced any more.
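The coreutils check quoted above (stats.st_size < f[i].size) can be modelled directly: tail flags truncation whenever a stat size lags behind its own read offset, so a stale cached size from a caching layer is indistinguishable from a real truncate. A toy model using the numbers from the strace log, not GlusterFS code:

```python
def tail_sees_truncation(stat_size, read_offset):
    """Mirror of coreutils tail: 'truncated' when fstat size < bytes consumed."""
    return stat_size < read_offset

CHECKPOINT = 192543105  # size tail had recorded before the last reads

# tail consumed 2164 bytes past the checkpoint, but the stale fstat result
# (192544789) accounts for fewer of them: spurious "file truncated".
stale = tail_sees_truncation(192544789, CHECKPOINT + 2164)

# A stat that reflects every completed write raises no alarm.
fresh = tail_sees_truncation(CHECKPOINT + 2164, CHECKPOINT + 2164)
```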
>> > >
>> > > That seems strange. There are no writes happening on the fd/inode
>> through
>> > > which tail is reading/stating from. So, it seems strange that
>> write-behind
>> > > is involved here. I suspect whether any of
>> md-cache/read-ahead/io-cache is
>> > > causing the issue. Can you,
>> > >
>> > > 1. Turn off md-cache, read-ahead, io-cache xlators
>> > > 2. mount glusterfs with --attribute-timeout=0
>> > > 3. set write-behind on
>> > >
>> > > and rerun the tests? If you don't hit the issue, you can experiment by
>> > > turning on/off the md-cache, read-ahead and io-cache translators and see
>> > > what is the minimal set of xlators that needs to be turned off to not
>> > > hit the issue (with write-behind on)?
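The three steps above correspond roughly to the following commands; a sketch only, with the volume name, server, and mount point as placeholders (performance.stat-prefetch is the option that toggles md-cache):

```shell
# 1. turn off the client-side caching translators
gluster volume set testvol performance.stat-prefetch off   # md-cache
gluster volume set testvol performance.read-ahead off
gluster volume set testvol performance.io-cache off
# 3. keep write-behind on, since that is the variable under test
gluster volume set testvol performance.write-behind on

# 2. remount with kernel attribute caching disabled
mount -t glusterfs -o attribute-timeout=0 server:/testvol /mnt/log
```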
>> > >
>> > > regards,
>> > > Raghavendra
>> > >
>> > > > So we think it may be related to a cache-consistency issue arising
>> > > > from performance considerations, but we still have a concern:
>> > > > the syslog file is used only in "append" mode, so the size of the
>> > > > file shouldn't shrink; when a client reads the file, why can't
>> > > > "fstat" return the real size matching the cache?
>> > > > From our current investigation, we suspect that the current
>> > > > implementation of "glusterfs" has a bug in "fstat" when caching is
>> > > > on.
>> > > > Your comments would be highly appreciated!
>> > > > Thanks & Best Regards
>> > > > George
>> > > >
>
> Alternatively let me know if this has been tried and discarded as a bad
> idea ...
>
> thanks,
>
> --
> Lindsay Mathieson
>
On Thu, Sep 29, 2016 at 11:11 AM, Raghavendra G
wrote:
>
>
> On Wed, Sep 28, 2016 at 7:37 PM, Shyam wrote:
>
>> On 09/27/2016 04:02 AM, Poornima Gurusiddaiah wrote:
>>
>>> W.r.t Samba consuming this, it requires a great deal of code change in
>>> Sa
are the other pitfalls?
>
> [1] http://www.gluster.org/pipermail/gluster-devel/2016-August/050622.html
>
> [2] http://review.gluster.org/#/c/14784/
>
>
>>
>> Regards,
>> Poornima
>>
default write 43 MB/sec.
> > >> ( I have modified Ben England's gfapi_perf_test.c for this. Attached
> the
> > >> same
> > >> for reference )
> > >>
> > >> We would like to hear how samba/ nfs-ganesha who are libgfapi users
> can
> > >> make
> > >> use of this.
> > >> Please provide your comments. Refer attached results.
> > >>
> > >> Zero copy in write patch: http://review.gluster.org/#/c/14784/
> > >>
> > >> Thanks,
> > >> Saravana
> >
> >
to
>>> depend
>>> itself on those new entries which
>>> arrived
>>> after it arrived (i.e, those that have a
>>> liability generation higher than itself)
>>> */
>>>
>>>
>>> So, if a single thread is doing writes on two different fds, generation
>>> numbers are sufficient to enforce the relative ordering. If writes are from
>>> two different threads/processes, I think write-behind is not obligated to
>>> maintain their order. Comments?
>>>
>>> [1] http://review.gluster.org/#/c/15380/
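The generation-number argument can be sketched as a toy model (not the actual write-behind code): each queued write is stamped with a monotonically increasing generation on arrival, and flushing always drains the lowest generation first, so two writes issued in order by one thread retire in order even when they target different fds:

```python
from itertools import count

class ToyWriteBehind:
    def __init__(self):
        self._gen = count()
        self._queue = []     # (generation, fd, data), not yet flushed
        self.flushed = []    # generations in retirement order

    def write(self, fd, data):
        # Arrival order fixes the generation, regardless of the fd used.
        self._queue.append((next(self._gen), fd, data))

    def flush_one(self):
        self._queue.sort(key=lambda w: w[0])  # lowest generation first
        gen, _fd, _data = self._queue.pop(0)
        self.flushed.append(gen)

wb = ToyWriteBehind()
wb.write(3, b"first")    # generation 0, fd 3
wb.write(4, b"second")   # generation 1, fd 4: ordered after generation 0
wb.flush_one()
wb.flush_one()
```

Cross-fd ordering for writes from *different* threads would need extra machinery, which matches the suggestion that write-behind is not obligated to provide it.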
>>>
>>> regards,
>>> Raghavendra
>>>>>
>>>>>>
>>>>>>In this current approach problem is
>>>>>>
>>>>>>1) it will call heal function(dht_dir_xattr_heal) for every
>>>>>> directory lookup without comparing xattr.
>>>>>>
n excel sheet at regular intervals of time
>> - Plotting the graph with that data (in progress)
>> - a short demo
>>
>> Thanks & Regards,
>>
>> Arthy
>>
>>
>>
>>
On Fri, Aug 12, 2016 at 10:29 AM, Raghavendra G
wrote:
>
>
> On Thu, Aug 11, 2016 at 9:31 AM, Raghavendra G
> wrote:
>
>> Couple of more areas to explore:
>> 1. purging kernel dentry and/or page-cache too. Because of patch [1],
>> upcall notification can res
On Thu, Aug 11, 2016 at 9:31 AM, Raghavendra G
wrote:
> Couple of more areas to explore:
> 1. purging kernel dentry and/or page-cache too. Because of patch [1],
> upcall notification can result in a call to inode_invalidate, which results
> in an "invalidate" notification
some notifications. The current approach of purging the cache of
> > all inodes might not be optimal, as it can roll back the benefits of caching.
> > Also, please note that network disconnects are not rare events.
> >
> > regards,
> > Raghavendra
--
Raghavendra G
I've filed a bug on the issue at:
https://bugzilla.redhat.com/show_bug.cgi?id=1360689
On Fri, Jul 15, 2016 at 12:44 PM, Raghavendra G
wrote:
> Hi Patrick,
>
> Is it possible to test out whether the patch fixes your issue? There is
> nothing like validation from user experie
manager
(librdmacm). So, a TCP/IP-based address or pathway is necessary even while
using RDMA.
Hence, it is overkill to use RDMA for fetching volfiles.
>
> We should just remove it from the docs.
>
> Thanks,
> Raghavendra Talur
>
>
>>
>&
--
Raghavendra G
> > Let people do what they like most among these, and let us also recognize
> > them for all their contributions. Let us celebrate their work in each
> > monthly newsletter.
>
> Good idea.
--
Raghavendra G
in the long term, but (without making any promises)
> they might be sufficient and achievable in the near term. Thoughts?
--
Raghavendra G
On Wed, Jun 1, 2016 at 12:50 PM, Xavier Hernandez
wrote:
> Hi,
>
> On 01/06/16 08:53, Raghavendra Gowdappa wrote:
>
>>
>>
>> ----- Original Message -----
>>
>>> From: "Xavier Hernandez"
>>> To: "Pranith Kumar Karampuri"
I've filed a bug at [1] to track issue in afr.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1341429
On Tue, May 31, 2016 at 2:17 PM, Raghavendra G
wrote:
>
>
> On Tue, May 31, 2016 at 12:37 PM, Xavier Hernandez
> wrote:
>
>> Hi,
>>
>> On 31/0
>>>>> good/readable? Or does it wind to all subvols irrespective of whether a
>>>>> subvol is good/bad? In the latter case, what if
>>>>>    a. mkdir succeeds on the non-readable subvolume
>>>>>    b. mkdir fails on the readable subvolume
>>>>>
>>>>> What result is reported to the higher layers in the above scenario? If
>>>>> mkdir failed as a whole, is it cleaned up on the non-readable subvolume
>>>>> where it succeeded?
>>>>>
>>>>> I am interested in this case as the dht pre-op check relies on layout
>>>>> xattrs, and I assume layout xattrs in particular (and all xattrs in
>>>>> general) are guaranteed to be correct only on a readable subvolume of
>>>>> afr. So, in essence, we shouldn't be winding down mkdir on non-readable
>>>>> subvols, as whatever decision the brick makes as part of the pre-op
>>>>> check is inherently flawed.
>>>>>
>>>>> regards,
>>>>> Raghavendra
>>>>>
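The proposal above, winding mkdir only to readable subvolumes so the pre-op check never acts on possibly stale xattrs, can be sketched as follows (a toy model under that assumption; `wind_mkdir` and its parameters are hypothetical, not the afr implementation):

```python
def wind_mkdir(subvols, readable, mkdir_on):
    """Wind mkdir only to subvolumes currently marked readable.

    subvols:  list of subvolume names
    readable: set of subvolumes whose xattrs are trusted to be correct
    mkdir_on: callback performing the mkdir on one subvolume
    Returns a per-subvolume result map; non-readable copies are skipped
    so a pre-op decision is never based on a possibly-stale copy.
    """
    results = {}
    for sv in subvols:
        if sv in readable:
            results[sv] = mkdir_on(sv)
        else:
            results[sv] = "skipped"   # heal later, never create here now
    return results

r = wind_mkdir(["brick1", "brick2", "brick3"],
               readable={"brick1", "brick3"},
               mkdir_on=lambda sv: "ok")
print(r)   # brick2 is skipped; brick1 and brick3 get the mkdir
```

This sidesteps the cleanup question raised above: since mkdir is never attempted on a non-readable copy, there is nothing to roll back there when the readable copy fails.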
>>>> --
>>> Pranith
>>>
--
Raghavendra G
> directory structure created but no files in the directories.
> How can I fix the issue? I will try a rebalance, but I don't think it
> will write to this disperse set...
>
>
>
> On Sat, Apr 30, 2016 at 9:22 AM, Raghavendra G
> wrote:
> >
> >
> > On Fri, Apr 29, 2
suffer from this penalty). Other than that, there is
no significant harm unless disperse-56 is really running out of space.
regards,
Raghavendra
4
> glusterfs-client-xlators-3.7.9-1.el6.x86_64
> glusterfs-api-devel-3.7.9-1.el6.x86_64
> python-gluster-3.7.9-1.el6.noarch
>
>
--
Raghavendra G
g to stop.
>
> Thanks,
> Patrick
>
--
Raghavendra G
Seems like I missed adding rtalur/sakshi to cc list.
On Fri, Apr 29, 2016 at 5:25 PM, Raghavendra G
wrote:
> Raghavendra Talur reported another crash in dht_rename_lock_cbk (similar,
> though not exactly the same, to the backtrace presented here). I heard
> Sakshi is taking a look
--
Raghavendra G
On Fri, Mar 4, 2016 at 2:02 PM, Raghavendra G
wrote:
>
>
> On Thu, Mar 3, 2016 at 6:26 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Hi,
>>
>> Yes, with this patch we need not set conn->trans to NULL in
>> rpc_clnt_disable
t
use of rpc-clnt.
[1] http://review.gluster.org/#/c/13592
[2] http://review.gluster.org/#/c/1359
regards,
Raghavendra.
> Thanks and Regards,
> Kotresh H R
>
> ----- Original Message -----
> > From: "Soumya Koduri"
> > To: "Kotresh Hiremath Ravis
cores, it looks like we are trying to dereference a freed
> > > changelog_rpc_clnt_t (crpc) object in changelog_rpc_notify(). Strangely,
> > > this was not reported on the master branch.
> > >
> > > I tried debugging but couldn't find any possible suspects. I request
> you
> > > to take a look and let me know if [1] caused any regression.
> > >
> > > Thanks,
> > > Soumya
> > >
> > > [1] http://review.gluster.org/#/c/13507/
> > >
> >
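A common way to rule out this class of use-after-free is to take a reference on the object before handing it to the notification path, and drop it only after the callback has run, so the owner's unref can never free it underneath the callback. A minimal model of that pattern (illustrative Python, not GlusterFS's rpc-clnt code; the class and names are hypothetical):

```python
class RpcClient:
    """Toy refcounted client object; stands in for a refcounted C struct."""

    def __init__(self):
        self.refcount = 1       # the owner's reference
        self.freed = False      # stands in for free()
        self.connected = False

    def ref(self):
        assert not self.freed, "use-after-free: ref on freed object"
        self.refcount += 1
        return self

    def unref(self):
        assert not self.freed, "double free"
        self.refcount -= 1
        if self.refcount == 0:
            self.freed = True   # last reference gone: object is freed

def rpc_notify(client, event):
    # Safe only because the dispatcher holds its own reference.
    assert not client.freed, "use-after-free in notify callback"
    client.connected = (event == "CONNECT")

c = RpcClient()
c.ref()                    # dispatcher takes a ref before queuing the event
c.unref()                  # owner drops its ref; object must stay alive
rpc_notify(c, "CONNECT")   # callback still sees a live object
c.unref()                  # dispatcher's unref performs the final free
print(c.freed)             # → True
```

Without the dispatcher's `ref()`, the owner's `unref()` would free the object and the callback would hit exactly the freed-object dereference described above.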
--
Raghavendra G
o through the document and
> comment/analyze/suggest, to take the thoughts forward (either on the
> google doc itself or here on the devel list).
>
>
> Thanks,
> Susant