+1
On Wed, Jul 22, 2020 at 2:34 PM Ravishankar N
wrote:
> Hi,
>
> The gluster code base has some words and terminology (blacklist,
> whitelist, master, slave etc.) that can be considered hurtful/offensive
> to people in a global open source setting. Some of the words can be fixed
> trivially but
cynthia
>
>
>
> *From:* Amar Tumballi
> *Sent:* March 17, 2020 13:18
> *To:* Zhou, Cynthia (NSB - CN/Hangzhou)
> *Cc:* Kotresh Hiremath Ravishankar ; Gluster Devel <
> gluster-devel@gluster.org>
> *Subject:* Re: [Gluster-devel] could you help to check about a glusterfs
> issue seems to be related to ctime
>
> *From:* Zhou, Cynthia (NSB - CN/Hangzhou)
> *Sent:* March 12, 2020 12:53
> *To:* 'Kotresh Hiremath Ravishankar'
> *Cc:* 'Gluster Devel'
> *Subject:* RE: could you help to check about a glusterfs issue seems to
> be related to ctime
>
>
>
> From my local test onl
greater but that would open up the race when two clients are updating
the same file.
This would result in keeping an older time than the latest one. That requires
a code change, and I don't think it should be done.
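A minimal sketch of how one could observe the race from the outside (the mount
points /mnt/c1 and /mnt/c2 are hypothetical, standing in for two different
clients of the same volume):

# both clients update the same file at nearly the same time;
# with ctime enabled, each update carries its client's notion of time
touch /mnt/c1/file1 &
touch /mnt/c2/file1 &
wait
# whichever update lands last on the bricks wins, so the stored time
# can end up older than the latest client time
stat --format='mtime=%Y ctime=%Z' /mnt/c1/file1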
Thanks,
Kotresh
On Wed, Mar 11, 2020 at 3:02 PM Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
Hi Xavi,
On Tue, Jun 18, 2019 at 12:28 PM Xavi Hernandez wrote:
> Hi Kotresh,
>
> On Tue, Jun 18, 2019 at 8:33 AM Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Hi Xavi,
>>
>> Reply inline.
>>
>> On Mon, Jun 17, 2019 at
Hi Xavi,
Reply inline.
On Mon, Jun 17, 2019 at 5:38 PM Xavi Hernandez wrote:
> Hi Kotresh,
>
> On Mon, Jun 17, 2019 at 1:50 PM Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Hi All,
>>
>> The ctime feature is enabled by default from
s and they already had a signature. I don't know the
> reason for this. Maybe the client still keeps the fd open? I opened a bug for
> this:
> https://bugzilla.redhat.com/show_bug.cgi?id=1685023
>
> Regards
> David
>
> On Fri, Mar 1, 2019 at 18:29, Kotresh Hi
Interesting observation! But as discussed in the thread, the bitrot signing
process depends on a 2 min timeout (by default) after the last fd closes. It
doesn't have any correlation with the size of the file.
Did you happen to verify that the fd was still open for large files for
some reason?
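One way to check that on the brick node (a sketch; glusterfsd as the brick
process name is standard, the file name and volume name are assumptions):

# see whether any brick process still holds the file open
for pid in $(pgrep glusterfsd); do
    ls -l /proc/"$pid"/fd 2>/dev/null | grep file1
done
# the post-close wait before signing is also tunable (120s by default):
gluster volume set <VOLNAME> features.expiry-time 120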
On Fri, Mar
On Tue, Dec 4, 2018 at 10:02 PM Amar Tumballi wrote:
> Looks like that is correct, but that also is failing in another regression
> shard/zero-flag.t
>
It's not related to this, as it doesn't involve any code changes. The changes
are restricted to tests.
> On Tue, Dec 4, 2018 at 7:40 PM Shyam
Had forgotten to add Milind, CCing.
On Mon, Oct 8, 2018 at 11:41 AM Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
>
>
> On Fri, Oct 5, 2018 at 10:31 PM Shyam Ranganathan
> wrote:
>
>> On 10/05/2018 10:59 AM, Shyam Ranganathan wrote:
>> > On 10/04/20
On Fri, Oct 5, 2018 at 10:31 PM Shyam Ranganathan
wrote:
> On 10/05/2018 10:59 AM, Shyam Ranganathan wrote:
> > On 10/04/2018 11:33 AM, Shyam Ranganathan wrote:
> >> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:
> >>> RC1 would be around 24th of Sep. with final release tagging around 1st
>
On Thu, Oct 4, 2018 at 9:03 PM Shyam Ranganathan
wrote:
> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:
> > RC1 would be around 24th of Sep. with final release tagging around 1st
> > of Oct.
>
> RC1 now stands to be tagged tomorrow, and patches that are being
> targeted for a back port
On Thu, Sep 27, 2018 at 5:38 PM Kaleb S. KEITHLEY
wrote:
> On 9/26/18 8:28 PM, Shyam Ranganathan wrote:
> > Hi,
> >
> > With the introduction of default python 3 shebangs and the change in
> > configure.ac to correct these to py2 if the build is being attempted on
> > a machine that does not
On Tue, Sep 18, 2018 at 2:44 PM, Amar Tumballi wrote:
>
>
> On Tue, Sep 18, 2018 at 2:33 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> I have a different problem. clang is complaining on the 4.1 back port of
>> a patch which is merged
I have a different problem. clang is complaining on the 4.1 backport of a
patch which was merged in master before
clang-format was brought in. Is there a way I can get smoke +1 for 4.1, as it
won't be neat to have clang changes
in 4.1 and not in master for the same patch. It might further affect the
Hi Anuradha,
To enable the ctime (consistent time) feature, please enable the following two
options:
gluster vol set <VOLNAME> utime on
gluster vol set <VOLNAME> ctime on
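To verify they are in effect (a quick check via volume get):
gluster vol get <VOLNAME> utime
gluster vol get <VOLNAME> ctime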
Thanks,
Kotresh HR
On Fri, Sep 14, 2018 at 12:18 PM, Rafi Kavungal Chundattu Parambil <
rkavu...@redhat.com> wrote:
> Hi Anuradha,
>
> We have
In /etc/hosts, I think it is adding a different IP
On Mon, Aug 13, 2018 at 5:59 PM, Rafi Kavungal Chundattu Parambil <
rkavu...@redhat.com> wrote:
> This is so nice. I tried it and successfully created a test machine. It
> would be great if there is a provision to extend the lifetime of vm's
>
Hi Shyam/Atin,
I have posted the patch [1] for the geo-rep test case failures:
tests/00-geo-rep/georep-basic-dr-rsync.t
tests/00-geo-rep/georep-basic-dr-tarssh.t
tests/00-geo-rep/00-georep-verify-setup.t
Please include patch [1] while triggering tests.
The instrumentation patch [2] which
Hi Atin/Shyam
For the geo-rep test retrials, could you take this instrumentation patch [1]
and give it a run?
I have tried thrice on the patch, with brick mux enabled and without, but
couldn't hit the
geo-rep failure. Maybe it's some race and it's not happening with the
instrumentation patch.
[1]
Hi Du/Poornima,
I was analysing the bitrot and geo-rep failures, and I suspect there is a bug
in some perf xlator
that was one of the causes. I was seeing the following behaviour in a few runs.
1. Geo-rep synced data to the slave. It creates an empty file and then rsync
syncs the data.
But the test does "stat --format
Have attached it in the bug https://bugzilla.redhat.com/show_bug.cgi?id=1611635
On Thu, 2 Aug 2018, 22:21 Raghavendra Gowdappa, wrote:
>
>
> On Thu, Aug 2, 2018 at 5:48 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> I am facing different
] E [fuse-bridge.c:4382:fuse_first_lookup]
0-fuse: first lookup on root failed (Transport endpoint is not connected)
-
On Thu, Aug 2, 2018 at 5:35 PM, Nigel Babu wrote:
> On Thu, Aug 2, 2018 at 5:12 PM Kotresh Hiremath Ravishankar <
> khire...@r
On Thu, Aug 2, 2018 at 5:05 PM, Atin Mukherjee
wrote:
>
>
> On Thu, Aug 2, 2018 at 4:37 PM Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>>
>>
>> On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez
>> wrote:
>>
>>&
On Thu, Aug 2, 2018 at 4:50 PM, Amar Tumballi wrote:
>
>
> On Thu, Aug 2, 2018 at 4:37 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>>
>>
>> On Thu, Aug 2, 2018 at 3:49 PM, Xavi Hernandez
>> wrote:
>>
>>&
On Thu, Aug 2, 2018 at 11:43 AM, Xavi Hernandez
wrote:
> On Thu, Aug 2, 2018 at 6:14 AM Atin Mukherjee wrote:
>
>>
>>
>> On Tue, Jul 31, 2018 at 10:11 PM Atin Mukherjee
>> wrote:
>>
>>> I just went through the nightly regression report of brick mux runs and
>>> here's what I can summarize.
>>>
This will be very useful. Thank you.
On Mon, May 21, 2018 at 11:45 PM, Vijay Bellur wrote:
>
>
> On Mon, May 21, 2018 at 2:29 AM, Amar Tumballi
> wrote:
>
>> Hi all,
>>
>> As a push towards more flexibility to our developers, and options to run
>> more
Hi Shyam,
Rafi and I are proposing the consistent-time-across-replica feature for 4.1
https://github.com/gluster/glusterfs/issues/208
Thanks,
Kotresh H R
On Wed, Mar 21, 2018 at 2:05 PM, Ravishankar N
wrote:
>
>
> On 03/20/2018 07:07 PM, Shyam Ranganathan wrote:
>
>> On
On Mon, Jan 22, 2018 at 12:21 PM, Nigel Babu wrote:
> Hello folks,
>
> As you may have noticed, we've had a lot of centos6-regression failures
> lately. The geo-replication failures are the new ones which particularly
> concern me. These failures have nothing to do with the
Nigel,
Could you give me a machine where geo-rep is failing even with the bashrc fix,
so I can debug?
Thanks,
Kotresh HR
On Fri, Jan 12, 2018 at 3:54 PM, Amar Tumballi wrote:
> Can we have a separate test case to validate for all the basic necessity
> for whole test suite to pass?
Hi Amudhan,
Please go through the following, which should clarify the upgrade concerns
from DHT to RIO in 4.0.
1. RIO would not deprecate DHT. Both DHT and RIO would co-exist.
2. DHT volumes would not be migrated to RIO. DHT volumes would still be
using DHT code.
3. The new volume
Answers inline.
On Sat, Jul 22, 2017 at 1:36 AM, Shyam wrote:
> Hi,
>
> Prepare for a lengthy mail, but needed for the 3.12 release branching, so
> here is a key to aid the impatient,
>
> Key:
> 1) If you asked for an exception to a feature (meaning delayed backport to
>
Hi,
The following patches are targeted for 3.12. They have undergone a few reviews
but are yet
to be merged. Please take some time to review them, and merge if they look good.
https://review.gluster.org/#/c/17744/
https://review.gluster.org/#/c/17785/
--
Thanks and Regards,
Kotresh H R
Hi,
The above-mentioned distribute test case generated a core,
which is not related to the patch.
https://build.gluster.org/job/centos6-regression/5218/consoleFull
Here is the backtrace.
#0 0x7f3222fbaebf in dht_build_root_loc (inode=0xa800,
loc=0x7f3220b10e50) at
rnative available in the majority of
> distributions, this is the only sensible approach. Much of the code in
> contrib/ is not maintained at all. We should prevent this from happening
> with new code and assigning an owner/maintainer and peer(s) just like
> for ot
Sure, I can do that.
On Tue, Jun 27, 2017 at 12:28 PM, Amar Tumballi <atumb...@redhat.com> wrote:
>
>
> On Tue, Jun 27, 2017 at 12:25 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Hi,
>>
>> We were looking for faster non-crypt
Hi,
We were looking for a faster non-cryptographic hash to be used for the
gfid2path infra [1].
The initial testing was done with the md5 128-bit checksum, which is a slow,
cryptographic hash,
and using it makes the software non-compliant with FIPS [2].
On searching online a bit, we found that xxhash [3] seems to fit the bill.
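For a rough feel of the speed difference, one can compare the two from the
command line (a sketch; assumes the xxhash CLI 'xxhsum' is installed and
/var/tmp/bigfile is any large file):

time md5sum /var/tmp/bigfile   # cryptographic, slow
time xxhsum /var/tmp/bigfile   # non-cryptographic, much faster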
aintainers meeting also, no one volunteered to
> fix encryption translator. So, I am fine with taking it out for now. Anyone
> has objections?
>
> -Amar
>
>
>> On Wed, Jun 21, 2017 at 10:41 PM, Kotresh Hiremath Ravishankar <
>> khire...@redhat.com> wrote:
>>
Hi
./tests/encryption/crypt.t fails regression on
https://build.gluster.org/job/centos6-regression/5112/consoleFull
with a core. It doesn't seem to be related to the patch. Can somebody take
a look at it? Following is the backtrace.
Program terminated with signal SIGSEGV, Segmentation fault.
#0
Hi Shyam,
Following RFE is merged in master with github issue and would go in 3.11
GitHub issue: https://github.com/gluster/glusterfs/issues/191
Patch: https://review.gluster.org/#/c/15472/
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1443373
Thanks and Regards,
Kotresh H R
this is an RFE, it would be available from 3.11 and would not
be backported to 3.10.x.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Serkan Çoban" <cobanser...@gmail.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: &quo
Hi
https://github.com/gluster/glusterfs/issues/188 is merged in master
and needs to go in 3.11
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Kaushal M"
> To: "Shyam"
> Cc: gluster-us...@gluster.org, "Gluster Devel"
impossible to upgrade to the
latest version, at least 3.7.20 would do. It has minimal
conflicts. I can help you out with that.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> To: "Kotresh Hiremath Ravishankar&
..@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "gluster-users"
> <gluster-us...@gluster.org>, "Kotresh Hiremath
> Ravishankar" <khire...@redhat.com>
> Sent: Monday, April 24, 2017 11:30:57 AM
> Subject:
Hi Pranith,
Please add the following bitrot feature to the 3.9 roadmap page. It is merged.
feature/bitrot: Ondemand scrub option for bitrot
(http://review.gluster.org/#/c/15111/)
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Poornima Gurusiddaiah"
Hi,
We would like to propose the following talk.
Title: Gluster Geo-replication
Theme: Stability and Performance
We plan to cover the following things.
- Introduction
- New Features
- Stability and Usability Improvements
- Performance Improvements.
-
Hi,
The above-mentioned AFR test has failed and is not related to the patch below.
https://build.gluster.org/job/rackspace-regression-2GB-triggered/22485/consoleFull
Can someone from the AFR team look into it?
Thanks and Regards,
Kotresh H R
Hi,
One more AFR test has failed for the patch http://review.gluster.org/14903/
and is not related to the patch. Can someone from the AFR team look into it?
https://build.gluster.org/job/rackspace-regression-2GB-triggered/22357/console
Thanks and Regards,
Kotresh H R
luster-devel@gluster.org>, "Kotresh Hiremath
> Ravishankar" <khire...@redhat.com>, "Rajesh
> Joseph" <rjos...@redhat.com>, "Ravishankar N" <ravishan...@redhat.com>,
> "Ashish Pandey" <aspan...@redhat.com>
> Sent: Wednesday
Hi,
The above-mentioned test has failed for the patch
http://review.gluster.org/#/c/14927/1
and is not related to my patch. Can someone from the AFR team look into it?
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/18132/console
Thanks and Regards,
Kotresh H R
Hi Prasanna,
The 'Fix' button is visible. Maybe you are missing something; please check.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Prasanna Kalever"
> To: "Nigel Babu"
> Cc: "gluster-infra" ,
Hi
./tests/bugs/replicate/bug-977797.t fails in the following run.
https://build.gluster.org/job/rackspace-regression-2GB-triggered/20473/console
It succeeds on my local machine; it could be spurious.
Could someone from the replication team look into it?
Thanks and Regards,
Kotresh H R
Hi Pranith,
You had a concern about consuming I/O threads when bit-rot uses the rchecksum
interface for
signing, normal scrubbing and on-demand scrubbing with tiering.
http://review.gluster.org/#/c/13833/5/xlators/storage/posix/src/posix.c
As discussed over the comments, the concern is valid and
Point noted, will keep you informed from next time!
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Kaushal M" <kshlms...@gmail.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: "Aravinda" <avish...@redh
m>
> To: "Aravinda" <avish...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>, maintain...@gluster.org,
> "Kotresh Hiremath Ravishankar"
> <khire...@redhat.com>
> Sent: Thursday, March 31, 2016 6:56:18 PM
> Subject:
Inline...
- Original Message -
> From: "Aravinda" <avish...@redhat.com>
> To: "Kaushal M" <kshlms...@gmail.com>, "Gluster Devel"
> <gluster-devel@gluster.org>, maintain...@gluster.org, "Kotresh
> Hiremath Ravishankar&q
Hi,
trash.t is failing on the 3.7 branch for the patch below.
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15153/console
Could someone look into it?
Thanks and Regards,
Kotresh H R
Congrats and all the best Aravinda!
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Venky Shankar"
> To: "Gluster Devel"
> Cc: maintain...@gluster.org
> Sent: Tuesday, March 8, 2016 10:49:46 AM
> Subject: [Gluster-devel]
Hi All,
The regression run generated a core for the patch below.
https://build.gluster.org/job/rackspace-regression-2GB-triggered/18859/console
From the initial analysis, it's a tiered setup where the ec sub-volume is the
cold tier and afr is the hot tier.
The crash has happened
Hi All,
Here is the idea: we can use geo-replication as a backup solution using gluster
volume
snapshots on the slave side. One of the drawbacks of geo-replication is that
it's a
continuous asynchronous replication and would not help in getting the last
week's or
yesterday's data. So if we use
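The slave-side snapshots themselves would just use the standard snapshot CLI
(a sketch, assuming a slave volume named slavevol):

# taken periodically on the slave cluster, e.g. from cron
gluster snapshot create nightly_slavevol slavevol
# a past point in time can later be brought back with
gluster snapshot restore <snapname>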
Hi,
Yes, with this patch we need not set conn->trans to NULL in rpc_clnt_disable
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Soumya Koduri" <skod...@redhat.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>, &q
://review.gluster.org/#/c/13592/
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> To: "Soumya Koduri" <skod...@redhat.com>
> Cc: "Raghavendra G" <raghaven...@glust
ll. And also, the cleanup happens in the failure path.
So the memory leak can happen if there is no attempt at an rpc invocation after
DISCONNECT.
It will be cleaned up otherwise.
[1] http://review.gluster.org/#/c/13507/
Thanks and Regards,
Kotresh H R
- Original Message -----
> From: "Kotresh
Hi Soumya,
I just tested that it is reproducible only with your patch, both on master and
the 3.7.6 branch.
The geo-rep test cases are marked bad in master, so it's not hit there. rpc was
introduced
in the changelog xlator to communicate with applications via libgfchangelog.
Venky/me will check
why is
I will take care of putting up the patch upstream.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Venky Shankar" <vshan...@redhat.com>
> To: "FNU Raghavendra Manjunath" <rab...@redhat.com>
> Cc: "Gluster Devel" <gluste
The crash reported at the above link is the same as bug 1221629.
But the stack trace mentioned below looks to be from a different regression run?
Can I get the link for that one?
It is strange that the bt shows 'rpcsvc_record_build_header' calling
'gf_history_changelog',
which it does not! Am I missing something?
el...@redhat.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>, "Manikandan
> Selvaganesh" <mselv...@redhat.com>
> Cc: gluster-devel@gluster.org, "cyril peponnet"
> <cyril.pepon...@alcatel-lucent.com>
> Sent: Tuesday, February 9
ique causes,
> wouldn't it?
>
> On 02/09/2016 08:27 PM, Kotresh Hiremath Ravishankar wrote:
> > Hi,
> >
> > This crash can't be same as BZ 1221629. The crash in the BZ 1221629
> > is with the rpc introduced in changelog in 3.7 along with bitrot.
> > Co
Hi,
This bug is already tracked in BZ 1221629.
I will start working on this and will update once it is fixed.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Manikandan Selvaganesh"
> To: "Emmanuel Dreyfus"
> Cc:
I will have a look at it!
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Venky Shankar" <vshan...@redhat.com>
> To: "Sakshi Bansal" <saban...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "
]
[client-rpc-fops.c:2664:client3_3_readdirp_cbk] 0-patchy-client-0: remote
operation failed [Invalid argument]
I talked to Soumya (nfs team) and she will be looking into it.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Kotresh Hiremath Ravishankar" <khi
Hi Atin,
The same test caused a glusterd crash for my patch as well.
https://build.gluster.org/job/rackspace-regression-2GB-triggered/17289/consoleFull
Core was generated by `glusterd'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x7f8a77b7e223 in dict_lookup_common
Hi Atin,
Here is the bug.
https://bugzilla.redhat.com/show_bug.cgi?id=1296004
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Atin Mukherjee" <amukh...@redhat.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: &qu
Geo-rep requirements inline.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Pranith Kumar Karampuri"
> To: "Vijay Bellur" , "Jeff Darcy" ,
> "Raghavendra Gowdappa"
> , "Ira Cooper"
Hi,
I am consistently getting the following errors for my patch.
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12193/console
Building remotely on nbslave7i.cloud.gluster.org (netbsd7_regression) in
workspace
said deleting changelogs is not recommended. If you don't
have use cases
of the above kind, you can delete the changelogs.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "John Sincock [FLCPTY]" <j.sinc...@fugro.com>
> To: "Kotresh Hiremath Ravishankar&
Hi John,
The changelog files are generated every 15 secs, recording the changes that
happened to the filesystem
within that span. So every 15 secs, once the new changelog file is generated,
it is ready
to be consumed by glusterfind or any other consumer. The 15 sec time period is
tunable,
e.g.,
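(a sketch, assuming a volume named testvol and a hypothetical brick path):

# raise the rollover interval from the 15 sec default to 30 sec
gluster volume set testvol changelog.rollover-time 30
# the generated changelog files live under the brick's backend path
ls /bricks/brick1/.glusterfs/changelogs/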
Thanks Michael!
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Michael Scherer" <msche...@redhat.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: "Krutika Dhananjay" <kdhan...@redhat.com>, "
If it takes some time, we should consider moving all geo-rep test cases under
bad tests
till then.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Michael Scherer" <msche...@redhat.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redh
Thank you :) and also please check that the script I had given passes on all
machines.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Michael Scherer" <msche...@redhat.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc:
l Message -
> From: "Michael Scherer" <msche...@redhat.com>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> Cc: "Krutika Dhananjay" <kdhan...@redhat.com>, "Atin Mukherjee"
> <amukh...@redhat.com>, "Gaurav Garg&
As of now it is supported only on Linux; it has known issues with other
platforms,
such as NetBSD...
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Michael Scherer" <msche...@redhat.com>
> To: "Kotresh Hiremath Ravishankar" <khire
;Krutika Dhananjay" <kdhan...@redhat.com>
> To: "Atin Mukherjee" <amukh...@redhat.com>
> Cc: "Gluster Devel" <gluster-devel@gluster.org>, "Gaurav Garg"
> <gg...@redhat.com>, "Aravinda" <avish...@redhat.com>,
> &quo
er version.
---
#!/bin/bash
# open a quiet, key-based ssh session; ControlMaster lets later calls
# reuse the same connection
function SSHM()
{
    ssh -q \
        -oPasswordAuthentication=no \
        -oStrictHostKeyChecking=no \
        -oControlMaster=yes \
        "$@";
}
# build the command line to be run on the slave
function cmd_slave()
{
    local cmd_line;
    cmd_line=$(cat <<EOF
Hi DHT Team and Others,
Changelog is a server-side translator that sits above POSIX and records FOPs.
Hence, the order of operations is true only for that brick, and the order
of operations is lost across bricks.
e.g., (f1 hashes to brick1 and f2 to brick2):
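A made-up interleaving to illustrate (schematic, not from a real run):

brick1 changelog          brick2 changelog
CREATE f1                 CREATE f2
DATA   f1                 DATA   f2

Each brick's log preserves only its local order, so whether f1 was created
before or after f2 cannot be recovered by merging the two logs.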
Hi Aravinda,
I used it yesterday. It greatly simplifies the geo-rep setup.
It would be great if it were enhanced to troubleshoot what's
wrong in an already corrupted setup.
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Aravinda"
> To: "Gluster Devel"
Original Message -
> From: "Emmanuel Dreyfus" <m...@netbsd.org>
> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>, "Gluster Devel"
> <gluster-devel@gluster.org>
> Cc: "Aravinda" <avish...@redhat.com&
Hi Emmanuel and others,
Geo-rep has a few issues that need to be addressed for it to work with NetBSD.
The following bug is raised to track the same.
https://bugzilla.redhat.com/show_bug.cgi?id=1257847
So till these issues are fixed, I think we should disable it only on
NetBSD and let it run on Linux.
Dreyfus m...@netbsd.org
To: Kotresh Hiremath Ravishankar khire...@redhat.com, Avra Sengupta
aseng...@redhat.com
Cc: gluster-infra gluster-in...@gluster.org, Gluster Devel
gluster-devel@gluster.org
Sent: Wednesday, August 19, 2015 12:28:18 PM
Subject: Re: [Gluster-devel] NetBSD regression
-
From: Kotresh Hiremath Ravishankar khire...@redhat.com
To: Avra Sengupta aseng...@redhat.com
Cc: gluster-infra gluster-in...@gluster.org, Gluster Devel
gluster-devel@gluster.org
Sent: Tuesday, August 18, 2015 11:50:00 AM
Subject: Re: [Gluster-devel] NetBSD regression failures
Yes
Yes, it makes sense to move both geo-rep tests to bad tests for now, till
the issue gets fixed on NetBSD. I am looking into the NetBSD failures.
Thanks and Regards,
Kotresh H R
- Original Message -
From: Avra Sengupta aseng...@redhat.com
To: Atin Mukherjee amukh...@redhat.com, Gluster
Thanks Emmanuel, I could not look into it as I was out of station.
I will debug it today.
Thanks and Regards,
Kotresh H R
- Original Message -
From: Emmanuel Dreyfus m...@netbsd.org
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent
for other netbsd slave machines when those are brought
up.
Thanks and Regards,
Kotresh H R
- Original Message -
From: Kotresh Hiremath Ravishankar khire...@redhat.com
To: Susant Palai spa...@redhat.com, Emmanuel Dreyfus m...@netbsd.org
Cc: Gluster Devel gluster-devel@gluster.org
Sent
Thanks Emmanuel.
Thanks and Regards,
Kotresh H R
- Original Message -
From: Emmanuel Dreyfus m...@netbsd.org
To: Kotresh Hiremath Ravishankar khire...@redhat.com, Gluster Devel
gluster-devel@gluster.org
Sent: Sunday, July 5, 2015 12:52:23 AM
Subject: Re: [Gluster-devel] NetBSD
Hi
NetBSD regressions are not initializing because of the following error,
consistently,
across multiple re-triggers.
I see the same error for quite a few patches.
http://review.gluster.org/#/c/11443/
Building remotely on nbslave72.cloud.gluster.org (netbsd7_regression) in
workspace
Comments inline.
Thanks and Regards,
Kotresh H R
- Original Message -
From: Susant Palai spa...@redhat.com
To: Sachin Pandit span...@redhat.com
Cc: Kotresh Hiremath Ravishankar khire...@redhat.com, Gluster Devel
gluster-devel@gluster.org
Sent: Thursday, July 2, 2015 12:35:08 PM
logs to new logging framework.
Thanks and Regards,
Kotresh H R
- Original Message -
From: Kotresh Hiremath Ravishankar khire...@redhat.com
To: Gluster Devel gluster-devel@gluster.org
Sent: Sunday, June 28, 2015 12:01:22 PM
Subject: [Gluster-devel] Build and Regression failure in master
Message -
From: Atin Mukherjee atin.mukherje...@gmail.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Sunday, June 28, 2015 12:56:21 PM
Subject: Re: [Gluster-devel] Build and Regression failure in master branch!
-Atin
Sent from one
Ok, Thanks. I have re-triggered it.
Thanks and Regards,
Kotresh H R
- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: Kotresh Hiremath Ravishankar khire...@redhat.com, Gluster Devel
gluster-devel@gluster.org
Sent: Thursday, June 25, 2015 11:55:22 AM
Subject
Hi All,
The above-mentioned test case failed for me and is not related to the patch.
Could someone look into it?
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11267/consoleFull
Thanks and Regards,
Kotresh H R
Hi,
I see the above test case failing for my patch; the failure is not related to
the patch.
Could someone from the AFR team look into it?
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11332/consoleFull
Thanks and Regards,
Kotresh H R