On Wed, Jul 22, 2020 at 2:34 PM Ravishankar N
wrote:
> Hi,
>
> The gluster code base has some words and terminology (blacklist,
> whitelist, master, slave etc.) that can be considered hurtful/offensive
> to people in a global open source setting. Some of the words can be fixed
> trivially but the Geo
Apologies for spamming. I hadn't seen the mail Rinku sent about the Test
day before sending mine.
On Wed, Jun 24, 2020 at 5:09 PM Sanju Rakonde wrote:
> Hi glusterfs community,
>
> We are planning a Test day on 29th Jun 2020, mainly to focus on upgrade
> testing to release-8.0RC.
Hi glusterfs community,
We are planning a Test day on 29th Jun 2020, mainly to focus on upgrade
testing to release-8.0RC.
Also, we are planning to automate the upgrade test flow. @Prajith Kesava
Prasad will be automating the rolling upgrade flow using
Ansible. We need to capture the tests which have
680ee91d3]
> >> -->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef]
> >> -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-:
> >> received signum (1), shutting down
> >> [2019-09-25 10:53:37.872399] I
> >> [socket.c:3754:socket_s
Hi, the below errors indicate that the brick process failed to start. Please
attach the brick log.
[glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a
fresh brick process for brick /mnt/p01-d01/glusterv01
[2019-09-25 05:17:26.722717] E [MSGID: 106005]
[glusterd-utils.c:6317:glusterd_b
Hello Barak,
It's great that you could resolve the issues. I was searching for how to
resolve the "rpcgen" issue; usually ./configure --without-libtirpc works. I will
try to help you with your other issues.
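In case it helps others hitting the same build error, here is a minimal sketch of that workaround. It assumes a glusterfs source checkout in ./glusterfs (the helper name build_without_tirpc is ours, not from the thread); if no checkout is present it only prints the steps.

```shell
# Sketch of the rpcgen workaround discussed above. Assumes a glusterfs
# source tree in ./glusterfs; prints the steps if none is present.
build_without_tirpc() {
    if [ -d glusterfs ]; then
        # configure without the libtirpc dependency that pulls in rpcgen
        (cd glusterfs && ./autogen.sh && ./configure --without-libtirpc && make)
    else
        echo "no ./glusterfs checkout; steps: ./autogen.sh && ./configure --without-libtirpc && make"
    fi
}
build_without_tirpc
```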
On Tue, Jul 23, 2019 at 3:37 PM Barak Sason wrote:
> Hello Sanju,
>
> I greatly appreciate
I apologize for the wrong mail. This .t failed only for one patch and I
don't think it is spurious. Closing this bug as not a bug.
On Thu, May 23, 2019 at 4:04 PM Sanju Rakonde wrote:
> I see a lot of patches are failing regressions due to the .t mentioned in
> the subject line. I
I see a lot of patches are failing regressions due to the .t mentioned in
the subject line. I've filed a bug[1] for the same.
https://bugzilla.redhat.com/show_bug.cgi?id=1713284
--
Thanks,
Sanju
___
Community Meeting Calendar:
APAC Schedule -
Every 2n
7:28 PM, FNU Raghavendra Manjunath wrote:
>
>
> I am working on another uss issue, i.e. the occasional failure of uss.t due
> to delays in the brick-mux regression. Rafi, can you please look into this?
>
> Regards,
> Raghavendra
>
> On Thu, May 16, 2019 at 9:48 AM Sanju
In most of the regression jobs ./tests/bugs/snapshot/bug-1399598-uss-with-ssl.t
is dumping core, hence the regression is failing for many patches.
Rafi/Raghavendra, can you please look into this issue?
--
Thanks,
Sanju
> I see the failures occurring on just one of the builders (builder206). I'm
> taking it back offline for now.
>
> On Tue, May 7, 2019 at 9:42 PM Michael Scherer
> wrote:
>
>> Le mardi 07 mai 2019 à 20:04 +0530, Sanju Rakonde a écrit :
>> > Looks like is_nfs_expo
Looks like is_nfs_export_available started failing again in recent
centos-regressions.
Michael, can you please check?
On Wed, Apr 24, 2019 at 5:30 PM Yaniv Kaul wrote:
>
>
> On Tue, Apr 23, 2019 at 5:15 PM Michael Scherer
> wrote:
>
>> Le lundi 22 avril 2019 à 22:57 +0530, Atin Mukherjee a écr
>>
>> These are the initial steps to create and start the volume. Why the
>> trusted.glusterfs.volume-id extended attribute is absent is not clear. The
>> analysis in [1] had errors of ENOENT (i.e. the export directory itself was
>> absent).
>> I suspect this to be bec
Hi Raghavendra,
./tests/basic/uss.t is timing out in the release-6 branch consistently. One
such instance is https://review.gluster.org/#/c/glusterfs/+/22641/. Can you
please look into this?
--
Thanks,
Sanju
___
Gluster-devel mailing list
Gluster-devel@glu
rom /lib64/libpthread.so.0
>
> #1 0x7f9eeb224545 in gf_timer_proc (data=0x1808580) at timer.c:164
>
> #2 0x7f9ee9fce5da in start_thread () from /lib64/libpthread.so.0
>
> #3 0x7f9ee98a4eaf in clone () from /lib64/libc.so.6
>
>
>
> Thread 1 (Thread 0x7f9eeb707780 (LW
Can you please capture the output of "pstack $(pidof glusterd)" and send it to
us? We need to capture this information while glusterd is stuck.
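For completeness, a hedged sketch of capturing that information (the helper name capture_stack is ours, not from the thread; pstack and gdb batch mode are standard tools):

```shell
# capture_stack: dump all thread stacks of a running process to a file,
# preferring pstack and falling back to gdb in batch mode.
capture_stack() {
    pid="$1"
    out="glusterd-stack-$pid.txt"
    if command -v pstack >/dev/null 2>&1; then
        pstack "$pid" > "$out" 2>&1
    elif command -v gdb >/dev/null 2>&1; then
        # attach, dump every thread's backtrace, detach
        gdb -batch -p "$pid" -ex "thread apply all bt" > "$out" 2>&1
    else
        echo "install pstack or gdb first" > "$out"
    fi
    printf '%s\n' "$out"
}
# On the affected node: capture_stack "$(pidof glusterd)"
```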
On Mon, Apr 8, 2019 at 8:05 AM Zhou, Cynthia (NSB - CN/Hangzhou) <
cynthia.z...@nokia-sbell.com> wrote:
> Hi glusterfs experts,
>
> Good day!
>
> In my test
Abhishek,
We need the below information to investigate this issue.
1. gluster --version
2. Please run glusterd in gdb, so that we can capture the backtrace. I see
some rpc errors in the log, but a backtrace will be more helpful.
To run glusterd in gdb, you need to start glusterd from within gdb (i.e. gdb
glusterd,
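A sketch of what that looks like in practice. The gdb command file is generic gdb; glusterd's -N (run in foreground) flag is an assumption about the installed build, so check glusterd --help first. The invocation is printed rather than executed here, since it only makes sense on the affected node.

```shell
# Write a gdb command file that runs the process and, once it stops or
# crashes, dumps every thread's backtrace.
cat > /tmp/glusterd-gdb.cmds <<'EOF'
run
thread apply all bt
quit
EOF
# On the affected node, the invocation would be (not executed here):
printf '%s\n' 'gdb -x /tmp/glusterd-gdb.cmds --args glusterd -N'
```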
is nightly job green sooner than later :-) .
>
> On Thu, 4 Oct 2018 at 15:07, Deepshikha Khandelwal
> wrote:
>
>> On Thu, Oct 4, 2018 at 6:10 AM Sanju Rakonde wrote:
>> >
>> >
>> >
>> > On Wed, Oct 3, 2018 at 3:26 PM Deepshikha Khandelwal <
On Wed, Oct 3, 2018 at 3:26 PM Deepshikha Khandelwal
wrote:
> Hello folks,
>
> Distributed-regression job[1] is now a part of Gluster's
> nightly-master build pipeline. The following are the issues we have
> resolved since we started working on this:
>
> 1) Collecting gluster logs from servers.
>
On Wed, Sep 26, 2018 at 7:53 PM Shyam Ranganathan
wrote:
> Hi,
>
> Updates on the release and shout out for help is as follows,
>
> RC0 Release packages for testing are available see the thread at [1]
>
> These are the following activities that we need to complete for calling
> the release as GA
Hi Abhishek,
Can you please share the output of "t a a bt" (gdb's "thread apply all bt") with us?
Thanks,
Sanju
On Fri, Sep 21, 2018 at 2:55 PM, ABHISHEK PALIWAL
wrote:
>
> We have seen a SIGSEGV crash on glusterfs process on kernel restart at
> start up.
>
> (gdb) bt
> #0 0x3fffad4463b0 in _IO_unbuffer_all () at geno
Hi Shyam,
I need to work on this, but haven't been able to spend much time on it so far.
I will try to spend as much time as I can and get these fixed.
Mohit is also working on this AFAIK.
Thanks,
Sanju
On Wed, Jul 25, 2018 at 12:27 AM, Shyam Ranganathan
wrote:
> Hi,
>
> Multiplex regression jobs are failing
Hi Nigel,
I have a suggestion here. It would be good if we had an option to request
an extension of the VM duration, which would be automatically
activated after 3 hours of usage of the VM. If somebody is using the VM after 3
hours and they feel like they need it for 2 more hours, they will r
Hi all,
You can use the below link if you are accessing the document via your
gmail account.
https://docs.google.com/document/d/1u8o4-wocrsuPDI8BwuBU6yi_x4xA_pf2qSrFY6WEQpo/edit?usp=sharing
Thanks,
Sanju
On Fri, Dec 29, 2017 at 6:58 AM, Sanju Rakonde wrote:
> Okay Nigel, I'll share
Hi All,
To reduce the overall time taken by every regression job for all the
glusterd tests, we are thinking of optimizing the glusterd tests so that some
of the duplicate tests can be avoided if we group them based on types
of volumes, operations to be performed, tests to be performed on single no