[Gluster-infra] Fwd: My email not delivered to the list

2019-07-17 Thread Ravishankar N

Hello,

I have created https://bugzilla.redhat.com/show_bug.cgi?id=1730962 for 
this.


Regards,
Ravi


 Forwarded Message 
Subject: My email not delivered to the list
Date:   Wed, 17 Jul 2019 08:55:07 +0530
From:   Ravishankar N 
To: gluster-users-ow...@gluster.org



Hi,

I replied to the thread "Graceful gluster server retire/poweroff" on
gluster-users, but I don't see it being delivered to the list. The
original poster did receive my email, though, and his reply to it has made
it to the list. Any idea what the problem could be?


Regards,
Ravi
___
Gluster-infra mailing list
Gluster-infra@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-infra

[Gluster-infra] review.gluster.org is not accessible.

2019-06-13 Thread Ravishankar N

Hi,

I have raised https://bugzilla.redhat.com/show_bug.cgi?id=1720453. The
issue seems to be intermittent: I was not able to access it in either
Firefox or Chrome; now Chrome works but Firefox does not.


Regards,
Ravi


Re: [Gluster-infra] [Gluster-devel] Announcing Softserve- serve yourself a VM

2018-08-11 Thread Ravishankar N



On 02/28/2018 06:56 PM, Deepshikha Khandelwal wrote:


Hi,


We have launched the alpha version of SOFTSERVE[1], which allows
Gluster GitHub organization members to provision virtual machines for
a specified duration. The machines are deleted automatically afterwards.

You no longer need to file a bug to get a VM. It's just a form away,
with a dashboard to monitor the machines.

Once a machine is up, you can access it via SSH and run your
debugging (e.g. test regression runs).



We’ve enabled certain limits for this application:

1. A maximum of 5 VMs at a time across all users; once 5 machines are
   allocated, users have to wait until a slot becomes available.

2. A machine can be requested for a maximum of 4 hours.

3. Access is restricted to Gluster organization members.

These limits may be relaxed in the near future. The service is ready
to use; if you find any problems, feel free to file an issue on the
GitHub repository[2].



Hi,
While https://github.com/gluster/softserve/issues/31 gets some
attention, I've sent a PR [*], based on grepping through the source code,
to allow a 24-hour reservation slot. Please have a look.

Thanks!
Ravi
[*] https://github.com/gluster/softserve/pull/46



[1]https://softserve.gluster.org

[2]https://github.com/gluster/softserve

Thanks,

Deepshikha





Re: [Gluster-infra] [Gluster-users] lists.gluster.org issues this weekend

2017-11-09 Thread Ravishankar N

[Removing users and devel from the recipient list]
Hello,
I sent a response to the users ML today but it hasn't been delivered.
Is there a server problem again, or is something wrong with my email
account?

Thanks,
Ravi

On 09/22/2017 07:10 AM, Ravishankar N wrote:

Hello,
Are our servers still facing the overload issue? My replies to 
gluster-users ML are not getting delivered to the list.

Regards,
Ravi

On 09/19/2017 10:03 PM, Michael Scherer wrote:

On Saturday, 16 September 2017 at 20:48 +0530, Nigel Babu wrote:

Hello folks,

We have discovered that for the last few weeks our mailman server was
used for a spam attack. The attacker made use of the + feature offered
by Gmail and Hotmail: if you send an email to exam...@hotmail.com,
example+...@hotmail.com, or example+...@hotmail.com, it goes to the same
inbox. We were constantly hit with requests to subscribe a few inboxes.
These requests overloaded our mail server so much that it gave up. We
detected this failure because a postmortem email to
gluster-infra@gluster.org bounced. Any emails sent to our mailman server
may have been on hold for the last 24 hours or so; they should be
processed now as your email provider re-attempts delivery.

For the moment, we've banned subscribing with an email address that has
a + in the name. If you are already subscribed to the lists with a + in
your email address, you will continue to be able to use the lists.

We're looking at banning the spam IP addresses from being able to hit
the web interface at all. When we have a working alternative, we will
look at removing the current ban on + in addresses.

So we have an alternative in place: I pushed a blacklist using
mod_security and a few DNS blacklists:
https://github.com/gluster/gluster.org_ansible_configuration/commit/2f4c1b8feeae16e1d0b7d6073822a6786ed21dde
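The plus-addressing abuse described above could also be neutralized at
subscription time by normalizing addresses before any duplicate or rate
checks. A minimal sketch of the idea (a hypothetical helper, not
Mailman's actual code):

```python
def normalize(address: str) -> str:
    """Collapse Gmail/Hotmail-style plus addressing:
    user+anything@host and user@host reach the same mailbox."""
    local, _, domain = address.partition("@")
    base = local.split("+", 1)[0]  # drop any +tag suffix from the local part
    return f"{base}@{domain}".lower()

# Both spam variants collapse to the same canonical address:
print(normalize("example+abc@hotmail.com"))  # example@hotmail.com
print(normalize("example+xyz@hotmail.com"))  # example@hotmail.com
```

With this, repeated subscription requests for example+1@, example+2@,
etc. would all be treated as the same subscriber, without banning + in
addresses outright.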





Apologies for the outage, and a big shout out to Michael for taking
time out of his weekend to debug and fix the issue.

Well, you can thank the airport in Prague for being less interesting
than a spammer attacking us.





Re: [Gluster-infra] NetBSD machine required to debug a core

2016-05-30 Thread Ravishankar N



On 05/13/2016 11:29 AM, Raghavendra Talur wrote:



On Thu, May 12, 2016 at 7:48 PM, Kaushal M <kshlms...@gmail.com> wrote:


On Thu, May 12, 2016 at 6:02 PM, Raghavendra Talur
<rta...@redhat.com> wrote:
> slave26.cloud.gluster.org is taken down for you.
> please update here when done.

Isn't this a CentOS machine?


I blame it on lack of caffeine and a bad headache :-/
I have brought back slave26.cloud.gluster.org and taken down
nbslave7j.cloud.gluster.org for Ravi.


Thanks, Raghavendra. I am done with the machine; please take it back up.
Regards,
Ravi




>
> Thanks,
> Raghavendra Talur
>
>
> On Wed, May 11, 2016 at 2:49 PM, Ravishankar N <ravishan...@redhat.com>
> wrote:
>>
>> Hello,
>>
>> I would need a NetBSD machine to debug a crash
>>

https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/16749/consoleFull
>>
>> The test that is failing is a .t that I have written as a part
of the
>> patch against which the regression was triggered.
>>
>> Thanks,
>>
>> Ravi
>>





Re: [Gluster-infra] NetBSD machine required to debug a core

2016-05-12 Thread Ravishankar N

I'll wait for confirmation before I proceed with using the machine.

On 05/12/2016 07:21 PM, Michael Scherer wrote:

On Thursday, 12 May 2016 at 18:02 +0530, Raghavendra Talur wrote:

slave26.cloud.gluster.org is taken down for you.
please update here when done.

Isn't slave26 a Linux builder?


Thanks,
Raghavendra Talur

On Wed, May 11, 2016 at 2:49 PM, Ravishankar N <ravishan...@redhat.com>
wrote:


Hello,

I would need a NetBSD machine to debug a crash
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/16749/consoleFull

The test that is failing is a .t that I have written as a part of the
patch against which the regression was triggered.

Thanks,

Ravi





Re: [Gluster-infra] [ovirt-users] [Attention needed] GlusterFS repository down - affects CI / Installations

2016-04-27 Thread Ravishankar N

@gluster infra  - FYI.

On 04/27/2016 02:20 PM, Nadav Goldin wrote:

Hi,
The GlusterFS repository became unavailable this morning; as a result,
all Jenkins jobs that use the repository will fail. The common error
is:


http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/noarch/repodata/repomd.xml:
[Errno 14] HTTP Error 403 - Forbidden


Also, installations of oVirt will fail.

We are working on a solution and will update asap.

Nadav.




[Gluster-infra] Jenkins regression for release-3.7 messed up?

2016-03-05 Thread Ravishankar N

'brick_up_status' is used by the following .t files in release-3.7:

[root@ravi1 glusterfs]# git grep -w brick_up_status

tests/bugs/bitrot/bug-1288490.t:EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" brick_up_status $V0 $H0 $B0/brick0
tests/bugs/bitrot/bug-1288490.t:EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" brick_up_status $V0 $H0 $B0/brick1
tests/bugs/glusterd/bug-1225716-brick-online-validation-remove-brick.t:EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" brick_up_status $V0 $H0 $B0/${V0}1
tests/bugs/glusterd/bug-857330/normal.t:EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" brick_up_status $V0 $H0 $B0/${V0}3
tests/bugs/glusterd/bug-857330/xml.t:EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" brick_up_status $V0 $H0 $B0/${V0}3


There seems to be a bug in this function. (It is another matter that the
function is different in master, but let us ignore that for now.) So all
these tests should fail on release-3.7, and they do fail on my
machine. But for some reason, they succeed on Jenkins. Why is that? They
are not in bad_tests on 3.7 either.
This fixes the function:
diff --git a/tests/volume.rc b/tests/volume.rc
index 9bd9eca..6040c5f 100644
--- a/tests/volume.rc
+++ b/tests/volume.rc
@@ -24,7 +24,7 @@ function brick_up_status {
 local host=$2
 local brick=$3
 brick_pid=$(get_brick_pid $vol $host $brick)
-gluster volume status | grep $brick_pid | awk '{print $4}'
+gluster volume status | grep $brick_pid | awk '{print $5}'
 }

and all the tests pass with the fix on my machine. I had sent the fix as
part of a patch[1] and it *fails* on Jenkins. Why?
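To see why the awk column index matters, here is a made-up line in the
shape of a release-3.7 `gluster volume status` row (the formatting is
illustrative; real output adds headers and may differ by version), where
an RDMA Port column pushes the Online flag from field 4 to field 5:

```shell
# Fields: 1=Brick 2=host:path 3=TCP port 4=RDMA port 5=Online 6=Pid
line="Brick host1:/bricks/brick0 49152 0 Y 1234"
echo "$line" | awk '{print $4}'   # old code: prints "0" (the RDMA port)
echo "$line" | awk '{print $5}'   # fixed:    prints "Y" (the Online flag)
```

Under this assumed layout, `$4` picks up the RDMA port instead of the
Online status, which is exactly what the one-character diff corrects.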


Thanks,
Ravi


[1] http://review.gluster.org/#/c/13609/2




Re: [Gluster-infra] [Gluster-devel] Jenkins accounts for all devs.

2016-01-21 Thread Ravishankar N

On 01/14/2016 12:16 PM, Kaushal M wrote:

On Thu, Jan 14, 2016 at 10:33 AM, Raghavendra Talur <rta...@redhat.com> wrote:


On Thu, Jan 14, 2016 at 10:32 AM, Ravishankar N <ravishan...@redhat.com>
wrote:

On 01/08/2016 12:03 PM, Raghavendra Talur wrote:

P.S.: Stop using the "universal" Jenkins account to trigger Jenkins
builds if you are not a maintainer.
If you are a maintainer and don't have your own Jenkins account, get
one soon!


I would request Jenkins accounts for non-maintainers too, at least
for the devs who are actively contributing code (as opposed to random
one-off commits). That way, when a regression failure is *definitely*
not caused by my patch, or is a spurious failure, or is something I need
to take a NetBSD slave offline to debug, I am not blocked on the
maintainer. Since the accounts are tied to individuals, it should be
easy to spot if someone habitually re-triggers regressions without any
initial debugging.


+1

We'd like to give everyone accounts, but the way we're providing
accounts now gives admin access to all, which is not very secure.

This was one of the reasons misc set up freeipa.gluster.org: to provide
controlled accounts for all. But it hasn't been used yet. We would
need to integrate Jenkins and the slaves with FreeIPA, which would
give everyone easy access.


Hi Michael,
Do you think it is possible to have this integration soon, so that all
contributors can re-trigger/initiate builds by themselves?

Regards,
Ravi

-Ravi





Re: [Gluster-infra] [Gluster-devel] NetBSD tests not running to completion.

2016-01-10 Thread Ravishankar N

On 01/08/2016 09:57 AM, Emmanuel Dreyfus wrote:

I am a bit disturbed by the fact that people raise the
"NetBSD regression ruins my life" issue without doing the work of
listing the actual issues encountered.

I already did, earlier: the lack of infrastructure to even find out what
caused the issue in the first place. Taking slaves offline for personal
debugging just won't work; it only adds to the list of pending jobs.
Just see the number of jobs queued up at the time of writing this mail:




Whatever interval we decide to run the regressions at, we would need
either local VMs or perhaps one personal slave VM per developer on
Jenkins to look into failures.


-Ravi

Re: [Gluster-infra] [Gluster-devel] NetBSD tests not running to completion.

2016-01-08 Thread Ravishankar N

On 01/08/2016 03:57 PM, Emmanuel Dreyfus wrote:

On Fri, Jan 08, 2016 at 05:11:22AM -0500, Jeff Darcy wrote:

[08:45:57] ./tests/basic/afr/arbiter-statfs.t ..
[08:43:03] ./tests/basic/afr/arbiter-statfs.t ..
[08:40:06] ./tests/basic/afr/arbiter-statfs.t ..
[08:08:51] ./tests/basic/afr/arbiter-statfs.t ..
[08:06:44] ./tests/basic/afr/arbiter-statfs.t ..


I'm guessing that all of these are test #5.

./tests/basic/afr/arbiter-statfs.t (Wstat: 256 Tests: 5 Failed: 1)
  Failed test:  5
  Non-zero exit status: 1
  Parse errors: Bad plan.  You planned 22 tests but ran 5.


Atin had just pinged me on IRC with one such run:
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/13292/consoleFull
The reason is the same: the test exits early if setting up the loopback
device fails:

+++ vnconfig -l
vnconfig: VNDIOCGET: Bad file descriptor
++ vnd=
++ '[' x = x ']'
++ echo 'no more vnd'
no more vnd
++ return 1

The issue was hit earlier too:
http://comments.gmane.org/gmane.comp.file-systems.gluster.devel/13192
The slave eventually had to be rebooted. I don't want to add the test to
bad_tests if it is test #5 that failed.





[08:00:54] ./tests/basic/afr/self-heal.t ..
[07:59:56] ./tests/basic/afr/entry-self-heal.t ..
[18:05:23] ./tests/basic/quota-anon-fd-nfs.t ..
[18:06:37] ./tests/basic/quota-nfs.t ..
[18:49:32] ./tests/basic/quota-anon-fd-nfs.t ..
[18:51:46] ./tests/basic/quota-nfs.t ..
[14:25:37] ./tests/basic/quota-anon-fd-nfs.t ..
[14:26:44] ./tests/basic/quota-nfs.t ..
[14:45:13] ./tests/basic/tier/record-metadata-heat.t ..

That is 6 tests; they could be disabled or ignored.


So some of us *have* done that work, in a repeatable way.  Note that the
list doesn't include tests which *hang* instead of failing cleanly,
which has recently been causing the entire NetBSD queue to get stuck
until someone manually stops those jobs.  What I find disturbing is the
idea that a feature with no consistently-available owner or identifiable
users can be allowed to slow or block every release unless every
developer devotes extra time to its maintenance.  Even if NetBSD itself
is worth it, I think that's an unhealthy precedent to set for the
project as a whole.

For that, we could start the regression script with:

( sleep 7200 && /sbin/reboot -n ) &

And end it with:

kill %1

Does it seem reasonable? That way nothing can hang for more than 2 hours.
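The watchdog idea above can be sketched end to end as follows, with
`echo` standing in for `/sbin/reboot -n` (so the sketch is safe to run)
and a short timeout for illustration:

```shell
#!/bin/sh
# Arm a watchdog that fires after the timeout (echo stands in for reboot).
( sleep 5 && echo "watchdog fired: would reboot the slave here" ) &
watchdog=$!

sleep 1                        # stand-in for the regression run, which
                               # finishes within the timeout here

kill "$watchdog" 2>/dev/null   # run completed in time: disarm the watchdog
echo "job completed within the timeout"
```

If the regression run hangs past the timeout, the watchdog subshell is
never killed and the reboot fires; if the run completes first, the
`kill` disarms it.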





[Gluster-infra] NetBSD tests failing despite the tests being part of bad test

2015-12-22 Thread Ravishankar N

Hi,

https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12978/consoleFull
is one such example: all the tests that failed are part of is_bad_test(),
and yet the regression fails.

Could the person in charge take a look, please?

Thanks and regards,
Ravi


Re: [Gluster-infra] [Gluster-devel] I've taken nbslave75.cloud.gluster.org offline

2015-12-18 Thread Ravishankar N

I have brought it back online.


On 12/16/2015 12:31 PM, Ravishankar N wrote:

$subject.

tests/basic/afr/self-heal.t is constantly failing on this slave for
http://review.gluster.org/#/c/12894/; I need to debug why.


-Ravi


