On 05/05/2014 04:45 PM, Vijay Bellur wrote:
On 04/30/2014 09:28 AM, Vijay Bellur wrote:
We plan to update gerrit to provide access to sub-maintainers by the
end of this week (i.e. 4th May). If you have any objections, concerns or
feedback on this process, please feel free to provide
On 05/21/2014 11:42 AM, Atin Mukherjee wrote:
On 05/21/2014 10:54 AM, SATHEESARAN wrote:
Guys,
This is the issue pointed out by Pranith with regard to Barrier.
I was reading through it.
But I wanted to bring it to your notice.
-- S
Original Message
Subject: Re
Pranith,
Regression test mentioned in $SUBJECT failed (testcases 14 and 16)
Console log can be found at
http://build.gluster.org/job/rackspace-regression-2GB/227/consoleFull
My initial suspicion is HEAL_TIMEOUT (set to 60 seconds): healing
might not have completed within this time.
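A common way to make such checks robust is to poll until a condition holds rather than rely on a fixed sleep (the test framework's EXPECT_WITHIN works along these lines). A minimal Python sketch, with `predicate` standing in for a hypothetical heal-completion check:

```python
import time

def wait_until(predicate, timeout=60, interval=1):
    """Poll `predicate` until it returns True or `timeout` seconds elapse."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# e.g. wait_until(lambda: pending_heal_count() == 0, timeout=60)
# where pending_heal_count is a hypothetical helper.
```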
On 06/18/2014 10:04 AM, Pranith Kumar Karampuri wrote:
On 06/18/2014 09:39 AM, Atin Mukherjee wrote:
Pranith,
Regression test mentioned in $SUBJECT failed (testcases 14 and 16)
Console log can be found at
http://build.gluster.org/job/rackspace-regression-2GB/227/consoleFull
My initial
On 06/26/2014 01:58 PM, Sachin Pandit wrote:
Hi all,
We have some concerns regarding the snapshot delete force option;
that is the reason we thought of getting advice from everyone out here.
Currently, when we give gluster snapshot delete snapname, it gives a
notification saying that
Hi All,
Patch [1] introduces a new glusterd feature where one can list a
specific volume option, or all of them, with a glusterd command.
Please let me know your thoughts. I will also be creating a feature
page for this in some time.
1. http://review.gluster.org/#/c/8305/
Regards,
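The listing semantics being proposed (one named option, or all of them) can be sketched as follows; this is illustrative only, not the patch's implementation:

```python
def volume_get(options, key="all"):
    """Return all volume options, or just the one requested."""
    if key == "all":
        return dict(options)
    if key not in options:
        raise KeyError(f"volume option '{key}' does not exist")
    return {key: options[key]}
```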
On 07/25/2014 03:59 PM, Atin Mukherjee wrote:
Hi All,
Patch [1] introduces a new glusterd feature where one can list a
specific volume option, or all of them, with a glusterd command.
Please let me know your thoughts. I will also be creating a feature
page for this
IIRC, we were marking Verified as +1 in the case of a known spurious
failure; can't we continue to do the same for the known spurious
failures, just to unblock patches from getting merged till the problems
are resolved?
~Atin
On 08/22/2014 11:44 PM, Vijay Bellur wrote:
On 06/12/2014 09:06 PM,
Hi Devel-list,
The current implementation of gd_commit_op_phase sets op_ret to a non-zero
value if any of the commit operations fails, and the transaction fails.
The cluster comprises 2 nodes.
1. Stop the volume at Node 1
2. Start the volume at Node 1 and while volume was starting up bring
down Node 2.
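The failure aggregation described above can be modelled like this (an illustrative Python sketch, not the actual glusterd C code; the names are hypothetical):

```python
def commit_op_phase(peers, commit_on_peer):
    """Run the commit op on every peer; a single failure makes op_ret
    non-zero, failing the whole transaction."""
    op_ret = 0
    failed = []
    for peer in peers:
        if not commit_on_peer(peer):
            op_ret = -1          # any one failure fails the transaction
            failed.append(peer)
    return op_ret, failed

# In the two-node scenario above, Node 2 going down mid-start means its
# commit fails, so the whole transaction reports failure.
```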
Hello Folks,
While I was testing some volume set options, I found that translator
initialization at the brick level always fails with the following error:
[2014-08-30 05:57:49.429453] W [options.c:898:xl_opt_validate]
10-test-vol-server: option 'listen-port' is deprecated, preferred is
I would appreciate it if the following patches could get some review attention:
- http://review.gluster.org/#/c/8358/
- http://review.gluster.org/#/c/8375/
- http://review.gluster.org/#/c/8380/
- http://review.gluster.org/#/c/8571/
- http://review.gluster.org/#/c/8572/
~Atin
On 09/06/2014 05:55 PM, Pranith Kumar Karampuri wrote:
On 09/05/2014 03:51 PM, Kaushal M wrote:
GlusterD performs the following functions as the management daemon for
GlusterFS:
- Peer membership management
- Maintains consistency of configuration data across nodes
(distributed
On 09/07/2014 06:21 AM, Emmanuel Dreyfus wrote:
Hi
I tried getting tests/basic/pump.t to pass on NetBSD, but after a few
experiments, it seems the brick-replace functionality is just broken.
I ran the steps one by one on a fresh install:
netbsd0# glusterd
netbsd0# $CLI volume create
On 09/07/2014 12:43 PM, Emmanuel Dreyfus wrote:
Atin Mukherjee amukh...@redhat.com wrote:
I suggest you check the glusterd log file at node
078015de-2186-4bd7-a4d1-017e39c16dd3; if you don't find the reason
why glusterd did not release the lock, a workaround will be to restart
A gentle reminder!!
Regards,
Atin
On 09/01/2014 11:07 AM, Atin Mukherjee wrote:
I would appreciate it if the following patches could get some review attention:
- http://review.gluster.org/#/c/8358/
- http://review.gluster.org/#/c/8375/
- http://review.gluster.org/#/c/8380/
- http
Hi Folks,
I have worked on a patch [1] to ensure the glusterd statedump captures some
important in-memory data structures per gluster node. This will
definitely help in root-causing issues.
Following are the list of in-memory data members which are targeted in
this patch:
* Supported max/min
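The statedump text format (a section header followed by key=value lines) can be sketched as below; this is an illustrative writer, not glusterd's actual statedump code, and the section/field names are hypothetical:

```python
import io

def dump_section(out, section, fields):
    """Write one statedump-style section: a [header] followed by
    key=value lines, mirroring the plain-text format statedumps use."""
    out.write(f"[{section}]\n")
    for key, value in fields.items():
        out.write(f"{key}={value}\n")

buf = io.StringIO()
dump_section(buf, "glusterd.priv",
             {"max-op-version": 30700, "min-op-version": 1})
```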
Kiran,
Master branch has the volume get feature. I am not sure which version of
gluster you are using.
git show c080403393987f807b9ca81be140618fa5e994f1
Regards,
Atin
On 10/29/2014 04:27 PM, Kiran Patil wrote:
Is that document up to date since it does not contain the option
Hi Kiran,
This would be available from 3.7 release onwards. We didn't have any
plans to make it for 3.6.
~Atin
On 10/30/2014 10:05 AM, Kiran Patil wrote:
Hello Atin,
When will you push to release-3.6 branch ?
Thanks,
Kiran.
On Wed, Oct 29, 2014 at 4:31 PM, Atin Mukherjee amukh
On 08/24/2014 11:41 PM, Justin Clift wrote:
On 24/08/2014, at 11:05 AM, Vijay Bellur wrote:
snip
On Sat, Aug 23, 2014 at 12:02 PM, Harshavardhana har...@harshavardhana.net
wrote:
On Fri, Aug 22, 2014 at 10:23 PM, Atin Mukherjee amukh...@redhat.com wrote:
IIRC, we were marking
On 10/31/2014 07:08 PM, Justin Clift wrote:
On Fri, 31 Oct 2014 10:17:28 +0530
Atin Mukherjee amukh...@redhat.com wrote:
snip
Justin,
For the last three runs, I've observed the same failure. I think it's
really time to debug this without any further delay. Can you
please share a rackspace
On 11/03/2014 06:15 PM, Justin Clift wrote:
On Sun, 02 Nov 2014 21:41:02 +0530
Atin Mukherjee amukh...@redhat.com wrote:
On 10/31/2014 07:08 PM, Justin Clift wrote:
On Fri, 31 Oct 2014 10:17:28 +0530
Atin Mukherjee amukh...@redhat.com wrote:
snip
Justin,
For last three runs, I've
On 11/08/2014 05:21 AM, Justin Clift wrote:
On Wed, 05 Nov 2014 14:58:06 +0530
Atin Mukherjee amukh...@redhat.com wrote:
snip
Can there be any case where a glusterd instance may go down
unexpectedly without a crash?
[1] http://build.gluster.org/job/rackspace-regression-2GB-triggered
/2319
On 11/12/2014 06:13 PM, Pranith Kumar Karampuri wrote:
On 11/11/2014 05:25 PM, Kiran Patil wrote:
I have installed gluster v3.6.1
from
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/
The /tests/bugs/bug-1112559.t testcase passed in all 3 runs and rest
of the
+1, an excellent idea; this will definitely give an additional comfort zone
for learning glusterfs faster.
On 11/12/2014 05:47 PM, Krishnan Parthasarathi wrote:
All,
We have come across behaviours and features of GlusterFS that are left
unexplained for various reasons. Thanks to Justin Clift
Folks,
While I was looking into the glusterd backlog, I could see there are a few BZs
which were marked needinfo on the reporter, as the information was
not sufficient for further analysis, and the reporter hasn't
gotten back with the required details.
Ideally we should close these bugs saying
Folks,
netbsd7 smoke is failing constantly with the following error:
Fetching upstream changes from origin
ERROR: Problem fetching from origin / origin - could be unavailable.
Continuing anyway
hudson.plugins.git.GitException: Command git fetch -t origin
refs/changes/08/9108/6 returned status
On 11/18/2014 10:35 PM, Pranith Kumar Karampuri wrote:
On 11/12/2014 04:52 PM, Kiran Patil wrote:
I have created zpools named d and mnt, and they appear in the filesystem
as follows.
d on /d type zfs (rw,xattr)
mnt on /mnt type zfs (rw,xattr)
Debug enabled output of quota.t testcase is at
On 11/19/2014 10:32 AM, Pranith Kumar Karampuri wrote:
On 11/19/2014 10:30 AM, Atin Mukherjee wrote:
On 11/18/2014 10:35 PM, Pranith Kumar Karampuri wrote:
On 11/12/2014 04:52 PM, Kiran Patil wrote:
I have created zpools named d and mnt, and they appear in the filesystem
as follows.
d
Rajesh/Avra,
For one of the regression runs [1], a few uss testcases failed with a core file. Can
you please take a look?
[1]
http://build.gluster.org/job/rackspace-regression-2GB-triggered/2821/consoleFull
~Atin
___
Gluster-devel mailing list
On 12/01/2014 12:31 PM, Vijay Bellur wrote:
Hi All,
Shall we set a goal for ourselves to be Coverity Scan clean by GlusterFS
3.7?
I think fixing problems reported in the incremental reports here would
be a good way of keeping the number of static analysis defects in
control. It would
On 12/02/2014 08:37 PM, Jeff Darcy wrote:
I've been thinking and experimenting around some of the things we need
in this area to support 4.0 features, especially data classification
http://www.gluster.org/community/documentation/index.php/Features/data-classification
Before I suggest
On 12/03/2014 07:36 PM, Justin Clift wrote:
On Tue, 02 Dec 2014 10:05:36 +0530
Atin Mukherjee amukh...@redhat.com wrote:
snip
It's on my radar; I am in the process of analysing it. The last patch set
was on the clean-up part of the test cases; I felt the changes could
have solved it, but I am
Justin,
I could only get a chance to look into this yesterday, as last week I was
in RHEL 7 training. I have some interesting facts to share, which are as
follows:
When I execute the script which sets options for two different volumes
(one in the background) in a loop, after a few iterations the memory
On 12/09/2014 06:24 PM, Jeff Darcy wrote:
I would like to propose refactoring of the code managing
various daemons in glusterd. Unlike other high(er) level proposals
about feature design, this one is at the implementation
level. Please go through the details of the proposal below
and share
Hi Justin,
The current regression.sh code doesn't capture /var/log/messages, which
means we cannot tell from the log archive whether the system ran out of
memory. I strongly believe we should capture this file as part of
archiving the logs.
Would you be able to do this
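The archiving step being asked for amounts to something like this (a Python sketch of the idea only; the real change would be a couple of lines in regression.sh):

```python
import os
import tarfile

def archive_logs(paths, archive_path):
    """Bundle whichever of the given log files exist into a .tar.gz;
    missing files (e.g. /var/log/messages on some distros) are skipped."""
    with tarfile.open(archive_path, "w:gz") as tar:
        for path in paths:
            if os.path.exists(path):
                tar.add(path)
    return archive_path
```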
will be
addressed in a different patch.
~Atin
On 12/09/2014 09:46 AM, Atin Mukherjee wrote:
Justin,
I could only get a chance to look into this yesterday, as last week I was
in RHEL 7 training. I have some interesting facts to share, which are as
follows:
When I execute the script which sets options for two
Hi Pranith/Ravi,
I could see a spurious failure in [1] coming from data-self-heal.t. Can
one of you please look at it?
[1]
http://build.gluster.org/job/rackspace-regression-2GB-triggered/3121/consoleFull
[05:59:48] ./tests/basic/afr/data-self-heal.t
On 12/17/2014 01:01 PM, Lalatendu Mohanty wrote:
On 12/17/2014 12:56 PM, Krishnan Parthasarathi wrote:
I was looking into a Coverity issue (CID 1228603) in GlusterFS.
I sent a patch[1] before I fully understood why this was an issue.
After searching around in the internet for explanations, I
Can you please take in http://review.gluster.org/#/c/9328/ for 3.6.2?
~Atin
On 12/19/2014 02:05 PM, Raghavendra Bhat wrote:
Hi,
glusterfs-3.6.2beta1 has been released. I am planning to make 3.6.2
before the end of this year. If there are patches that have to go in for
3.6.2, please send
On 12/25/2014 12:09 PM, Vijay Bellur wrote:
A single bug reported by covscan this time.
KP, Kaushal - can you please check this out?
http://review.gluster.org/#/c/9338/ should solve it.
~Atin
Thanks,
Vijay
Forwarded Message
Subject: New Defects reported by
Hi Vijay,
It seems like lots of regression test cases are failing due to auxiliary
mount failure in the CLI, and that's because of left-over auxiliary mount points.
[2014-12-30 10:21:15.875965] E [fuse-bridge.c:5338:init] 0-fuse:
Mountpoint /var/run/gluster/patchy/ seems to have a stale mount, run
Hi Emmanuel,
To debug the volume-status.t failure in NetBSD for the regression run
[1], I was trying to download the log archive but was not able to
connect to the machine.
Can you please help me get the log file? Is the server not
responsive?
~Atin
Missed the link; here it goes:
[1]
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/662/consoleFull
~Atin
On 01/16/2015 10:38 AM, Atin Mukherjee wrote:
Hi Emmanuel,
To debug the volume-status.t failure in netbsd for the regression run
[1] I was trying to download
On 01/14/2015 04:44 PM, Vijay Bellur wrote:
On 01/14/2015 04:34 PM, Vijay Bellur wrote:
Hi All,
We have been normally following a reactive process to backport patches
to release branches. The backport guidelines page [1] describes it in
more detail. Given the rate at which our master
I am also in the process of migrating (just started) the existing rebalance
daemon code into the new framework. Bricks will be next in the
queue. As pointed out by KP, please allow the new daemons to follow this
framework, and believe me, managing these daemons will be much easier
than we had
--
Task : Rebalance
ID : 6d4c6c4e-16da-48c9-9019-dccb7d2cfd66
Status : completed
-- Original Message --
From: Atin Mukherjee amukh...@redhat.com
To: Pranith Kumar Karampuri pkara...@redhat.com; Justin Clift
On 01/27/2015 07:33 AM, Pranith Kumar Karampuri wrote:
On 01/26/2015 09:41 PM, Justin Clift wrote:
On 26 Jan 2015, at 14:50, David F. Robinson
david.robin...@corvidtec.com wrote:
I have a server with v3.6.2 from which I cannot mount using NFS. The
FUSE mount works, however, I cannot get
Hi Raghavendra,
Can you please consider the following patches for the next 3.6 release? All
of them have passed regression now:
- http://review.gluster.org/#/c/9393/
- http://review.gluster.org/#/c/9328/
~Atin
January 2015 04:38 PM, Atin Mukherjee wrote:
Hi Vijay,
It seems like lots of regression test cases are failing due to auxiliary
mount failure in the CLI, and that's because of left-over auxiliary mount
points.
[2014-12-30 10:21:15.875965] E [fuse-bridge.c:5338:init] 0-fuse:
Mountpoint /var/run/gluster
I've been personally following the same model while working on upstream
bugs. IMO, this needs no additional effort, just discipline.
Because bug states are not transitioned, our upstream BZ statistics
don't show the exact state we are in now. I am pretty sure the
numbers will look
Can I get some review attention for http://review.gluster.org/#/c/9462/ ?
~Atin
On 02/27/2015 07:10 AM, Cary Tsai wrote:
Assume I have 4 bricks in a replica (count=2) volume:
Volume Name: data-vol
Number of Bricks: 2 x 2 = 4
Brick1: 192.168.1.101:/brick
Brick2: 192.168.1.102:/brick
Brick3: 192.168.1.103:/brick
Brick4: 192.168.1.104:/brick
Something happens
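For context, in a 2 x 2 distributed-replicate layout, consecutive bricks in creation order form the replica pairs (Brick1/Brick2 mirror each other, as do Brick3/Brick4). A small illustrative sketch of that grouping:

```python
def replica_sets(bricks, replica_count):
    """Group bricks, in the order they were listed at volume-create time,
    into replica sets of `replica_count` bricks each."""
    return [bricks[i:i + replica_count]
            for i in range(0, len(bricks), replica_count)]

bricks = ["192.168.1.101:/brick", "192.168.1.102:/brick",
          "192.168.1.103:/brick", "192.168.1.104:/brick"]
# replica_sets(bricks, 2) yields two mirrored pairs.
```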
I've got responses from a couple of folks; would also love to hear from others.
~Atin
On 03/31/2015 11:49 AM, Atin Mukherjee wrote:
Folks,
There are some projects which use compiler/glibc features to strengthen
their security posture. Popular distros suggest hardening daemons with
RELRO/PIE flags
This question was originally asked by me in the review. I was thinking:
if replace-brick is always 'commit force' in nature, then why not
just drop the parameters? The admin needs to be cautious enough about its
usage in any case.
~Atin
On 04/03/2015 11:17 AM, Gaurav Garg wrote:
Hi all,
On 04/02/2015 06:43 PM, Justin Clift wrote:
On 2 Apr 2015, at 14:08, Niels de Vos nde...@redhat.com wrote:
On Thu, Apr 02, 2015 at 01:21:57PM +0100, Justin Clift wrote:
On 31 Mar 2015, at 08:15, Niels de Vos nde...@redhat.com wrote:
On Tue, Mar 31, 2015 at 12:20:19PM +0530, Kaushal M wrote:
How about
Gluster : Redefine storage
~Atin
On 04/01/2015 05:44 PM, Tom Callaway wrote:
Hello Gluster Ant People!
Right now, if you go to gluster.org, you see our current slogan in giant
text:
Write once, read everywhere
However, no one seems to be super-excited about that slogan.
On 04/14/2015 02:19 PM, Mohammed Rafi K C wrote:
On 04/14/2015 02:07 PM, Niels de Vos wrote:
On Tue, Apr 14, 2015 at 08:27:01AM +, Emmanuel Dreyfus wrote:
Hello
This morning NetBSD regression is busted by two new problems:
1) tests/geo-rep are all broken because of bad CFLAGS (fix
Hi all,
This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
[1] has core file generated by tests/geo-rep/georep-tarssh-hybrid.t. Is
it something alarming or http://review.gluster.org/#/c/10340/ would take
care of it?
[1]
http://build.gluster.org/job/rackspace-regression-2GB-triggered/7345/consoleFull
--
~Atin
On 04/20/2015 08:35 AM, Vijay Bellur wrote:
On 04/20/2015 04:25 AM, Justin Clift wrote:
The good news:
1) Gerrit is kind of :/ updated. The very very latest versions
(released friday) don't work properly for us. So, we're running
on the slightly older v2.9.4 release of Gerrit.
On 04/21/2015 05:47 PM, Avra Sengupta wrote:
Hi,
Today, whenever a gluster command fails, the return code is always 1,
irrespective of the failure. I have sent the following patch, which takes
a first step towards bringing some order to this chaos.
Could you explain a bit more about the current
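One way such ordering could look is a mapping from failure classes to distinct exit codes; the codes and names below are hypothetical illustrations, not what the patch actually defines:

```python
from enum import IntEnum

class CliExit(IntEnum):
    SUCCESS = 0
    GENERIC_FAILURE = 1        # today's catch-all return code
    INVALID_ARGUMENT = 2
    VOLUME_NOT_FOUND = 3
    ANOTHER_TRANSACTION = 4    # e.g. "Another transaction is in progress"

def exit_code_for(error):
    """Map an internal error tag to a distinct CLI exit code."""
    mapping = {
        "einval": CliExit.INVALID_ARGUMENT,
        "no-volume": CliExit.VOLUME_NOT_FOUND,
        "locked": CliExit.ANOTHER_TRANSACTION,
    }
    return mapping.get(error, CliExit.GENERIC_FAILURE)
```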
On 04/24/2015 08:36 PM, Anusha Rao wrote:
I had a few doubts regarding the conversion from gf_log to gf_msg:
1) Is it necessary to convert all gf_log messages to gf_msg?
IMO, it's not, unless the message has real meaning for the admin. So
you may skip those log messages and choose not to
+ gluster-devel
On 04/23/2015 09:20 AM, Atin Mukherjee wrote:
On 04/23/2015 01:46 AM, Dan Lambright wrote:
KP, Atin,
Currently in tiered volumes, we attach a tier and then start the rebalance
daemon in separate steps.
The problem here is if someone does readdir on the mount point
On 04/23/2015 10:22 AM, Emmanuel Dreyfus wrote:
Hi
After retriggering regression again and again and hitting the same
spurious errors unrelated to the changes I submitted, I took the
path of overriding regression results for a few patchsets. I made it
clear it was a manual override from me
On 04/20/2015 06:44 PM, Jeff Darcy wrote:
The same problems that affect mainline are affecting release-3.7 too. We
need to get over this soon.
I think it's time to start skipping (or even deleting) some tests. For
example, volume-snapshot-clone.t alone is responsible for a huge number
of
As we know, we have a patch from Manu which re-triggers a given failed
test. The idea was to reduce the burden of re-triggering the regression,
but I've noticed it failing on the 2nd attempt as well, and I've
seen this happen multiple times for patch [1]. I am not sure whether
I am damn
On 04/28/2015 06:08 PM, Vijay Bellur wrote:
On 04/28/2015 05:48 PM, Kaushal M wrote:
AFAIU we've not officially supported 32-bit architectures for some time
(possibly forever) as a community. But we had users running it
anyway.
3.7 as it is currently, cannot run on 32-bit platforms. I've
On 04/28/2015 10:40 PM, Niels de Vos wrote:
On Tue, Apr 28, 2015 at 05:56:56PM +0100, Justin Clift wrote:
This sounds like it might be useful for us:
https://gerrit-documentation.storage.googleapis.com/Documentation/2.9.4/config-labels.html#label_copyAllScoresOnTrivialRebase
I've seen a few cases where we are using mainline bugs when submitting
patches to the 3.7 release. Process-wise this looks incorrect to me. IIRC,
smoke used to complain about it earlier, but I am not seeing that
either these days. Any recent changes on this?
IMO, we should clone all the mainline
The following two patches need to go in as they are critical IMO:
http://review.gluster.org/#/c/10271
http://review.gluster.org/#/c/10304
Also following are non-critical but good to have :
http://review.gluster.org/#/c/10229/
http://review.gluster.org/#/c/10272/
Can the maintainers please have
On 04/28/2015 01:24 PM, Emmanuel Dreyfus wrote:
On Tue, Apr 28, 2015 at 12:15:11PM +0530, Atin Mukherjee wrote:
I see the netbsd regression doesn't execute peer probe from any other tests
apart from mgmt_v3-locks.t; if it had, those would have also failed. So
the conclusion is peer probe doesn't
On 05/02/2015 08:54 AM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
Seems like glusterd failure from the looks of it: +glusterd folks.
Running tests in file ./tests/basic/cdc.t
volume delete: patchy: failed: Another transaction is in progress for
patchy.
On 05/02/2015 09:08 AM, Atin Mukherjee wrote:
On 05/02/2015 08:54 AM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
Seems like glusterd failure from the looks of it: +glusterd folks.
Running tests in file ./tests/basic/cdc.t
volume delete: patchy: failed
On 05/02/2015 11:59 AM, Atin Mukherjee wrote:
On 05/02/2015 09:08 AM, Atin Mukherjee wrote:
On 05/02/2015 08:54 AM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
Seems like glusterd failure from the looks of it: +glusterd folks.
Running tests in file
On 05/03/2015 11:32 PM, Vijay Bellur wrote:
On 05/03/2015 10:25 PM, Atin Mukherjee wrote:
On 05/03/2015 05:25 PM, Pranith Kumar Karampuri wrote:
Execute the following command on replicate volume:
root@pranithk-laptop - ~
17:23:02 :( ⚡ gluster v set r2 cluster.client-log-level 0
On 05/03/2015 11:26 PM, Atin Mukherjee wrote:
On 05/02/2015 08:52 PM, Emmanuel Dreyfus wrote:
Atin Mukherjee amukh...@redhat.com wrote:
Although I couldn't reproduce the cdc.t failure, georep-setup.t failed
consistently, and the glusterd backtrace showed that it hangs on gverify.sh
On 04/30/2015 01:04 AM, Niels de Vos wrote:
On Wed, Apr 29, 2015 at 09:14:37PM +0200, Niels de Vos wrote:
On Wed, Apr 29, 2015 at 12:30:20PM -0400, Kaleb S. KEITHLEY wrote:
Any reason you don't want to run it as: `glusterd --xlator-option
*.upgrade=on -N > /dev/null 2>&1` ?
I think it would
Hi all,
This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
On 05/06/2015 02:40 PM, Niels de Vos wrote:
Hi,
many patches need to get backported from the master branch to
release-3.7 or older stable releases. It helps reviewers and maintainers
enormously if the backports are in some kind of standardized format.
The wiki contains the workflow for
On 05/08/2015 09:58 AM, Emmanuel Dreyfus wrote:
Pranith Kumar Karampuri pkara...@redhat.com wrote:
I just sent a mail about the known issues we found in ec :-). We have a
fix for one, submitted by Xavi, but the other one will take a bit of time.
These bugs were there in 3.6.0 as well, so they
On 05/07/2015 03:00 PM, Krishnan Parthasarathi wrote:
Atin would be doing this, since he is looking into it.
HTH,
KP
- Original Message -
On 05/07/2015 02:53 PM, Krishnan Parthasarathi wrote:
- Original Message -
On 05/07/2015 02:41 PM, Krishnan Parthasarathi wrote:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8782/consoleFull
Failed test case : tests/bugs/replicate/bug-976800.t
I've added it in the etherpad as well.
--
~Atin
On 05/02/2015 08:52 PM, Emmanuel Dreyfus wrote:
Atin Mukherjee amukh...@redhat.com wrote:
Although I couldn't reproduce the cdc.t failure, georep-setup.t failed
consistently, and the glusterd backtrace showed that it hangs on gverify.sh
That suggests the script itself blocks forever. Running
On 05/09/2015 01:36 PM, Pranith Kumar Karampuri wrote:
On 05/09/2015 11:08 AM, Krishnan Parthasarathi wrote:
Ah! Now I understand the confusion. I never said the maintainer should fix
all the bugs in tests. I am only saying that they maintain tests, just
as we maintain code. Whether you
On 05/09/2015 04:23 PM, Kotresh Hiremath Ravishankar wrote:
Hi,
There are a few regression failures where changelog translator init fails
and a core is generated, as explained below.
1. Why did changelog translator init fail?
In snapshot test cases, virtual multiple peers
On 05/07/2015 09:26 AM, Atin Mukherjee wrote:
On 05/07/2015 01:24 AM, Jeff Darcy wrote:
* tests/basic/volume-snapshot-clone.t
* http://review.gluster.org/#/c/10053/
* Came back on April 9
* http://build.gluster.org/job/rackspace-regression-2GB-triggered/6658/
Rafi - does
On 05/09/2015 01:25 AM, Pranith Kumar Karampuri wrote:
On 05/08/2015 09:14 AM, Krishnan Parthasarathi wrote:
- Original Message -
hi,
I think we fixed quite a few heavy hitters in the past week and
reasonable number of regression runs are passing which is a good sign.
Gaurav,
Can you quickly check [1]
[1]
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8881/consoleFull
--
~Atin
On 05/09/2015 11:42 AM, Atin Mukherjee wrote:
Gaurav,
Can you quickly check [1]
[1]
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8881/consoleFull
http://review.gluster.org/10702 should fix all of these spurious
failures coming from bitrot.
Rafi,
You would need
One more from quota
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8901/consoleFull
~Atin
On 05/08/2015 03:47 PM, Atin Mukherjee wrote:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8782/consoleFull
Failed test case : tests/bugs/replicate/bug-976800.t
On 05/15/2015 10:16 PM, Niels de Vos wrote:
On Fri, May 15, 2015 at 03:13:52PM +, Emmanuel Dreyfus wrote:
On Fri, May 15, 2015 at 05:07:38PM +0200, Niels de Vos wrote:
nfs.disable should be set for all volumes on all your Gluster servers.
/var/lib/glusterd/nfs/nfs-server.vol has option
On 05/15/2015 07:30 PM, Avra Sengupta wrote:
Hi,
A shared storage meta-volume is currently being used by
snapshot-scheduler, geo-replication, and nfs-ganesha. In order to
simplify the creation and set-up of the same, we are introducing a
global volume set
Here is the issue:
Locking on a volume fails with the following error:
[2015-05-18 09:47:56.038463] E
[glusterd-syncop.c:562:_gd_syncop_mgmt_lock_cbk] 0-management: Could not
find peer with ID 70e65fb9-cc9d-16ba-a4f4-5fb90100
[2015-05-18 09:47:56.038527] E
On 05/18/2015 02:15 PM, Emmanuel Dreyfus wrote:
On Mon, May 18, 2015 at 01:48:52AM -0400, Krishnan Parthasarathi wrote:
I am not sure why volume-status isn't working.
My understanding is that glusterd considers a lock to be held by the
NFS component, while it is not started.
No, that's not
On 04/16/2015 08:38 AM, Justin Clift wrote:
On 16 Apr 2015, at 03:56, Jeff Darcy jda...@redhat.com wrote:
Noticing several of the recent regression tests are being marked as SUCCESS
in Jenkins (then Gerrit), when they're clearly failing.
eg:
On 04/17/2015 04:51 PM, Raghavendra Talur wrote:
On Friday 17 April 2015 03:53 PM, Poornima Gurusiddaiah wrote:
Hi,
There are two concerns in the usage of libgfapi which have been
present from day one, but now, with new users of libgfapi, it is
a necessity to fix these:
1. When libgfapi is
On 04/13/2015 12:22 PM, Mohammed Rafi K C wrote:
On 04/11/2015 12:33 PM, Atin Mukherjee wrote:
+Rafi
707ef16edcf4b14f46bb515b3464fa4368ce9b7c has caused it.
Manu,
you could alternatively remove optval declaration to make smoke pass.
We cannot remove optval, because we are using
On 04/15/2015 02:39 PM, Kaushal M wrote:
On Wed, Apr 15, 2015 at 1:20 PM, Emmanuel Dreyfus m...@netbsd.org wrote:
On Wed, Apr 15, 2015 at 12:58:32PM +0530, Kaushal M wrote:
I'm looking at this currently, but I'm having a slightly hard time
getting Gluster running on NetBSD. I have a
On 04/14/2015 05:07 PM, Niels de Vos wrote:
Hi all,
This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(
On 04/07/2015 03:51 PM, Vijay Bellur wrote:
Hi All,
I am planning to branch release-3.7 by the end of this week. Here is a
list of tasks that we would need to accomplish by then:
1. Review and merge as many fixes as possible for Coverity-found issues. [1]
2. Review and merge as many logging