Comments inline
- Original Message -
> From: "Benjamin Turner"
> To: "Susant Palai"
> Cc: "Vijay Bellur" , "Gluster Devel"
>
> Sent: Monday, May 4, 2015 8:58:13 PM
> Subject: Re: [Gluster-devel] Rebalance improvement design
>
> I see:
>
> #define GF_DECIDE_DEFRAG_THROTTLE_COUNT(throt
Emmanuel Dreyfus wrote:
> > I sent http://review.gluster.org/10540 to address it completely. Not
> > sure if it works on netBSD. Emmanuel help!!
>
> I launched test runs in a loop on nbslave70. More later.
Failed on first pass:
Test Summary Report
---
./tests/basic/ec/ec-3-1.t
On 05/05/2015 10:48 AM, Avra Sengupta wrote:
On 05/05/2015 10:43 AM, Pranith Kumar Karampuri wrote:
On 05/05/2015 10:32 AM, Avra Sengupta wrote:
Hi,
As already discussed, if you encounter this or any other snapshot
tests, it would be great to provide the regression run instance so
that we can have a look at the logs if there are any. Also I tried
running the test in a loop as you suggested
On 05/05/2015 10:31 AM, Kotresh Hiremath Ravishankar wrote:
Geo-rep runs /usr/local/libexec/glusterfs/gverify.sh to compare
the gluster version between the master and slave volume. It runs the
following command
gluster --version | head -1 | cut -f2 -d " "
locally on the master and over ssh on the slave.
You can
Hi,
As already discussed, if you encounter this or any other snapshot tests,
it would be great to provide the regression run instance so that we can
have a look at the logs if there are any. Also I tried running the test
in a loop as you suggested. After an hour and a half I stopped it so
tha
Geo-rep runs /usr/local/libexec/glusterfs/gverify.sh to compare
the gluster version between the master and slave volume. It runs the
following command
gluster --version | head -1 | cut -f2 -d " "
locally on the master and over ssh on the slave.
If, for some reason, the version returned is an empty string, it could
happe
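For anyone chasing this, here is a minimal sketch of that kind of check, not the
actual gverify.sh; the slave-host argument and the PATH explanation for the
empty result are assumptions:

  #!/bin/sh
  # Illustrative only, not the real gverify.sh; $1 stands in for the slave host.
  SLAVE_HOST=${1:?usage: $0 slave-host}

  master_ver=$(gluster --version | head -1 | cut -f2 -d " ")
  # Over a non-interactive ssh session the gluster binary may not be in
  # PATH, in which case this comes back empty and the comparison below
  # reports a spurious version mismatch.
  slave_ver=$(ssh "$SLAVE_HOST" 'gluster --version | head -1 | cut -f2 -d " "')

  if [ "$master_ver" != "$slave_ver" ]; then
      echo "Gluster version mismatch between master and slave." >&2
      exit 1
  fi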
Pranith Kumar Karampuri wrote:
> I sent http://review.gluster.org/10540 to address it completely. Not
> sure if it works on netBSD. Emmanuel help!!
I launched test runs in a loop on nbslave70. More later.
You can attach root's screen(1) on the machine if you want to check what
is going on befo
hi Avra/Rajesh,
Any update on this test?
* tests/basic/volume-snapshot-clone.t
* http://review.gluster.org/#/c/10053/
* Came back on April 9
* http://build.gluster.org/job/rackspace-regression-2GB-triggered/6658/
Pranith
Avra,
Is it reproducible on your setup? If not, do you want to move it
to the end of the page in
https://public.pad.fsfe.org/p/gluster-spurious-failures
Pranith
hi,
Doesn't seem like an obvious failure. It does say there is a
version mismatch; I wonder how? Could you look into it?
Gluster version mismatch between master and slave.
Geo-replication session between master and slave21.cloud.gluster.org::slave
does not exist.
[08:27:15] ./tests/geo-r
On 05/05/2015 08:39 AM, Pranith Kumar Karampuri wrote:
Vijai/Sachin,
Did you get a chance to work on this?
http://review.gluster.com/10166 failed just now again in ec because
http://review.gluster.org/10069 was merged yesterday, which can lead to
the same problem. I sent http://review.gluste
hi Vijai/Sachin,
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8268/console
Doesn't seem like an obvious failure. Know anything about it?
Pranith
Vijai/Sachin,
Did you get a chance to work on this?
http://review.gluster.com/10166 failed just now again in ec because
http://review.gluster.org/10069 was merged yesterday, which can lead to
the same problem. I sent http://review.gluster.org/10539 to address the
issue for now. Please look i
It looks like my issue was due to a change in the way name resolution is
now handled in 3.6.3. I'll send in an explanation tomorrow in case
anyone else is having a similar issue.
David
-- Original Message --
From: "David Robinson"
To: "gluster-us...@gluster.org" ; "Gluster
Devel"
On 05/05/2015 08:10 AM, Jeff Darcy wrote:
Jeff's patch failed again with the same problem:
http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/4531/console
Wouldn't have expected anything different. This one looks like a
problem in the Jenkins/Gerrit infrastructure.
Sorry for the
> Jeff's patch failed again with the same problem:
> http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/4531/console
Wouldn't have expected anything different. This one looks like a
problem in the Jenkins/Gerrit infrastructure.
Just saw two more failures in the same place for netbsd regressions. I
am ignoring NetBSD status for the test fixes for now. I am not sure how
this needs to be fixed. Please help!
Pranith
On 05/05/2015 07:17 AM, Pranith Kumar Karampuri wrote:
On 05/05/2015 06:12 AM, Pranith Kumar Karampuri wr
On 05/05/2015 06:12 AM, Pranith Kumar Karampuri wrote:
On 05/05/2015 12:58 AM, Justin Clift wrote:
On 4 May 2015, at 08:06, Vijay Bellur wrote:
Hi All,
There has been a spate of regression test failures (due to broken
tests or race conditions showing up) in the recent past [1] and I am
in
hi,
I fixed it along with the patch on which this test failed
@http://review.gluster.org/10391. Letting everyone know in case they
face the same issue.
Pranith
On 05/05/2015 12:58 AM, Justin Clift wrote:
On 4 May 2015, at 08:06, Vijay Bellur wrote:
Hi All,
There has been a spate of regression test failures (due to broken tests or race
conditions showing up) in the recent past [1] and I am inclined to block 3.7.0
GA along with acceptance of patches
Sachin Pandit wrote:
> In this test case we are writing "content" into the files (fd's).
> But for some reason the data "content" is not being written
> into the files, and because of that quota fails to account for the
> size.
I did some tests and here are the results:
# exec 16>/mnt/nfs/0/0/1/2/xx
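For reference, a rough sketch of reproducing that fd write by hand; the mount
point, directory layout and volume name below are placeholders, not taken from
the test:

  # open the file on fd 16 over the NFS mount, write through it, close it
  exec 16>/mnt/nfs/0/0/1/2/xx
  echo "content" >&16
  exec 16>&-

  # then check whether quota accounted for the new usage
  # ("patchy" is a placeholder volume name, "/0" a placeholder limit dir)
  gluster volume quota patchy list /0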
> Also, one of us should
> go through the last however-many failures and determine the relative
> frequency of failures caused by each test, so we can prioritize.
I started doing this, and very quickly found a runaway "winner" -
data-self-heal.t, which also happens to be the very first test we
run
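In case it helps anyone repeat the exercise, a quick-and-dirty tally over saved
console logs; it assumes the logs were downloaded as local files and that failed
runs print the .t path, which may not match the real output exactly:

  grep -ho 'tests/[a-z0-9/_.-]*\.t' console-*.txt | sort | uniq -c | sort -rn | head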
Hey folks,
Do we know when the ubuntu PPA will be up-to-date? I'll be doing a major
upgrade on my infrastructure and don't want to have to do it more than once.
Thanks,
Josh
> There has been a spate of regression test failures (due to broken tests
> or race conditions showing up) in the recent past [1] and I am inclined
> to block 3.7.0 GA along with acceptance of patches until we fix *all*
> regression test failures. We seem to have reached a point where this
> seems
On 4 May 2015, at 08:06, Vijay Bellur wrote:
> Hi All,
>
> There has been a spate of regression test failures (due to broken tests or
> race conditions showing up) in the recent past [1] and I am inclined to block
> 3.7.0 GA along with acceptance of patches until we fix *all* regression test
>
I agree completely. This is the one that speaks volumes in all of three words.
On 05/04/2015 09:08 AM, Josh Boon wrote:
Gluster: Software {re}defined storage
is one I really like. I wouldn't want to eliminate Gluster completely
as newcomers would then wonder about the binaries, package names etc.
Gluster: Software {re}defined storage
is one I really like. I wouldn't want to eliminate Gluster completely as
newcomers would then wonder about the binaries, package names etc. The tagline
speaks to that we've taken the time to consider some of the common pitfalls of
storage and makes things
I see:
#define GF_DECIDE_DEFRAG_THROTTLE_COUNT(throttle_count, conf) {        \
        throttle_count = MAX ((get_nprocs() - 4), 4);                   \
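So the default throttle count works out to max(nprocs - 4, 4); the same
arithmetic in shell, if someone wants to check what a given box would get
(assumes nproc is available):

  nprocs=$(nproc)
  throttle_count=$(( nprocs - 4 ))
  [ "$throttle_count" -lt 4 ] && throttle_count=4
  echo "default rebalance throttle threads: $throttle_count"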
On May 4, 2015 20:17, Niels de Vos wrote:
>
> On Mon, May 04, 2015 at 07:57:23PM +0530, Raghavendra Talur wrote:
> > On Thursday 30 April 2015 02:28 PM, Kaushal M wrote:
> > >The log-level is set by default to
> > >/var/log/etc-glusterfs-glusterd.vol.log, even when running in
> > >`-N` mode.
On Mon, May 04, 2015 at 07:57:23PM +0530, Raghavendra Talur wrote:
> On Thursday 30 April 2015 02:28 PM, Kaushal M wrote:
> >The log-level is set by default to
> >/var/log/etc-glusterfs-glusterd.vol.log, even when running in
> >`-N` mode. Only when running in `--debug` the log itself is redirected
On Thursday 30 April 2015 02:28 PM, Kaushal M wrote:
The log output goes by default to
/var/log/etc-glusterfs-glusterd.vol.log, even when running in
`-N` mode. Only when running with `--debug` is the log itself redirected
to stdout and stderr.
Redirecting the output as Kaleb suggests is the easi
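For reference, a sketch of the two foreground modes being discussed; the log
path shown is the usual default and may differ per build:

  # either: stay in the foreground, still logging to the file
  glusterd -N
  # or: send the log to stdout/stderr as well
  glusterd --debug

  # in the first case, follow the regular log from another terminal
  tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log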
As per the latest design/implementation changes, since the
cache-invalidation and lease-lock states are maintained separately (in
different inode contexts) and there is no correlation between these two
use-cases, we have decided to move lease-lk changes into a separate
xlator to avoid conflicts
Ben,
On no. of threads:
Sent the throttle patch here: http://review.gluster.org/#/c/10526/ to limit
the number of threads [not merged]. The rebalance process in the current
model spawns 20 threads, and in addition to that there will be a maximum of
16 syncop threads.
Crash:
The crash should be fixed by
Thanks Vijay! I forgot to upgrade the kernel (thinp 6.6 perf bug, gah)
before I created this data set, so it's a bit smaller:
total threads = 16
total files = 7,060,700 (64 kb files, 100 files per dir)
total data = 430.951 GB
88.26% of requested files processed, minimum is 70.00
10101.355737 sec
Kotresh Hiremath Ravishankar wrote:
> I have faced a similar issue of data not being written to file with
> geo-rep setup.
Just on NetBSD, or also on Linux?
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org
On Mon, May 04, 2015 at 11:33:45AM +0530, Vijay Bellur wrote:
> Hi All,
>
> There are about 70+ dependent bugs in New and Assigned states on the 3.7.0
> tracker [1]. I suspect that a good number of them need a state change to
> reflect the current status. If you happen to own a bug or have sent ac
I have faced a similar issue of data not being written to a file with a
geo-rep setup. It is an aux-gfid mount, though. The root cause could
be the same for both. In geo-rep, the file creation phase is successful
but the data sync (rsync) hung, with no data being synced.
I have not RCA'd it yet. I will try to repro
On Mon, May 04, 2015 at 03:24:38AM -0400, Sachin Pandit wrote:
> 83 TEST_IN_LOOP ! fd_write $i "content"
(...)
> In this test case we are writing "content" into the files (fd's).
> But for some reason the data "content" is not being written
(...)
> We suspect this could be because of NFS
On 05/04/2015 01:44 PM, Avra Sengupta wrote:
Hi Pranith,
Could you please provide a regression instance where the snapshot
tests failed. I had a look at
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8148/consoleFull
but, the logs for bug-1162498.t are not present for that
On 05/04/2015 12:31 PM, Sachin Pandit wrote:
- Original Message -
From: "Sachin Pandit"
To: "Pranith Kumar Karampuri"
Cc: "Gluster Devel"
Sent: Monday, May 4, 2015 11:31:01 AM
Subject: Re: [Gluster-devel] regarding spurious failure
tests/bugs/snapshot/bug-1162498.t
- Origin
On Mon, May 04, 2015 at 02:13:19PM +0530, Kaushal M wrote:
> io-threads should be in the client package. It is possible to have
> io-threads on the client by enabling performance.client-io-threads
> option. This is not a common option, but it someone could use it.
In that case, it should probably
io-threads should be in the client package. It is possible to have
io-threads on the client by enabling the performance.client-io-threads
option. This is not a common option, but someone could use it.
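For anyone who wants to try it, the option is set per volume; "myvol" below is
just a placeholder name:

  gluster volume set myvol performance.client-io-threads on
  # and to go back to the default:
  gluster volume reset myvol performance.client-io-threads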
On Mon, May 4, 2015 at 1:25 PM, Niels de Vos wrote:
> Hi all,
>
> [TLDR; jump down to the list of
On Mon, May 04, 2015 at 09:20:45AM +0530, Atin Mukherjee wrote:
> I see the following log from the brick process:
>
> [2015-05-04 03:43:50.309769] E [socket.c:823:__socket_server_bind]
> 4-tcp.patchy-server: binding to failed: Address already in use
This happens before the failing test 52 (volum
Hi Pranith,
Could you please provide a regression instance where the snapshot tests
failed? I had a look at
http://build.gluster.org/job/rackspace-regression-2GB-triggered/8148/consoleFull
but the logs for bug-1162498.t are not present for that instance.
Similarly, other instances recorded i
Hi all,
[TLDR; jump down to the list of xlators and see if they are in the right
package, please reply with corrections]
Many new features introduce new xlators. It is not always straightforward
to see if an xlator is intended for server-side usage, client-side, or
maybe even both. There a
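One rough way to sanity-check the current packaging on an installed system; the
rpm-based paths and the version wildcard below are assumptions about the layout:

  # which package owns a given xlator shared object?
  rpm -qf /usr/lib64/glusterfs/*/xlator/performance/io-threads.so
  rpm -qf /usr/lib64/glusterfs/*/xlator/protocol/server.so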
- Original Message -
> From: "Sachin Pandit"
> To: "Emmanuel Dreyfus"
> Cc: gluster-devel@gluster.org
> Sent: Monday, April 27, 2015 10:58:21 AM
> Subject: Re: [Gluster-devel] NetBSD regression status upate
>
>
>
> - Original Message -
> > From: "Emmanuel Dreyfus"
> > To: gl
Hi All,
There has been a spate of regression test failures (due to broken tests
or race conditions showing up) in the recent past [1] and I am inclined
to block 3.7.0 GA along with acceptance of patches until we fix *all*
regression test failures. We seem to have reached a point where this
se
- Original Message -
> From: "Sachin Pandit"
> To: "Pranith Kumar Karampuri"
> Cc: "Gluster Devel"
> Sent: Monday, May 4, 2015 11:31:01 AM
> Subject: Re: [Gluster-devel] regarding spurious failure
> tests/bugs/snapshot/bug-1162498.t
>
>
> - Original Message -
> > From: