Re: [Gluster-devel] DHT xlator, read and write a file during creating a new file

2017-05-02 Thread Tahereh Fattahi
Thank you.
I need this information broken down per brick.

On Tue, May 2, 2017 at 11:07 PM, Vijay Bellur  wrote:

>
>
> On Tue, May 2, 2017 at 8:00 AM, Tahereh Fattahi 
> wrote:
>
>> Hi
>>
>> I want to use a file as a counter when I create a file in dht xlator.
>> I mean, after creating a new file, I want to open a file in the same
>> directory with a special name, read it, update the counter and write it
>> back.
>> I think for this purpose I should open in dht_create_cbk, read in
>> dht_open_cbk and write in dht_readv_cbk.
>> I think I should use dht_open, dht_readv and dht_writev. Maybe I could
>> create inputs for these functions except the frame! Is it correct to use
>> the frame from the dht_create function?
>>
>> Is this scenario correct, or is there a better way?
>>
>>
> Have you tried the object count feature [1] ?
>
> Regards,
> Vijay
>
> [1] http://gluster-documentations.readthedocs.io/en/latest/Features/quota-object-count/
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] DHT xlator, read and write a file during creating a new file

2017-05-02 Thread Vijay Bellur
On Tue, May 2, 2017 at 8:00 AM, Tahereh Fattahi 
wrote:

> Hi
>
> I want to use a file as a counter when I create a file in dht xlator.
> I mean, after creating a new file, I want to open a file in the same
> directory with a special name, read it, update the counter and write it
> back.
> I think for this purpose I should open in dht_create_cbk, read in
> dht_open_cbk and write in dht_readv_cbk.
> I think I should use dht_open, dht_readv and dht_writev. Maybe I could
> create inputs for these functions except the frame! Is it correct to use
> the frame from the dht_create function?
>
> Is this scenario correct, or is there a better way?
>
>
Have you tried the object count feature [1] ?

Regards,
Vijay

[1]
http://gluster-documentations.readthedocs.io/en/latest/Features/quota-object-count/
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] [Brick-Multiplexing] Failure to create .trashcan in all bricks per node

2017-05-02 Thread Jiffin Tony Thottan
The following bugs were reported against the trash translator in a
brick-multiplexing-enabled environment:


1447389 
1447390 
1447392 

In all of the above cases the trash directory, namely .trashcan, was being
created on only one brick per node.
The trash directory is usually created within notify() inside the trash
translator on receiving the CHILD_UP event from the posix translator
[trash.c:2367]. This CHILD_UP event is sent by the posix translator on
receiving PARENT_UP.


When brick multiplexing is enabled, it seems that notify() is invoked only
on the first brick, which follows the normal path, but not on the other
bricks. On further debugging, we could see that glusterfs_graph_attach(),
apart from graph preparation and initialization, lacks the xlator_notify()
and parent_up calls that happen in the normal case.
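
For illustration, the missing piece appears to be something along these
lines. This is only a sketch to show the idea, not a tested fix; the helper
name and the assumption that graph->first is the entry point of the attached
brick graph are ours:

/* Sketch only, not a tested fix: after glusterfs_graph_attach() has prepared
 * and initialized the newly attached brick graph, something like this would
 * deliver the PARENT_UP that the normal startup path sends, so that posix
 * answers with CHILD_UP and trash can create .trashcan. */
static int
notify_attached_graph_sketch (glusterfs_graph_t *graph)
{
        xlator_t *top = graph->first;   /* assumed entry point of the graph */

        if (!top)
                return -1;

        /* the same event the non-multiplexed startup path delivers */
        return xlator_notify (top, GF_EVENT_PARENT_UP, top);
}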


Can you please shed some light on how we can move forward with these bugs?

Also, thanks to Atin and Nithya for their help in debugging the above issues.

Regards,
Jiffin

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Tests that fail with multiplexing turned on

2017-05-02 Thread Atin Mukherjee
On Tue, May 2, 2017 at 2:36 AM, Jeff Darcy  wrote:

> Since the vast majority of our tests run without multiplexing, I'm going
> to start running regular runs of all tests with multiplexing turned on.
> You can see the patch here:
>
> https://review.gluster.org/#/c/17145/
>
> There are currently two tests that fail with multiplexing.  Note that
> these are all tests that passed as of when multiplexing was introduced.
> I don't know about these specific tests, but most tests had passed with
> multiplexing turned on *many times* - sometimes literally over a hundred,
> because I did more runs than that during development.  These are tests
> that have been broken since then, because without regular tests the
> people making changes could not have known how their changes interact
> with multiplexing.
>
> 19:14:41
> ./tests/bugs/glusterd/bug-1367478-volume-start-validation-after-glusterd-
> restart.t
> ..
> 19:14:41 not ok 17 Got "0" instead of "1", LINENUM:37
> 19:14:41 FAILED COMMAND: 1 brick_up_status_1 patchy1 127.1.1.2
> /d/backends/2/patchy12
>

This is one of the problems we are trying to address through
https://review.gluster.org/#/c/17101, and this test was broken by
https://review.gluster.org/16866 .


20:52:10 ./tests/features/trash.t ..
> 20:52:10 not ok 53 Got "2" instead of "1", LINENUM:221
> 20:52:10 FAILED COMMAND: 1 online_brick_count
> 20:52:10 ok 54, LINENUM:223
> 20:52:10 ok 55, LINENUM:226
> 20:52:10 not ok 56 Got "3" instead of "2", LINENUM:227
> 20:52:10 FAILED COMMAND: 2 online_brick_count
> 20:52:10 ok 57, LINENUM:228
> 20:52:10 ok 58, LINENUM:233
> 20:52:10 ok 59, LINENUM:236
> 20:52:10 ok 60, LINENUM:237
> 20:52:10 not ok 61 , LINENUM:238
> 20:52:10 FAILED COMMAND: [ -e /mnt/glusterfs/0/abc -a ! -e
> /mnt/glusterfs/0/.trashcan ]
>

IMO, nothing specific to brick-mux. The online_brick_count function has a
flaw: it looks for the pids of all the processes instead of looking only for
the bricks. In this test one of the volumes was a replicate volume, hence shd
was up and you'd see one additional pidfile placed. This was actually caught
by Mohit while we were (and still are) working on patch 17101. The last
failure needs to be looked at.



>
> Do we have any volunteers to look into these?  I looked at the first one
> a bit and didn't find any obvious clues; I haven't looked at the second.
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-02 Thread Nithya Balachandran
On 2 May 2017 at 16:59, Shyam  wrote:

> Talur,
>
> Please wait for this fix before releasing 3.10.2.
>
> We will take in the change to either prevent add-brick in
> sharded+distributed volumes, or throw a warning and force the use of --force
> to execute this.
>
IIUC, the problem is less the add-brick operation and more the
rebalance/fix-layout. It is those that need to be prevented (as someone
could trigger them without an add-brick).

Nithya
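
To make that concrete, here is a minimal sketch of the kind of guard the
staging path would need; the helper name, the option key handling and the
error text are only illustrative assumptions, not an actual patch:

/* Sketch only, not an actual patch: refuse to stage a rebalance/fix-layout
 * on a volume that has sharding enabled until the underlying bug is fixed.
 * A real change may instead demand --force, and add-brick would need a
 * similar guard. */
static int
stage_check_shard_rebalance_sketch (glusterd_volinfo_t *volinfo,
                                    char **op_errstr)
{
        int sharded = glusterd_volinfo_get_boolean (volinfo, "features.shard");

        if (sharded > 0) {
                gf_asprintf (op_errstr,
                             "Rebalance/fix-layout on sharded volume %s is "
                             "known to cause data loss; refusing to start it.",
                             volinfo->volname);
                return -1;
        }

        return 0;
}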

> Let's get a bug going, and not wait for someone to report it in bugzilla,
> and also mark it as blocking 3.10.2 release tracker bug.
>
> Thanks,
> Shyam
>
> On 05/02/2017 06:20 AM, Pranith Kumar Karampuri wrote:
>
>>
>>
>> On Tue, May 2, 2017 at 9:16 AM, Pranith Kumar Karampuri
>> <pkara...@redhat.com> wrote:
>>
>> Yeah it is a good idea. I asked him to raise a bug and we can move
>> forward with it.
>>
>>
>> +Raghavendra/Nitya who can help with the fix.
>>
>>
>>
>> On Mon, May 1, 2017 at 9:07 PM, Joe Julian <j...@julianfamily.org> wrote:
>>
>>
>> On 04/30/2017 01:13 AM, lemonni...@ulrar.net wrote:
>>
>> So I was a little bit lucky. If I had all the hardware in place,
>> probably I would be fired after causing data loss by using software
>> marked as stable
>>
>> Yes, we lost our data last year to this bug, and it wasn't a
>> test cluster.
>> We still hear from it from our clients to this day.
>>
>> It is known that this feature is causing data loss and there is no
>> evidence or warning in the official docs.
>>
>> I was (I believe) the first one to run into the bug, it
>> happens and I knew it
>> was a risk when installing gluster.
>> But since then I didn't see any warnings anywhere except
>> here, I agree
>> with you that it should be mentioned in big bold letters on
>> the site.
>>
>> Might even be worth adding a warning directly on the cli
>> when trying to
>> add bricks if sharding is enabled, to make sure no-one will
>> destroy a
>> whole cluster for a known bug.
>>
>>
>> I absolutely agree - or, just disable the ability to add-brick
>> with sharding enabled. Losing data should never be allowed.
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org 
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>> 
>>
>>
>>
>>
>> --
>> Pranith
>>
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org 
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>> 
>>
>>
>>
>>
>> --
>> Pranith
>>
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Meeting minutes of todays Bug Triage

2017-05-02 Thread Niels de Vos
Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-02/weely_gluster_bug_triage.2017-05-02-12.12.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-02/weely_gluster_bug_triage.2017-05-02-12.12.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-05-02/weely_gluster_bug_triage.2017-05-02-12.12.log.html

Agenda for next week: 
https://github.com/gluster/glusterfs/wiki/Bug-Triage-Meeting

Before the meeting started, Amar mentioned that he and Shyam are working
on triage guidelines for GitHub issues.

14:01 < amarts> Should bug-triage meeting extend to github issues too?
14:02 < amarts> at least making sure the milestone and labels are set?
14:09 < ndevos> amarts: we have quite detailed steps for BZ in 
http://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/
14:10 < ndevos> amarts: do we have something similar, and queries for untriaged 
GitHub issues?
14:10 < amarts> ndevos, thanks
14:10 < amarts> shyam and me are working on one
14:10 < amarts> will post that in a week
14:11 < ndevos> amarts: also, should we only do the glusterfs GitHub issues, or 
also for other GitHub projects?
14:11 < amarts> for now, i guess glusterfs project is good, but we should 
expand the horizon soon


Meeting summary
---
* Roll Call  (ndevos, 12:13:08)

* Next weeks host  (ndevos, 12:14:11)
  * ACTION: rafi will host the next gluster bug triage meeting  (ndevos,
12:15:23)
  * Agenda is at
https://github.com/gluster/glusterfs/wiki/Bug-Triage-Meeting
(ndevos, 12:15:58)

* group triage  (ndevos, 12:16:59)
  * LINK: http://bit.ly/gluster-bugs-to-triage   (ndevos, 12:17:00)
  * LINK:
https://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/
(ndevos, 12:17:01)

* open floor  (ndevos, 12:41:24)

Meeting ended at 12:43:17 UTC.




Action Items

* rafi will host the next gluster bug triage meeting




Action Items, by person
---
* rafi
  * rafi will host the next gluster bug triage meeting
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* ndevos (31)
* rafi (16)
* zodbot (4)
* ankitr (3)
* jiffin (2)
* amarts (0)
* kkeithley (0)



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] DHT xlator, read and write a file during creating a new file

2017-05-02 Thread Tahereh Fattahi
Thank you very much.
I had tested this before. I want all the clients to be able to see this
counter and update it, so I added an extended attribute on the directory.
setxattr on the directory was very inefficient for every file create (I saw a
lot of lookup requests going to the server because of the setxattr) when our
workload generates a lot of files.
So which one is better in your opinion?

On Tue, May 2, 2017 at 5:06 PM, Amar Tumballi  wrote:

>
>
> On Tue, May 2, 2017 at 5:30 PM, Tahereh Fattahi 
> wrote:
>
>> Hi
>>
>> I want to use a file as a counter when I create a file in dht xlator.
>> I mean, after creating a new file, I want to open a file in the same
>> directory with a special name, read it, update the counter and write it
>> back.
>> I think for this purpose I should open in dht_create_cbk, read in
>> dht_open_cbk and write in dht_readv_cbk.
>> I think I should use dht_open, dht_readv and dht_writev. Maybe I could
>> create inputs for these functions except the frame! Is it correct to use
>> the frame from the dht_create function?
>>
>> Is this scenario correct, or is there a better way?
>>
>> This is correct, but very inefficient (so many different fops for one
> fop from the user). See if you can keep another extended attribute instead,
> which you can update. That way, you can just handle the counter management
> using the 'xattrop' or 'setxattr()' fops.
>
> Regards,
> Amar
>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Amar Tumballi (amarts)
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Coverity covscan for 2017-05-02-07cc8679 (master branch)

2017-05-02 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-05-02-07cc8679
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] DHT xlator, read and write a file during creating a new file

2017-05-02 Thread Amar Tumballi
On Tue, May 2, 2017 at 5:30 PM, Tahereh Fattahi 
wrote:

> Hi
>
> I want to use a file as a counter when I create a file in dht xlator.
> I mean, after creating a new file, I want to open a file in the same
> directory with a special name, read it, update the counter and write it
> back.
> I think for this purpose I should open in dht_create_cbk, read in
> dht_open_cbk and write in dht_readv_cbk.
> I think I should use dht_open, dht_readv and dht_writev. Maybe I could
> create inputs for these functions except the frame! Is it correct to use
> the frame from the dht_create function?
>
> Is this scenario correct, or is there a better way?
>
This is correct, but very inefficient (so many different fops for one fop
from the user). See if you can keep another extended attribute instead, which
you can update. That way, you can just handle the counter management using
the 'xattrop' or 'setxattr()' fops.
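
To make the idea concrete, here is a rough sketch of how such a counter could
be bumped with an atomic xattrop from inside dht. It assumes it sits in dht.c
next to its usual includes; the xattr key, the helper names and the use of a
copied frame are assumptions for illustration, not existing dht code:

/* Sketch only: bump a per-directory counter kept in an extended attribute by
 * winding an xattrop to the subvolume holding the directory.  posix adds the
 * supplied value to the stored one atomically, so concurrent creates do not
 * race the way an open/read/modify/write of a counter file would. */

static int
demo_counter_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                  int32_t op_ret, int32_t op_errno, dict_t *xattr,
                  dict_t *xdata)
{
        if (op_ret < 0)
                gf_log (this->name, GF_LOG_DEBUG,
                        "counter xattrop failed: %s", strerror (op_errno));

        STACK_DESTROY (frame->root);  /* frame came from copy_frame() below */
        return 0;
}

static int
demo_bump_dir_counter (call_frame_t *frame, xlator_t *this,
                       loc_t *parent_loc, xlator_t *subvol)
{
        call_frame_t *newframe = NULL;
        dict_t       *xattr    = NULL;
        int64_t      *delta    = NULL;

        newframe = copy_frame (frame);   /* keep the create fop's frame intact */
        if (!newframe)
                goto err;

        xattr = dict_new ();
        if (!xattr)
                goto err;

        delta = GF_CALLOC (1, sizeof (int64_t), gf_common_mt_char);
        if (!delta)
                goto err;
        *delta = hton64 (1);             /* increment by one, network order */

        if (dict_set_bin (xattr, "trusted.demo.dht.file-count",
                          delta, sizeof (int64_t))) {
                GF_FREE (delta);
                goto err;
        }

        STACK_WIND (newframe, demo_counter_cbk, subvol,
                    subvol->fops->xattrop, parent_loc,
                    GF_XATTROP_ADD_ARRAY64, xattr, NULL);

        dict_unref (xattr);
        return 0;

err:
        if (xattr)
                dict_unref (xattr);
        if (newframe)
                STACK_DESTROY (newframe->root);
        return -1;
}

The point of xattrop over a plain setxattr here is that the addition happens
on the server side, so concurrent creates from multiple clients do not
overwrite each other's counter value.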

Regards,
Amar

> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] DHT xlator, read and write a file during creating a new file

2017-05-02 Thread Tahereh Fattahi
Hi

I want to use a file as a counter when I create a file in dht xlator.
I mean, after creating a new file, I want to open a file in the same
directory with a special name, read it, update the counter and write it
back.
I think for this purpose I should open in dht_create_cbk, read in
dht_open_cbk and write in dht_readv_cbk.
I think I should use dht_open, dht_readv and dht_writev. Maybe I could
create inputs for these functions except the frame! Is it correct to use
the frame from the dht_create function?

Is this scenario correct, or is there a better way?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-02 Thread Shyam

Talur,

Please wait for this fix before releasing 3.10.2.

We will take in the change to either prevent add-brick in 
sharded+distributed volumes, or throw a warning and force the use of
--force to execute this.


Let's get a bug going, and not wait for someone to report it in 
bugzilla, and also mark it as blocking 3.10.2 release tracker bug.


Thanks,
Shyam

On 05/02/2017 06:20 AM, Pranith Kumar Karampuri wrote:



On Tue, May 2, 2017 at 9:16 AM, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:

Yeah it is a good idea. I asked him to raise a bug and we can move
forward with it.


+Raghavendra/Nitya who can help with the fix.



On Mon, May 1, 2017 at 9:07 PM, Joe Julian <j...@julianfamily.org> wrote:


On 04/30/2017 01:13 AM, lemonni...@ulrar.net wrote:

So I was a little bit lucky. If I had all the hardware in place,
probably I would be fired after causing data loss by using software
marked as stable

Yes, we lost our data last year to this bug, and it wasn't a
test cluster.
We still hear from it from our clients to this day.

It is known that this feature is causing data loss and there is no
evidence or warning in the official docs.

I was (I believe) the first one to run into the bug, it
happens and I knew it
was a risk when installing gluster.
But since then I didn't see any warnings anywhere except
here, I agree
with you that it should be mentioned in big bold letters on
the site.

Might even be worth adding a warning directly on the cli
when trying to
add bricks if sharding is enabled, to make sure no-one will
destroy a
whole cluster for a known bug.


I absolutely agree - or, just disable the ability to add-brick
with sharding enabled. Losing data should never be allowed.
___
Gluster-devel mailing list
Gluster-devel@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-devel





--
Pranith

___
Gluster-users mailing list
gluster-us...@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-users





--
Pranith


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 3.11: Has been Branched (and pending feature notes)

2017-05-02 Thread Pranith Kumar Karampuri
On Sun, Apr 30, 2017 at 9:01 PM, Shyam  wrote:

> Hi,
>
> Release 3.11 for gluster has been branched [1] and tagged [2].
>
> We have ~4 weeks to the release of 3.11, and a week to backport features
> that slipped the branching date (May 5th).
>
> A tracker BZ [3] has been opened for *blockers* of 3.11 release. Request
> that any bug that is determined as a blocker for the release be noted as a
> "blocks" against this bug.
>
> NOTE: Just a heads up, all bugs that are to be backported in the next 4
> weeks need not be reflected against the blocker; *only* blocker bugs
> identified as ones that should prevent the release need to be tracked
> against this tracker bug.
>
> We are not building beta1 packages, and will build out RC0 packages once
> we cross the backport dates. Hence, folks interested in testing this out
> can either build from the code or wait for (about) a week longer for the
> packages (and initial release notes).
>
> Features tracked as slipped and expected to be backported by 5th May are,
>
> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
>
> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
>   - Needs a +2 on https://review.gluster.org/13762
>
> 3) Enhance handleops readdirplus operation to return handles along with
> dirents #174 (@skoduri)
>
> 4) Halo - Initial version (@pranith)
>

I merged the patch on master and will send out the port on Thursday. I have
to leave right now to catch a train and am on leave tomorrow, so I will be
back on Thursday and get the port done. I will also try to get the other
patches the FB guys mentioned done after that, preferably by the 5th itself.


>
> Thanks,
> Kaushal, Shyam
>
> [1] 3.11 Branch: https://github.com/gluster/glusterfs/tree/release-3.11
>
> [2] Tag for 3.11.0beta1: https://github.com/gluster/glusterfs/tree/v3.11.0beta1
>
> [3] Tracker BZ for 3.11.0 blockers: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.0
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-02 Thread Pranith Kumar Karampuri
On Tue, May 2, 2017 at 9:16 AM, Pranith Kumar Karampuri  wrote:

> Yeah it is a good idea. I asked him to raise a bug and we can move forward
> with it.
>

+Raghavendra/Nitya who can help with the fix.


>
> On Mon, May 1, 2017 at 9:07 PM, Joe Julian  wrote:
>
>>
>> On 04/30/2017 01:13 AM, lemonni...@ulrar.net wrote:
>>
>>> So I was a little bit lucky. If I had all the hardware in place, probably I
 would be fired after causing data loss by using software marked as
 stable

>>> Yes, we lost our data last year to this bug, and it wasn't a test
>>> cluster.
>>> We still hear from it from our clients to this day.
>>>
>>> It is known that this feature is causing data loss and there is no
 evidence or warning in the official docs.

 I was (I believe) the first one to run into the bug, it happens and I
>>> knew it
>>> was a risk when installing gluster.
>>> But since then I didn't see any warnings anywhere except here, I agree
>>> with you that it should be mentioned in big bold letters on the site.
>>>
>>> Might even be worth adding a warning directly on the cli when trying to
>>> add bricks if sharding is enabled, to make sure no-one will destroy a
>>> whole cluster for a known bug.
>>>
>>
>> I absolutely agree - or, just disable the ability to add-brick with
>> sharding enabled. Losing data should never be allowed.
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Pranith
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] regression in master

2017-05-02 Thread Niels de Vos
On Tue, May 02, 2017 at 11:57:16AM +0530, Atin Mukherjee wrote:
> With the latest HEAD, all volume set operations fail for a volume which is
> in the STARTED state. I've figured out that commit 83abcba has caused it;
> what baffles me is that this patch had passed all the regressions. Are we
> installing the NFS .so files on the slave machines, which is the only way to
> avoid these failures? With a source install I hit it 10 out of 10 times.
> 
> https://review.gluster.org/#/c/17149/ addresses the issue. Appreciate your
> review.

The Gluster/NFS server .so is not built by default, but the test scripts
were extended to have the "./configure --enable-gnfs" option. If you do
source installs and want/need to test Gluster/NFS, you would need to
pass this option as well.

Our tests in the glusterfs repository assume that most configurable
features are available. I do not think we have a way to pass the
./configure or build options to the tests and run only a selection. Many
tests use Gluster/NFS, and skipping all NFS-related tests when the
xlator is not present does not look straightforward to me.

HTH,
Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Tests that fail with multiplexing turned on

2017-05-02 Thread Amar Tumballi
Thanks for this patch, Jeff.

It helps me run tests with brick multiplexing enabled.

Instead of manually triggering the run for this patch, as Niels mentioned,
it would be good to have a nightly run which runs a given patch (or patches)
against all regressions. I am planning to take this up as part of the Good
Builds effort Nigel is driving [1].

Regards,
Amar

[1] -
http://lists.gluster.org/pipermail/gluster-devel/2017-March/052245.html




On Tue, May 2, 2017 at 2:36 AM, Jeff Darcy  wrote:

> Since the vast majority of our tests run without multiplexing, I'm going
> to start running regular runs of all tests with multiplexing turned on.
> You can see the patch here:
>
> https://review.gluster.org/#/c/17145/
>
> There are currently two tests that fail with multiplexing.  Note that
> these are all tests that passed as of when multiplexing was introduced.
> I don't know about these specific tests, but most tests had passed with
> multiplexing turned on *many times* - sometimes literally over a hundred,
> because I did more runs than that during development.  These are tests
> that have been broken since then, because without regular tests the
> people making changes could not have known how their changes interact
> with multiplexing.
>
> 19:14:41
> ./tests/bugs/glusterd/bug-1367478-volume-start-validation-after-glusterd-
> restart.t
> ..
> 19:14:41 not ok 17 Got "0" instead of "1", LINENUM:37
> 19:14:41 FAILED COMMAND: 1 brick_up_status_1 patchy1 127.1.1.2
> /d/backends/2/patchy12
>
> 20:52:10 ./tests/features/trash.t ..
> 20:52:10 not ok 53 Got "2" instead of "1", LINENUM:221
> 20:52:10 FAILED COMMAND: 1 online_brick_count
> 20:52:10 ok 54, LINENUM:223
> 20:52:10 ok 55, LINENUM:226
> 20:52:10 not ok 56 Got "3" instead of "2", LINENUM:227
> 20:52:10 FAILED COMMAND: 2 online_brick_count
> 20:52:10 ok 57, LINENUM:228
> 20:52:10 ok 58, LINENUM:233
> 20:52:10 ok 59, LINENUM:236
> 20:52:10 ok 60, LINENUM:237
> 20:52:10 not ok 61 , LINENUM:238
> 20:52:10 FAILED COMMAND: [ -e /mnt/glusterfs/0/abc -a ! -e
> /mnt/glusterfs/0/.trashcan ]
>
> Do we have any volunteers to look into these?  I looked at the first one
> a bit and didn't find any obvious clues; I haven't looked at the second.
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel