Re: [Gluster-devel] [Gluster-Maintainers] Meeting minutes (7th March)

2018-03-07 Thread Atin Mukherjee
On Thu, 8 Mar 2018 at 11:43, Kaushal M  wrote:

> On Thu, Mar 8, 2018 at 10:21 AM, Amar Tumballi 
> wrote:
> > Meeting date: 03/07/2018 (March 7th, 2018. 19:30IST, 14:00UTC, 09:00EST)
> >
> > BJ Link
> >
> > Bridge: https://bluejeans.com/205933580
> > Download : https://bluejeans.com/s/mOGb7
> >
> > Attendance
> >
> > [Sorry Note] : Atin (conflicting meeting), Michael Adam, Amye, Niels de Vos,
> > Amar, Nigel, Jeff, Shyam, Kaleb, Kotresh
> >
> > Agenda
> >
> > AI from previous meeting:
> >
> > Email on version numbers: Still pending - Amar/Shyam
> >
> > Planning to do this by Friday (9th March)
> >
> > can we run regression suite with GlusterD2
> >
> > OK with failures, but can we run?
> > Nigel to run tests and give outputs
>
> Apologies for not attending this meeting.
>
> I can help get this up and running.
>
> But, I also wanted to set up a smoke job to run GD2 CI against glusterfs
> patches.
> This will help us catch changes that adversely affect GD2, in
> particular changes to the option_t and xlator_api_t structs.
> It will not be a particularly long test to run. On average the current
> GD2 centos-ci jobs finish in under 4 minutes.
> I expect that building glusterfs will add about 5 minutes more.
> This job should be simple enough to get set up, and I'd like it if we
> can set this up first.


+1, this is definitely needed going forward.


>
> >
> > Line coverage tests:
> >
> > SIGKILL was sent to processes, so the output was not proper.
> > Patch available, Nigel to test with the patch and give output before
> > merging.
> > [Nigel] what happens with GD2 ?
> >
> > [Shyam] https://github.com/gojp/goreportcard
> > [Shyam] (what I know)
> > https://goreportcard.com/report/github.com/gluster/glusterd2
> >
> > Gluster 4.0 is tagged:
> >
> > Retrospect meeting: Can this be google form?
> >
> > It usually is, let me find and paste the older one:
> >
> > 3.10 retro:
> >
> http://lists.gluster.org/pipermail/gluster-users/2017-February/030127.html
> > 3.11 retro: https://www.gluster.org/3-11-retrospectives/
> >
> > [Nigel] Can we make it less of a form, and keep it more generic?
> > [Shyam] That's mostly what the form tries to do. Prefer meeting & form
> >
> > Gluster Infra team is testing the distributed testing framework
> > contributed by FB
> >
> > [Nigel] Any issues, would like to collaborate
> > [Jeff] Happy to collaborate, let me know.
> >
> > Call out for features on 4-next
> >
> > Should the next release be LTM and 4.1, with the version number
> > change proposal picked up later?
> >
> > Bugzilla Automation:
> >
> > Planning to test it out next week.
> > AI: send the email first, and target to take the patches before next
> > maintainers meeting.
> >
> > Round Table
> >
> > [Kaleb] space is tight on download.gluster.org
> > * may we delete, e.g. purpleidea files? experimental (freebsd stuff from
> > 2014)?
> > * any way to get more space?
> > * [Nigel] Should be possible to do it, file a bug
> > * AI: Kaleb to file a bug
> >
> > Yesterday I noticed that some files (…/3.12/3.12.2/Debian/…) were not
> > owned by root:root. They were rsync_aide:rsync_aide. Was there an aborted
> > rsync job or something that left them like that?
> >
> > most glusterfs 4.0 packages are on download.g.o now. Starting on gd2
> > packages now.
> >
> > el7 packages are on the buildroot if someone (Shyam?) wants to get a
> > head start on testing them
> >
> > [Nigel] Testing IPv6 (with IPv4 on too), only 4 tests are consistently
> > failing. Need to look at it.
> >
> >
> >
> > --
> > Amar Tumballi (amarts)
> >
> > ___
> > maintainers mailing list
> > maintain...@gluster.org
> > http://lists.gluster.org/mailman/listinfo/maintainers
> >
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel

-- 
--Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Meeting minutes (7th March)

2018-03-07 Thread Kaushal M
On Thu, Mar 8, 2018 at 10:21 AM, Amar Tumballi  wrote:
> Meeting date: 03/07/2018 (March 7th, 2018. 19:30IST, 14:00UTC, 09:00EST)
>
> BJ Link
>
> Bridge: https://bluejeans.com/205933580
> Download : https://bluejeans.com/s/mOGb7
>
> Attendance
>
> [Sorry Note] : Atin (conflicting meeting), Michael Adam, Amye, Niels de Vos,
> Amar, Nigel, Jeff, Shyam, Kaleb, Kotresh
>
> Agenda
>
> AI from previous meeting:
>
> Email on version numbers: Still pending - Amar/Shyam
>
> Planning to do this by Friday (9th March)
>
> can we run regression suite with GlusterD2
>
> OK with failures, but can we run?
> Nigel to run tests and give outputs

Apologies for not attending this meeting.

I can help get this up and running.

But, I also wanted to set up a smoke job to run GD2 CI against glusterfs patches.
This will help us catch changes that adversely affect GD2, in
particular changes to the option_t and xlator_api_t structs.
It will not be a particularly long test to run. On average the current
GD2 centos-ci jobs finish in under 4 minutes.
I expect that building glusterfs will add about 5 minutes more.
This job should be simple enough to get set up, and I'd like it if we
can set this up first.
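
A minimal sketch of what such a smoke job could look like is below. It is
only an outline, not the actual centos-ci job definition: the $WORKSPACE
layout, configure flags, and the "make test" target for GD2 are assumptions.

    #!/bin/bash
    # Hypothetical smoke job: build the glusterfs patch under review, then
    # build GD2 against it and run its test suite.
    set -e

    # Build and install the glusterfs patch being reviewed
    # (roughly the extra ~5 minutes estimated above).
    cd "$WORKSPACE/glusterfs"
    ./autogen.sh
    ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
    make -j"$(nproc)"
    sudo make install

    # Build GD2 against the freshly installed headers/libraries and run its
    # tests (the current centos-ci GD2 jobs finish in under 4 minutes).
    cd "$WORKSPACE/glusterd2"
    make
    make test

If the GD2 build or tests break, the glusterfs patch gets flagged before
merge, which is exactly where option_t and xlator_api_t changes would surface.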

>
> Line coverage tests:
>
> SIGKILL was sent to processes, so the output was not proper.
> Patch available, Nigel to test with the patch and give output before
> merging.
> [Nigel] what happens with GD2 ?
>
> [Shyam] https://github.com/gojp/goreportcard
> [Shyam] (what I know)
> https://goreportcard.com/report/github.com/gluster/glusterd2
>
> Gluster 4.0 is tagged:
>
> Retrospect meeting: Can this be google form?
>
> It usually is, let me find and paste the older one:
>
> 3.10 retro:
> http://lists.gluster.org/pipermail/gluster-users/2017-February/030127.html
> 3.11 retro: https://www.gluster.org/3-11-retrospectives/
>
> [Nigel] Can we make it less of a form, and keep it more generic?
> [Shyam] That's mostly what the form tries to do. Prefer meeting & form
>
> Gluster Infra team is testing the distributed testing framework contributed
> by FB
>
> [Nigel] Any issues, would like to collaborate
> [Jeff] Happy to collaborate, let me know.
>
> Call out for features on 4-next
>
> Should the next release be LTM and 4.1, with the version number
> change proposal picked up later?
>
> Bugzilla Automation:
>
> Planning to test it out next week.
> AI: send the email first, and target to take the patches before next
> maintainers meeting.
>
> Round Table
>
> [Kaleb] space is tight on download.gluster.org
> * may we delete, e.g. purpleidea files? experimental (freebsd stuff from
> 2014)?
> * any way to get more space?
> * [Nigel] Should be possible to do it, file a bug
> * AI: Kaleb to file a bug
>
> yesterday I noticed that some files (…/3.12/3.12.2/Debian/…) were not owned
> by root:root. They were rsync_aide:rsync_aide. Was there an aborted rsync
> job or something that left them like that?
>
> most glusterfs 4.0 packages are on download.g.o now. Starting on gd2
> packages now.
>
> el7 packages are on the buildroot if someone (Shyam?) wants to get a head
> start on testing them
>
> [Nigel] Testing IPv6 (with IPv4 on too), only 4 tests are consistently
> failing. Need to look at it.
>
>
>
> --
> Amar Tumballi (amarts)
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Meeting minutes (7th March)

2018-03-07 Thread Amar Tumballi
Meeting date: 03/07/2018 (March 7th, 2018. 19:30IST, 14:00UTC, 09:00EST)
BJ Link

   - Bridge: https://bluejeans.com/205933580
   - Download : https://bluejeans.com/s/mOGb7

Attendance

   - [Sorry Note] : Atin (conflicting meeting), Michael Adam, Amye, Niels de Vos,
   - Amar, Nigel, Jeff, Shyam, Kaleb, Kotresh

Agenda

   - AI from previous meeting:
      - Email on version numbers: Still pending - Amar/Shyam
         - Planning to do this by Friday (9th March)
   - Can we run the regression suite with GlusterD2?
      - OK with failures, but can we run?
      - Nigel to run tests and give outputs
   - Line coverage tests:
      - SIGKILL was sent to processes, so the output was not proper.
      - Patch available; Nigel to test with the patch and give output
        before merging.
      - [Nigel] What happens with GD2?
         - [Shyam] https://github.com/gojp/goreportcard
         - [Shyam] (what I know)
           https://goreportcard.com/report/github.com/gluster/glusterd2
   - Gluster 4.0 is tagged:
      - Retrospect meeting: Can this be a Google form?
         - It usually is; let me find and paste the older ones:
            - 3.10 retro:
              http://lists.gluster.org/pipermail/gluster-users/2017-February/030127.html
            - 3.11 retro: https://www.gluster.org/3-11-retrospectives/
         - [Nigel] Can we make it less of a form, and keep it more generic?
         - [Shyam] That's mostly what the form tries to do. Prefer meeting &
           form.
   - Gluster Infra team is testing the distributed testing framework
     contributed by FB
      - [Nigel] Any issues, would like to collaborate
      - [Jeff] Happy to collaborate, let me know.
   - Call out for features on 4-next
      - Should the next release be LTM and 4.1, with the version number
        change proposal picked up later?
   - Bugzilla Automation:
      - Planning to test it out next week.
      - AI: send the email first, and target taking the patches in before
        the next maintainers meeting.
   - Round Table
      - [Kaleb] Space is tight on download.gluster.org
        * May we delete, e.g., purpleidea files? Experimental (FreeBSD stuff
          from 2014)?
        * Any way to get more space?
        * [Nigel] Should be possible to do it; file a bug
        * AI: Kaleb to file a bug
      - Yesterday I noticed that some files (…/3.12/3.12.2/Debian/…) were not
        owned by root:root. They were rsync_aide:rsync_aide. Was there an
        aborted rsync job or something that left them like that?
      - Most glusterfs 4.0 packages are on download.g.o now. Starting on gd2
        packages now.
      - el7 packages are on the buildroot if someone (Shyam?) wants to get a
        head start on testing them.
      - [Nigel] Testing IPv6 (with IPv4 on too), only 4 tests are
        consistently failing. Need to look at it.



-- 
Amar Tumballi (amarts)
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Nfs-ganesha-devel] nfsv3 client writing file gets Invalid argument on glusterfs with quota on

2018-03-07 Thread Daniel Gryniewicz

On 03/06/2018 10:10 PM, Kinglong Mee wrote:

On 2018/3/7 10:59, Kinglong Mee wrote:

When using nfsv3 on glusterfs-3.13.1-1.el7.x86_64 and
nfs-ganesha-2.6.0-0.2rc3.el7.centos.x86_64,
I get a strange "Invalid argument" error when writing a file.

1. With quota disabled:
the nfs client mounts the nfs-ganesha share and does 'll' in the testing directory.

2. Enable quota;
# getfattr -d -m . -e hex /root/rpmbuild/gvtest/nfs-ganesha/testfile92
getfattr: Removing leading '/' from absolute path names
# file: root/rpmbuild/gvtest/nfs-ganesha/testfile92
trusted.gfid=0xe2edaac0eca8420ebbbcba7e56bbd240
trusted.gfid2path.b3250af8fa558e66=0x3966313434352d653530332d343831352d396635312d3236633565366332633137642f7465737466696c653932
trusted.glusterfs.quota.9f1445ff-e503-4815-9f51-26c5e6c2c17d.contri.3=0x0201

Notice: testfile92 has no trusted.pgfid xattr.


The trusted.pgfid xattr will be created by the next named lookup; a nameless
lookup doesn't create it.


3. Restart the glusterfs volume with "gluster volume stop/start gvtest"


Restarting glusterfsd here clears the entire inode cache from memory;
after the restart, the inode of testfile92's parent is NULL.


4. echo somedata > testfile92


Because nfs-ganesha and the nfs client have testfile92 cached, no named lookup
happens before the write fop, so trusted.pgfid is not created for testfile92.

quota_writev calls quota_build_ancestry to build the ancestry in
quota_check_limit, but testfile92 doesn't have trusted.pgfid, so the write
fop fails with Invalid argument.

I have no idea how to fix this problem; any comments are welcome.
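
For anyone trying to reproduce or verify this, the steps above can be
restated as the rough command sequence below. The fuse mount point
(/mnt/gvtest) and the use of localhost are assumptions; the brick path is
the one from the getfattr output earlier in this mail.

    # Confirm the brick copy of the file still has no trusted.pgfid.* xattr.
    getfattr -d -m . -e hex /root/rpmbuild/gvtest/nfs-ganesha/testfile92

    # Trigger a named lookup through a native fuse mount of the same volume;
    # per the note above, a named lookup is what creates trusted.pgfid.
    mount -t glusterfs localhost:/gvtest /mnt/gvtest
    stat /mnt/gvtest/nfs-ganesha/testfile92

    # Re-check the xattrs: a trusted.pgfid.* entry should now be present, and
    # a retried write from the NFS client should no longer return EINVAL.
    getfattr -d -m . -e hex /root/rpmbuild/gvtest/nfs-ganesha/testfile92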



I think, ideally, Gluster would send an invalidate upcall under these
circumstances, causing Ganesha to drop its cached entry.


Daniel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] tendrl-release v1.6.1 (milestone-3 2018) is available

2018-03-07 Thread Rohan Kanade
Hello,

The Tendrl team is happy to present tendrl-release v1.6.1 (milestone-3 2018)

Install docs:
https://github.com/Tendrl/documentation/wiki/Tendrl-release-v1.6.1-(install-guide)

Metrics: https://github.com/Tendrl/documentation/wiki/Metrics

Changes:
UI
- https://github.com/Tendrl/ui/milestone/3

API
- https://github.com/Tendrl/api/milestone/3

Backend
- Ability to auto manage/expand new nodes in clusters managed by Tendrl

Backend (details)
- https://github.com/Tendrl/commons/milestone/3
- https://github.com/Tendrl/node-agent/milestone/3
- https://github.com/Tendrl/gluster-integration/milestone/3
- https://github.com/Tendrl/monitoring-integration/milestone/3
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Coverity covscan for 2018-03-07-2a1adc5c (master branch)

2018-03-07 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-03-07-2a1adc5c
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged

2018-03-07 Thread Javier Romero
2018-03-07 9:29 GMT-03:00 Shyam Ranganathan :
> On 03/05/2018 09:05 AM, Javier Romero wrote:
>>> I am about halfway through my own upgrade testing (using centOS7
>>> containers), and it is patterned around this [1], in case that helps.
>> Taking a look at this.
>>
>>
>
>
> Thanks for confirming the install of the bits.
>
> On the upgrade front, I did find some issues that are since fixed. We
> are in the process of rolling out the GA (general availability)
> packages for 4.0.0, and if you have not started on the upgrades, I would
> wait till these are announced, before testing them out.
>
> We usually test the upgrades (and package sanity all over again on the
> GA bits) before announcing the release.
>
> Thanks again,
> Shyam


OK, I will wait till those packages are announced before testing them out.

Thanks for your attention.

Regards,
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged

2018-03-07 Thread Shyam Ranganathan
On 03/05/2018 09:05 AM, Javier Romero wrote:
>> I am about halfway through my own upgrade testing (using centOS7
>> containers), and it is patterned around this [1], in case that helps.
> Taking a look at this.
> 
> 


Thanks for confirming the install of the bits.

On the upgrade front, I did find some issues that are since fixed. We
are in the process of rolling out the GA (general availability)
packages for 4.0.0, and if you have not started on the upgrades, I would
wait till these are announced, before testing them out.

We usually test the upgrades (and package sanity all over again on the
GA bits) before announcing the release.

Thanks again,
Shyam
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] namespace.t fails with brick multiplexing enabled

2018-03-07 Thread Atin Mukherjee
On Wed, Mar 7, 2018 at 7:38 AM, Varsha Rao  wrote:

> Hello Atin,
>
> On Tue, Mar 6, 2018 at 10:23 PM, Atin Mukherjee 
> wrote:
> > Looks like the failure is back again. Refer to
> > https://build.gluster.org/job/regression-test-with-multiplex/663/console
> > and this has been failing in other occurrences too.
>
> Is the core dump file available for this? When testing locally, the
> namespace test does not fail.
>

The test didn't generate any core. It failed functionally.

09:32:12 not ok 15 Got "N" instead of "Y", LINENUM:85
09:32:12 FAILED COMMAND: Y check_samples CREATE 28153613 /namespace/foo patchy3


You may have to file a bug under the gluster-infrastructure component asking
for a machine to debug this further, given that it is not reproducible locally.
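
As the quoted thread below notes, check_samples() in tests/basic/namespace.t
assumes one brick per glusterfsd process, each with its own log file, which
no longer holds with brick multiplexing. A purely hypothetical sketch of the
kind of adjustment being discussed (this is not the actual helper from the
test; the log location and grep pattern are assumptions) would be to scan
every brick log instead of a single per-brick log:

    # Hypothetical replacement, not the real test helper: with multiplexing,
    # several bricks share one glusterfsd process and one log file, so look
    # for the expected namespace sample in all brick logs.
    function check_samples {
            # "$volume" is accepted to match the call signature but unused here.
            local fop="$1" ns_hash="$2" path="$3" volume="$4"
            local log
            for log in /var/log/glusterfs/bricks/*.log; do
                    if grep -q "${fop}.*${ns_hash}.*${path}" "$log"; then
                            echo "Y"
                            return 0
                    fi
            done
            echo "N"
            return 1
    }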


>
> Thanks,
> Varsha
>
> > On Mon, Feb 26, 2018 at 2:58 PM, Varsha Rao  wrote:
> >>
> >> Hi Atin,
> >>
> >> On Mon, Feb 26, 2018 at 12:18 PM, Atin Mukherjee 
> >> wrote:
> >> > Hi Varsha,
> >> >
> >> > Thanks for your first feature "namespace" in GlusterFS! As we run
> >> > periodic regression jobs with brick multiplexing, we have seen that
> >> > tests/basic/namespace.t fails constantly with brick multiplexing
> >> > enabled. I just went through the function check_samples() in the test
> >> > file and it looked to me like the function was written with the
> >> > assumption that every process will be associated with one brick
> >> > instance and will have one log file, which is not the case for brick
> >> > multiplexing [1]. If you have further questions on
> >> > brick multiplexing, feel free to ask.
> >>
> >> Yes, it was written with that assumption. I will send a patch to fix it.
> >>
> >> Thanks,
> >> Varsha
> >
> >
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel