Re: [Gluster-devel] mount_dir value seems clobbered in all /var/lib/glusterd/vols//bricks/: files

2016-08-04 Thread Avra Sengupta

In that case it is expected behaviour. Thanks.

Regards,
Avra

On 08/05/2016 11:36 AM, Milind Changire wrote:

The bricks are NOT lvm mounted.
The bricks are just directories on the root file-system.

Milind

On 08/05/2016 11:25 AM, Avra Sengupta wrote:

Hi Milind,

Are the bricks lvm mounted bricks? This field is populated for lvm
mounted bricks, and used by them. For regular bricks, which don't have a
mount point, this value is ignored.

Regards,
Avra

On 08/04/2016 07:44 PM, Atin Mukherjee wrote:

glusterd_get_brick_mount_dir() does a brick_dir++, which seems to
cause this problem, and removing this line fixes the problem. Commit
f846e54b introduced it.

Ccing Avra/Rajesh

mount_dir is used by snapshot; however, I am just wondering how we are
surviving this case.

~Atin

On Thu, Aug 4, 2016 at 5:39 PM, Milind Changire <mchan...@redhat.com> wrote:

here's one of the brick definition files for a volume named "twoXtwo"


[root@f24node0 bricks]# cat f24node1\:-glustervols-twoXtwo-dir
hostname=f24node1
path=/glustervols/twoXtwo/dir
real_path=/glustervols/twoXtwo/dir
listen-port=0
rdma.listen-port=0
decommissioned=0
brick-id=twoXtwo-client-1
mount_dir=/lustervols/twoXtwo/dir  <-- shouldn't the value be
                                       /glustervols/...
                                       there's a missing 'g'
                                       after the first '/'
snap-status=0


This *should* happen for all volumes and for all such brick definition
files or whatever they are called.
BTW, I'm working with the upstream mainline sources, if that helps.

I'm running a 2x2 distribute-replicate volume.
4 nodes with 1 brick per node.
1 brick for the hot tier for tiering.

As far as I can tell, I haven't done anything fancy with the setup.
And I have confirmed that there is no directory named '/lustervols'
on any of my cluster nodes.

--
Milind




--

--Atin






Re: [Gluster-devel] mount_dir value seems clobbered in all /var/lib/glusterd/vols//bricks/: files

2016-08-04 Thread Milind Changire

The bricks are NOT lvm mounted.
The bricks are just directories on the root file-system.

Milind

On 08/05/2016 11:25 AM, Avra Sengupta wrote:

Hi Milind,

Are the bricks lvm mounted bricks? This field is populated for lvm
mounted bricks, and used by them. For regular bricks, which don't have a
mount point, this value is ignored.

Regards,
Avra

On 08/04/2016 07:44 PM, Atin Mukherjee wrote:

glusterd_get_brick_mount_dir() does a brick_dir++, which seems to
cause this problem, and removing this line fixes the problem. Commit
f846e54b introduced it.

Ccing Avra/Rajesh

mount_dir is used by snapshot; however, I am just wondering how we are
surviving this case.

~Atin

On Thu, Aug 4, 2016 at 5:39 PM, Milind Changire <mchan...@redhat.com> wrote:

here's one of the brick definition files for a volume named "twoXtwo"

[root@f24node0 bricks]# cat f24node1\:-glustervols-twoXtwo-dir
hostname=f24node1
path=/glustervols/twoXtwo/dir
real_path=/glustervols/twoXtwo/dir
listen-port=0
rdma.listen-port=0
decommissioned=0
brick-id=twoXtwo-client-1
mount_dir=/lustervols/twoXtwo/dir  <-- shouldn't the value be
   /glustervols/...
   there's a missing 'g'
   after the first '/'
snap-status=0


This *should* happen for all volumes and for all such brick definition
files or whatever they are called.
BTW, I'm working with the upstream mainline sources, if that helps.

I'm running a 2x2 distribute-replicate volume.
4 nodes with 1 brick per node.
1 brick for the hot tier for tiering.

As far as I can tell, I haven't done anything fancy with the setup.
And I have confirmed that there is no directory named '/lustervols'
on any of my cluster nodes.

--
Milind




--

--Atin





Re: [Gluster-devel] mount_dir value seems clobbered in all /var/lib/glusterd/vols//bricks/: files

2016-08-04 Thread Avra Sengupta

Hi Milind,

Are the bricks lvm mounted bricks? This field is populated for lvm
mounted bricks, and used by them. For regular bricks, which don't have a
mount point, this value is ignored.


Regards,
Avra

On 08/04/2016 07:44 PM, Atin Mukherjee wrote:
glusterd_get_brick_mount_dir() does a brick_dir++, which seems to
cause this problem, and removing this line fixes the problem. Commit
f846e54b introduced it.


Ccing Avra/Rajesh

mount_dir is used by snapshot; however, I am just wondering how we are
surviving this case.


~Atin

On Thu, Aug 4, 2016 at 5:39 PM, Milind Changire wrote:


here's one of the brick definition files for a volume named "twoXtwo"

[root@f24node0 bricks]# cat f24node1\:-glustervols-twoXtwo-dir
hostname=f24node1
path=/glustervols/twoXtwo/dir
real_path=/glustervols/twoXtwo/dir
listen-port=0
rdma.listen-port=0
decommissioned=0
brick-id=twoXtwo-client-1
mount_dir=/lustervols/twoXtwo/dir  <-- shouldn't the value be
 /glustervols/...
   there's a missing 'g'
   after the first '/'
snap-status=0


This *should* happen for all volumes and for all such brick definition
files or whatever they are called.
BTW, I'm working with the upstream mainline sources, if that helps.

I'm running a 2x2 distribute-replicate volume.
4 nodes with 1 brick per node.
1 brick for the hot tier for tiering.

As far as I can tell, I haven't done anything fancy with the setup.
And I have confirmed that there is no directory named '/lustervols'
on any of my cluster nodes.

-- 
Milind





--

--Atin



Re: [Gluster-devel] regression burn-in summary over the last 7 days

2016-08-04 Thread Vijay Bellur
On Wed, Aug 3, 2016 at 1:37 PM, Atin Mukherjee  wrote:
> I think we should compare the (n-1)th week's report with the nth, and any
> common tests out of that comparison having more than or equal to 3 instances
> of failures SHOULD immediately be marked as bad, and any component having
> more than 5 bad tests should be BLOCKED for further merges till the bad tests
> are fixed. What do others think about it?
>

If we can automate this workflow, I am in favor of the proposal.
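A rough sketch of the kind of check such automation could run, under one
reading of the thresholds above (made-up sample counts, no real report
parsing, everything here is hypothetical):

/* Sketch only: flag a test as bad if it appears in both weekly reports and
 * has >= 3 failures in the newer one, and flag a component once it has more
 * than 5 bad tests.  Parsing the actual regression reports is left out. */
#include <stdio.h>

struct test_stat {
        const char *name;
        const char *component;
        int failures_prev;      /* week n-1 */
        int failures_curr;      /* week n */
};

int
main(void)
{
        /* made-up sample data for illustration */
        struct test_stat tests[] = {
                { "./tests/bugs/gfapi/bug-1093594.t", "gfapi", 1, 1 },
                { "./tests/bugs/foo/example.t", "foo", 2, 4 },
        };
        int i, n = sizeof(tests) / sizeof(tests[0]);
        int bad_in_component = 0;       /* would be tracked per component */

        for (i = 0; i < n; i++) {
                if (tests[i].failures_prev > 0 && tests[i].failures_curr >= 3) {
                        printf("mark bad: %s (%s)\n", tests[i].name,
                               tests[i].component);
                        bad_in_component++;
                }
        }

        /* proposal: block merges for a component with more than 5 bad tests */
        if (bad_in_component > 5)
                printf("block merges for the component\n");

        return 0;
}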

-Vijay


Re: [Gluster-devel] [Gluster-users] GlusterFS-3.7.14 released

2016-08-04 Thread Serkan Çoban
Thanks Pranith,
I am waiting for the RPMs to show up; I will do the tests as soon as
possible and inform you.

On Wed, Aug 3, 2016 at 11:19 PM, Pranith Kumar Karampuri
 wrote:
>
>
> On Thu, Aug 4, 2016 at 1:47 AM, Pranith Kumar Karampuri
>  wrote:
>>
>>
>>
>> On Thu, Aug 4, 2016 at 12:51 AM, Serkan Çoban 
>> wrote:
>>>
>>> I use rpms for installation. Redhat/Centos 6.8.
>>
>>
>> http://review.gluster.org/#/c/15084 is the patch. In some time the rpms
>> will be built actually.
>
>
> In the same URL above it will actually post the rpms for fedora/el6/el7 at
> the end of the page.
>
>>
>>
>> Use gluster volume set <volname> disperse.shd-max-threads <number-of-threads (range: 1-64)>
>>
>> While testing this I thought of ways to decrease the number of crawls as
>> well. But they are a bit involved. Try to create same set of data and see
>> what is the time it takes to complete heals using number of threads as you
>> increase the number of parallel heals from 1 to 64.
>>
>>>
>>> On Wed, Aug 3, 2016 at 10:16 PM, Pranith Kumar Karampuri
>>>  wrote:
>>> >
>>> >
>>> > On Thu, Aug 4, 2016 at 12:45 AM, Serkan Çoban 
>>> > wrote:
>>> >>
>>> >> I prefer 3.7 if it is ok for you. Can you also provide build
>>> >> instructions?
>>> >
>>> >
>>> > 3.7 should be fine. Do you use rpms/debs/anything-else?
>>> >
>>> >>
>>> >>
>>> >> On Wed, Aug 3, 2016 at 10:12 PM, Pranith Kumar Karampuri
>>> >>  wrote:
>>> >> >
>>> >> >
>>> >> > On Thu, Aug 4, 2016 at 12:37 AM, Serkan Çoban
>>> >> > 
>>> >> > wrote:
>>> >> >>
>>> >> >> Yes, but I can create 2+1(or 8+2) ec using two servers right? I
>>> >> >> have
>>> >> >> 26 disks on each server.
>>> >> >
>>> >> >
>>> >> > On which release-branch do you want the patch? I am testing it on
>>> >> > master-branch now.
>>> >> >
>>> >> >>
>>> >> >>
>>> >> >> On Wed, Aug 3, 2016 at 9:59 PM, Pranith Kumar Karampuri
>>> >> >>  wrote:
>>> >> >> >
>>> >> >> >
>>> >> >> > On Thu, Aug 4, 2016 at 12:23 AM, Serkan Çoban
>>> >> >> > 
>>> >> >> > wrote:
>>> >> >> >>
>>> >> >> >> I have two of my storage servers free, I think I can use them
>>> >> >> >> for
>>> >> >> >> testing. Is two server testing environment ok for you?
>>> >> >> >
>>> >> >> >
>>> >> >> > I think it would be better if you have at least 3. You can test
>>> >> >> > it
>>> >> >> > with
>>> >> >> > 2+1
>>> >> >> > ec configuration.
>>> >> >> >
>>> >> >> >>
>>> >> >> >>
>>> >> >> >> On Wed, Aug 3, 2016 at 9:44 PM, Pranith Kumar Karampuri
>>> >> >> >>  wrote:
>>> >> >> >> >
>>> >> >> >> >
>>> >> >> >> > On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban
>>> >> >> >> > 
>>> >> >> >> > wrote:
>>> >> >> >> >>
>>> >> >> >> >> Hi,
>>> >> >> >> >>
>>> >> >> >> >> May I ask if multi-threaded self heal for distributed
>>> >> >> >> >> disperse
>>> >> >> >> >> volumes
>>> >> >> >> >> implemented in this release?
>>> >> >> >> >
>>> >> >> >> >
>>> >> >> >> > Serkan,
>>> >> >> >> > At the moment I am a bit busy with different work, Is
>>> >> >> >> > it
>>> >> >> >> > possible
>>> >> >> >> > for you to help test the feature if I provide a patch?
>>> >> >> >> > Actually
>>> >> >> >> > the
>>> >> >> >> > patch
>>> >> >> >> > should be small. Testing is where lot of time will be spent
>>> >> >> >> > on.
>>> >> >> >> >
>>> >> >> >> >>
>>> >> >> >> >>
>>> >> >> >> >> Thanks,
>>> >> >> >> >> Serkan
>>> >> >> >> >>
>>> >> >> >> >> On Tue, Aug 2, 2016 at 5:30 PM, David Gossage
>>> >> >> >> >>  wrote:
>>> >> >> >> >> > On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson
>>> >> >> >> >> >  wrote:
>>> >> >> >> >> >>
>>> >> >> >> >> >> On 2/08/2016 5:07 PM, Kaushal M wrote:
>>> >> >> >> >> >>>
>>> >> >> >> >> >>> GlusterFS-3.7.14 has been released. This is a regular
>>> >> >> >> >> >>> minor
>>> >> >> >> >> >>> release.
>>> >> >> >> >> >>> The release-notes are available at
>>> >> >> >> >> >>>
>>> >> >> >> >> >>>
>>> >> >> >> >> >>>
>>> >> >> >> >> >>>
>>> >> >> >> >> >>>
>>> >> >> >> >> >>>
>>> >> >> >> >> >>> https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md
>>> >> >> >> >> >>
>>> >> >> >> >> >>
>>> >> >> >> >> >> Thanks Kaushal, I'll check it out
>>> >> >> >> >> >>
>>> >> >> >> >> >
>>> >> >> >> >> > So far on my test box its working as expected.  At least
>>> >> >> >> >> > the
>>> >> >> >> >> > issues
>>> >> >> >> >> > that
>>> >> >> >> >> > prevented it from running as before have disappeared.  Will
>>> >> >> >> >> > need
>>> >> >> >> >> > to
>>> >> >> >> >> > see
>>> >> >> >> >> > how
>>> >> >> >> >> > my test VM behaves after a few days.
>>> >> >> >> >> >
>>> >> >> >> >> >
>>> >> >> >> >> >
>>> >> >> >> >> >> --
>>> >> >> >> >> >> Lindsay Mathieson
>>> >> >> >> >> >>
>>> >> >> >> >> >> ___
>>> >> >> >> >> >> Gluster-users mailing list
>>> >> >> >> >> >> gluster-us...@gluster.org
>>> >> >> >> >> >> http://www.gluster.org/mailman/listinfo/gluster-users
>>> >> >> >> >> >
>>> >> >> >> >> >
>>> >> >> >> >> >
>>> >> >> >> >> > ___
>>> >> >> >> >> > Gluster-users mailing list
>>> >> >>

Re: [Gluster-devel] mount_dir value seems clobbered in all /var/lib/glusterd/vols//bricks/: files

2016-08-04 Thread Atin Mukherjee
glusterd_get_brick_mount_dir() does a brick_dir++, which seems to cause
this problem, and removing this line fixes the problem. Commit f846e54b
introduced it.

Ccing Avra/Rajesh

mount_dir is used by snapshot; however, I am just wondering how we are
surviving this case.
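To make the suspicion concrete, here is a minimal standalone sketch (not the
actual glusterd source) of how an extra pointer increment can clobber the
first character of the brick directory when the brick sits directly on the
root filesystem:

/* Illustrative only: mimic deriving mount_dir from a brick path and its
 * mount point.  The brick_dir++ skips the '/' separator when the mount
 * point has no trailing slash (the lvm-mounted case), but when the mount
 * point is "/" it eats the first character of the directory name instead
 * ("g" in "glustervols"). */
#include <stdio.h>
#include <string.h>

static void
build_mount_dir(const char *brick_path, const char *mnt_pt,
                char *out, size_t len)
{
        const char *brick_dir = brick_path + strlen(mnt_pt);

        brick_dir++;    /* the suspect increment */

        snprintf(out, len, "/%s", brick_dir);
}

int
main(void)
{
        char mount_dir[256];

        /* lvm-mounted brick: mount point is a dedicated directory */
        build_mount_dir("/run/gluster/snaps/brick1", "/run/gluster/snaps",
                        mount_dir, sizeof(mount_dir));
        printf("%s\n", mount_dir);      /* prints /brick1 */

        /* brick directory sitting directly on the root filesystem */
        build_mount_dir("/glustervols/twoXtwo/dir", "/",
                        mount_dir, sizeof(mount_dir));
        printf("%s\n", mount_dir);      /* prints /lustervols/twoXtwo/dir */

        return 0;
}

That would also explain why the lvm-mounted/snapshot path keeps working while
root-filesystem bricks show the clobbered value.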

~Atin

On Thu, Aug 4, 2016 at 5:39 PM, Milind Changire  wrote:

> here's one of the brick definition files for a volume named "twoXtwo"
>
> [root@f24node0 bricks]# cat f24node1\:-glustervols-twoXtwo-dir
> hostname=f24node1
> path=/glustervols/twoXtwo/dir
> real_path=/glustervols/twoXtwo/dir
> listen-port=0
> rdma.listen-port=0
> decommissioned=0
> brick-id=twoXtwo-client-1
> mount_dir=/lustervols/twoXtwo/dir  <-- shouldn't the value be
>/glustervols/...
>there's a missing 'g'
>after the first '/'
> snap-status=0
>
>
> This *should* happen for all volumes and for all such brick definition
> files or whatever they are called.
> BTW, I'm working with the upstream mainline sources, if that helps.
>
> I'm running a 2x2 distribute-replicate volume.
> 4 nodes with 1 brick per node.
> 1 brick for the hot tier for tiering.
>
> As far as I can tell, I haven't done anything fancy with the setup.
> And I have confirmed that there is no directory named '/lustervols'
> on any of my cluster nodes.
>
> --
> Milind
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 

--Atin

Re: [Gluster-devel] regression burn-in summary over the last 7 days

2016-08-04 Thread Milind Changire

On 08/04/2016 05:40 PM, Kaleb KEITHLEY wrote:

On 08/04/2016 08:07 AM, Niels de Vos wrote:

On Thu, Aug 04, 2016 at 12:00:53AM +0200, Niels de Vos wrote:

On Wed, Aug 03, 2016 at 10:30:28AM -0400, Vijay Bellur wrote:


...

./tests/bugs/gfapi/bug-1093594.t ; Failed 1 times
Regression Links:
https://build.gluster.org/job/regression-test-burn-in/1423/consoleFull


I have not seen this fail yet... All gfapi tests are running in a loop
on a test-system now; we'll see if it is reproducible in a few days or so.


It seems that glfs_fini() returns -1 every now and then (once after 1027
iterations, once after 287). Some of the gfapi test cases actually
succeed their intended test, but still return an error when glfs_fini()
fails. I am tempted to just skip this error in most tests and have only
tests/basic/gfapi/libgfapi-fini-hang error out on it. (Obviously also
intend to fix the failure.)


If you fix the bug in glfs_fini() then it should not be necessary to
ignore the failure in the tests, right?

Just fix the bug, don't hack the test.

--

Kaleb




I've faced similar issues with glfs_fini() while working on the bareos
integration. When using a libgfapi built with --enable-debug, an assert
causes the process to dump core.

https://bugzilla.redhat.com/show_bug.cgi?id=1233136 may be worth addressing.

Milind


Re: [Gluster-devel] regression burn-in summary over the last 7 days

2016-08-04 Thread Kaleb KEITHLEY
On 08/04/2016 08:07 AM, Niels de Vos wrote:
> On Thu, Aug 04, 2016 at 12:00:53AM +0200, Niels de Vos wrote:
>> On Wed, Aug 03, 2016 at 10:30:28AM -0400, Vijay Bellur wrote:
>> 
> ...
>>> ./tests/bugs/gfapi/bug-1093594.t ; Failed 1 times
>>> Regression Links:
>>> https://build.gluster.org/job/regression-test-burn-in/1423/consoleFull
>>
>> I have not seen this fail yet... All gfapi tests are running in a loop
>> on a test-system now; we'll see if it is reproducible in a few days or so.
> 
> It seems that glfs_fini() returns -1 every now and then (once after 1027
> iterations, once after 287). Some of the gfapi test cases actually
> succeed their intended test, but still return an error when glfs_fini()
> fails. I am tempted to just skip this error in most tests and have only
> tests/basic/gfapi/libgfapi-fini-hang error out on it. (Obviously also
> intend to fix the failure.)

If you fix the bug in glfs_fini() then it should not be necessary to
ignore the failure in the tests, right?

Just fix the bug, don't hack the test.

--

Kaleb






[Gluster-devel] mount_dir value seems clobbered in all /var/lib/glusterd/vols//bricks/: files

2016-08-04 Thread Milind Changire

here's one of the brick definition files for a volume named "twoXtwo"

[root@f24node0 bricks]# cat f24node1\:-glustervols-twoXtwo-dir
hostname=f24node1
path=/glustervols/twoXtwo/dir
real_path=/glustervols/twoXtwo/dir
listen-port=0
rdma.listen-port=0
decommissioned=0
brick-id=twoXtwo-client-1
mount_dir=/lustervols/twoXtwo/dir  <-- shouldn't the value be
   /glustervols/...
   there's a missing 'g'
   after the first '/'
snap-status=0


This *should* happen for all volumes and for all such brick definition
files or whatever they are called.
BTW, I'm working with the upstream mainline sources, if that helps.

I'm running a 2x2 distribute-replicate volume.
4 nodes with 1 brick per node.
1 brick for the hot tier for tiering.

As far as I can tell, I haven't done anything fancy with the setup.
And I have confirmed that there is no directory named '/lustervols'
on any of my cluster nodes.

--
Milind


Re: [Gluster-devel] regression burn-in summary over the last 7 days

2016-08-04 Thread Niels de Vos
On Thu, Aug 04, 2016 at 12:00:53AM +0200, Niels de Vos wrote:
> On Wed, Aug 03, 2016 at 10:30:28AM -0400, Vijay Bellur wrote:
> 
...
> > ./tests/bugs/gfapi/bug-1093594.t ; Failed 1 times
> > Regression Links:
> > https://build.gluster.org/job/regression-test-burn-in/1423/consoleFull
> 
> I have not seen this fail yet... All gfapi tests are running in a loop
> on a test-system now; we'll see if it is reproducible in a few days or so.

It seems that glfs_fini() returns -1 every now and then (once after 1027
iterations, once after 287). Some of the gfapi test cases actually
succeed their intended test, but still return an error when glfs_fini()
fails. I am tempted to just skip this error in most tests and have only
tests/basic/gfapi/libgfapi-fini-hang error out on it. (Obviously also
intend to fix the failure.)
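
For illustration, here is a minimal sketch of the pattern being suggested,
with placeholder volume and host names (this is not an existing test): the
intended check decides the test result, and a glfs_fini() failure is only
logged.

/* Sketch only: run the intended check, report that as the test result, and
 * merely log a glfs_fini() failure instead of letting it override the
 * verdict.  Build with: gcc test.c -lgfapi */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int
main(void)
{
        int test_ret = 1;

        glfs_t *fs = glfs_new("patchy");        /* placeholder volume name */
        if (!fs)
                return 1;

        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        glfs_set_logging(fs, "/dev/stderr", 7);

        if (glfs_init(fs) == 0) {
                /* ... the actual test operations would go here ... */
                test_ret = 0;
        }

        /* glfs_fini() occasionally returns -1; log it, don't fail on it */
        if (glfs_fini(fs) != 0)
                fprintf(stderr, "glfs_fini() failed (ignored here)\n");

        return test_ret;
}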

Thoughts?

Niels



Re: [Gluster-devel] Packaging libgfapi-python

2016-08-04 Thread Niels de Vos
On Thu, Aug 04, 2016 at 05:47:04AM -0400, Prashanth Pai wrote:
> Hi all,
> 
> The official Python bindings to libgfapi have been around for a while now
> in their source repo[1], and the API documentation[2] is also up. However,
> they weren't made available over regular release channels until now, and
> users had to install them from source (which is very easy[3] for Python
> projects).
> 
> There are a few ways libgfapi-python can be made available as packages over
> yum/pypi:
> 
> 1. Import the libgfapi-python repo as a git submodule in the glusterfs repo.
>    Modify the spec file to create a python-libgfapi package. (Or make it part
>    of the existing python-gluster package.) This way it'll be part of the
>    glusterfs release cycle and development will continue to happen in its
>    external repo.
> 
> 2. Have libgfapi-python maintain its own release numbers and release
>    lifecycle externally. Package it and make it available over pypi so that
>    users can simply do a pip install. This will also allow the Python bindings
>    to work with any glusterfs version or release series (3.7, 3.8, master),
>    which is really nice.

This. We want libgfapi-python to be as independent as possible. The
same goes for Java, PHP and other bindings to libgfapi. With libgfapi
we try very hard not to break existing users; libgfapi-python is one,
just like Samba, NFS-Ganesha, QEMU, etc.

But we also need to provide RPM packages in different distributions so
that users can install and manage the packages with the standard tools.
pip works well for developers and certain environments where versions
are not watched/maintained carefully. pip may also not be installed
everywhere, but we would still like Python applications to be able to
use libgfapi-python. (Many admins are against installing software that
does not come from the standard distribution repositories.)

Users mostly start looking for packages that are available in their
distribution. For Fedora they would do a "dnf search gluster", CentOS
has "yum search gluster" and Debian based distributions use "apt-cache".
Integration in distributions is important, so that we (and the packge
maintainers for those distributions) can test the versions that are
provided. pip does not offer the tight integration with distributions
that I expect from a stable software package.

Thanks,
Niels


> 3. Import the entire libgfapi-python source code into the glusterfs repo and
>    deprecate the libgfapi-python repo. Continue all further development in the
>    glusterfs repo.
> 
> My personal favourite is to make it available over pypi, which will work
> across distros, but pypi doesn't seem to be popular here.
> 
> The libgfapi-python bindings have been tested only against Linux x86-64 and
> Python versions 2.6 and 2.7 in Fedora/CentOS so far. Niels has set up a job
> in the CentOS CI infra that runs the libgfapi-python tests against glusterfs
> nightly builds.
> 
> Your preference and inputs on how we go about packaging libgfapi-python
> will help.
> 
> [1]: https://github.com/gluster/libgfapi-python
> [2]: http://libgfapi-python.rtfd.io
> [3]: http://libgfapi-python.rtfd.io/en/latest/install.html
> 
>  -Prashanth Pai



[Gluster-devel] Packaging libgfapi-python

2016-08-04 Thread Prashanth Pai
Hi all,

The official Python bindings to libgfapi have been around for a while now
in their source repo[1], and the API documentation[2] is also up. However,
they weren't made available over regular release channels until now, and
users had to install them from source (which is very easy[3] for Python
projects).

There are a few ways libgfapi-python can be made available as packages over
yum/pypi:

1. Import the libgfapi-python repo as a git submodule in the glusterfs repo.
   Modify the spec file to create a python-libgfapi package. (Or make it part
   of the existing python-gluster package.) This way it'll be part of the
   glusterfs release cycle and development will continue to happen in its
   external repo.

2. Have libgfapi-python maintain its own release numbers and release lifecycle
   externally. Package it and make it available over pypi so that users can
   simply do a pip install. This will also allow the Python bindings to work
   with any glusterfs version or release series (3.7, 3.8, master), which is
   really nice.

3. Import the entire libgfapi-python source code into the glusterfs repo and
   deprecate the libgfapi-python repo. Continue all further development in the
   glusterfs repo.

My personal favourite is to make it available over pypi, which will work
across distros, but pypi doesn't seem to be popular here.

The libgfapi-python bindings have been tested only against Linux x86-64 and
Python versions 2.6 and 2.7 in Fedora/CentOS so far. Niels has set up a job
in the CentOS CI infra that runs the libgfapi-python tests against glusterfs
nightly builds.

Your preference and inputs on how we go about packaging libgfapi-python
will help.

[1]: https://github.com/gluster/libgfapi-python
[2]: http://libgfapi-python.rtfd.io
[3]: http://libgfapi-python.rtfd.io/en/latest/install.html

 -Prashanth Pai


Re: [Gluster-devel] [Gluster-users] GlusterFS-3.7.14 released

2016-08-04 Thread Pranith Kumar Karampuri
On Thu, Aug 4, 2016 at 11:30 AM, Serkan Çoban  wrote:

> Thanks Pranith,
> I am waiting for RPMs to show, I will do the tests as soon as possible
> and inform you.
>

I guess on 3.7.x the RPMs are not built automatically. Let me find out how
it can be done. I will inform you after finding that out. Give me a day.


>
> On Wed, Aug 3, 2016 at 11:19 PM, Pranith Kumar Karampuri
>  wrote:
> >
> >
> > On Thu, Aug 4, 2016 at 1:47 AM, Pranith Kumar Karampuri
> >  wrote:
> >>
> >>
> >>
> >> On Thu, Aug 4, 2016 at 12:51 AM, Serkan Çoban 
> >> wrote:
> >>>
> >>> I use rpms for installation. Redhat/Centos 6.8.
> >>
> >>
> >> http://review.gluster.org/#/c/15084 is the patch. In some time the rpms
> >> will be built actually.
> >
> >
> > In the same URL above it will actually post the rpms for fedora/el6/el7
> at
> > the end of the page.
> >
> >>
> >>
> >> Use gluster volume set <volname> disperse.shd-max-threads <number-of-threads (range: 1-64)>
> >>
> >> While testing this I thought of ways to decrease the number of crawls as
> >> well. But they are a bit involved. Try to create same set of data and
> see
> >> what is the time it takes to complete heals using number of threads as
> you
> >> increase the number of parallel heals from 1 to 64.
> >>
> >>>
> >>> On Wed, Aug 3, 2016 at 10:16 PM, Pranith Kumar Karampuri
> >>>  wrote:
> >>> >
> >>> >
> >>> > On Thu, Aug 4, 2016 at 12:45 AM, Serkan Çoban  >
> >>> > wrote:
> >>> >>
> >>> >> I prefer 3.7 if it is ok for you. Can you also provide build
> >>> >> instructions?
> >>> >
> >>> >
> >>> > 3.7 should be fine. Do you use rpms/debs/anything-else?
> >>> >
> >>> >>
> >>> >>
> >>> >> On Wed, Aug 3, 2016 at 10:12 PM, Pranith Kumar Karampuri
> >>> >>  wrote:
> >>> >> >
> >>> >> >
> >>> >> > On Thu, Aug 4, 2016 at 12:37 AM, Serkan Çoban
> >>> >> > 
> >>> >> > wrote:
> >>> >> >>
> >>> >> >> Yes, but I can create 2+1(or 8+2) ec using two servers right? I
> >>> >> >> have
> >>> >> >> 26 disks on each server.
> >>> >> >
> >>> >> >
> >>> >> > On which release-branch do you want the patch? I am testing it on
> >>> >> > master-branch now.
> >>> >> >
> >>> >> >>
> >>> >> >>
> >>> >> >> On Wed, Aug 3, 2016 at 9:59 PM, Pranith Kumar Karampuri
> >>> >> >>  wrote:
> >>> >> >> >
> >>> >> >> >
> >>> >> >> > On Thu, Aug 4, 2016 at 12:23 AM, Serkan Çoban
> >>> >> >> > 
> >>> >> >> > wrote:
> >>> >> >> >>
> >>> >> >> >> I have two of my storage servers free, I think I can use them
> >>> >> >> >> for
> >>> >> >> >> testing. Is two server testing environment ok for you?
> >>> >> >> >
> >>> >> >> >
> >>> >> >> > I think it would be better if you have at least 3. You can test
> >>> >> >> > it
> >>> >> >> > with
> >>> >> >> > 2+1
> >>> >> >> > ec configuration.
> >>> >> >> >
> >>> >> >> >>
> >>> >> >> >>
> >>> >> >> >> On Wed, Aug 3, 2016 at 9:44 PM, Pranith Kumar Karampuri
> >>> >> >> >>  wrote:
> >>> >> >> >> >
> >>> >> >> >> >
> >>> >> >> >> > On Wed, Aug 3, 2016 at 6:01 PM, Serkan Çoban
> >>> >> >> >> > 
> >>> >> >> >> > wrote:
> >>> >> >> >> >>
> >>> >> >> >> >> Hi,
> >>> >> >> >> >>
> >>> >> >> >> >> May I ask if multi-threaded self heal for distributed
> >>> >> >> >> >> disperse
> >>> >> >> >> >> volumes
> >>> >> >> >> >> implemented in this release?
> >>> >> >> >> >
> >>> >> >> >> >
> >>> >> >> >> > Serkan,
> >>> >> >> >> > At the moment I am a bit busy with different work,
> Is
> >>> >> >> >> > it
> >>> >> >> >> > possible
> >>> >> >> >> > for you to help test the feature if I provide a patch?
> >>> >> >> >> > Actually
> >>> >> >> >> > the
> >>> >> >> >> > patch
> >>> >> >> >> > should be small. Testing is where lot of time will be spent
> >>> >> >> >> > on.
> >>> >> >> >> >
> >>> >> >> >> >>
> >>> >> >> >> >>
> >>> >> >> >> >> Thanks,
> >>> >> >> >> >> Serkan
> >>> >> >> >> >>
> >>> >> >> >> >> On Tue, Aug 2, 2016 at 5:30 PM, David Gossage
> >>> >> >> >> >>  wrote:
> >>> >> >> >> >> > On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson
> >>> >> >> >> >> >  wrote:
> >>> >> >> >> >> >>
> >>> >> >> >> >> >> On 2/08/2016 5:07 PM, Kaushal M wrote:
> >>> >> >> >> >> >>>
> >>> >> >> >> >> >>> GlusterFS-3.7.14 has been released. This is a regular
> >>> >> >> >> >> >>> minor
> >>> >> >> >> >> >>> release.
> >>> >> >> >> >> >>> The release-notes are available at
> >>> >> >> >> >> >>>
> >>> >> >> >> >> >>>
> >>> >> >> >> >> >>>
> >>> >> >> >> >> >>>
> >>> >> >> >> >> >>>
> >>> >> >> >> >> >>>
> >>> >> >> >> >> >>>
> https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.14.md
> >>> >> >> >> >> >>
> >>> >> >> >> >> >>
> >>> >> >> >> >> >> Thanks Kaushal, I'll check it out
> >>> >> >> >> >> >>
> >>> >> >> >> >> >
> >>> >> >> >> >> > So far on my test box its working as expected.  At least
> >>> >> >> >> >> > the
> >>> >> >> >> >> > issues
> >>> >> >> >> >> > that
> >>> >> >> >> >> > prevented it from running as before have disappeared.
> Will
> >>> >> >> >> >> > need
> >>> >> >> >> >> > to
> >>> >> >> >> >> > see
> >>> >> >> >> >> > how
> >>> >> >> >> >> > my test VM behaves after a few days.
> >>> >>