Re: [Gluster-devel] problem with recent change to glfs_realpath

2016-10-20 Thread Michael Adam
On 2016-10-21 at 01:15 +0200, Michael Adam wrote:
> Hi all,
> 
> Anoop has brought to my attention that
> glfs_realpath was recently changed in an incompatible way:
> 
> Previously, glfs_realpath returned an allocated string
> that the caller had to release with 'free'. Now, after
> the change, calling free on the returned string segfaults,
> and the caller needs to call glfs_free instead.

I meant to give a reference:

85e959052148ec481823d55c8b91cdee36da2b43 (master commit)

https://review.gluster.org/#/c/15332/



[Gluster-devel] problem with recent change to glfs_realpath

2016-10-20 Thread Michael Adam
Hi all,

Anoop has brought to my attention that
glfs_realpath was recently changed in an incompatible way:

Previously, glfs_realpath returned an allocated string
that the caller had to release with 'free'. Now, after
the change, calling free on the returned string segfaults,
and the caller needs to call glfs_free instead.
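
To illustrate the incompatibility, a minimal sketch (assuming an
already-initialized glfs_t *fs handle; the path is made up):

  char *rp = glfs_realpath(fs, "/some/path", NULL);
  if (rp) {
      /* Before the change, free(rp) was correct here.
       * After the change, free(rp) segfaults; the string must be
       * released with glfs_free(rp) instead. */
      glfs_free(rp);
  }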

That change makes no sense, imho, because the result of
a realpath implementation may be used by the application
that links against libgfapi, outside the scope of the code
that actually calls libgfapi. E.g. in Samba, the gfapi calls
are hidden behind the VFS API in the Gluster backend, but the
realpath result is used outside the VFS module. I think this
is quite a normal use case, and hence glfs_realpath should
behave as one would expect from a realpath implementation
(and as described in the realpath(3) man page): return a
string that can be freed with 'free'...

For Samba, after thorough discussion with Anoop and Rajesh,
we have proposed a fix/workaround: use the variant of
glfs_realpath that takes a pre-allocated result buffer
(see the sketch below). This makes us independent of the
allocation method used by glfs_realpath. But we wanted to
point this out to the list, since it is a potential problem
for other users, too.
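
For reference, a minimal sketch of that workaround (volume name, server
name and path are made up; error handling trimmed). Passing a non-NULL
result buffer makes glfs_realpath fill the caller's buffer,
realpath(3)-style, so no libgfapi-allocated string ever crosses the API
boundary:

  #include <glusterfs/api/glfs.h>
  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
      glfs_t *fs = glfs_new("myvol");  /* hypothetical volume name */
      if (!fs)
          return 1;
      glfs_set_volfile_server(fs, "tcp", "gluster-server", 24007);
      if (glfs_init(fs) != 0)
          return 1;

      char resolved[PATH_MAX];
      /* Caller-supplied buffer: the result's lifetime and release are
       * entirely under the application's control. */
      if (glfs_realpath(fs, "/some/path", resolved) != NULL)
          printf("resolved: %s\n", resolved);

      glfs_fini(fs);
      return 0;
  }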

Cheers - Michael



Re: [Gluster-devel] Need help in understanding IOZone config file

2016-10-20 Thread Menaka Mohan
Hello,


Thank you so much for helping me understand it. I read more about it and,
with the help of other community members, submitted a pull request to
clarify it in the Gluster docs [1], which was accepted and merged.


Since each record in the config file represents a thread in IOZone
terminology, we may need as many records as there are threads defined,
spread across the clients in our setup.


[1] : 
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Performance%20Testing/


Thanks and Regards,

Menaka M


From: Dustin Black 
Sent: Monday, October 17, 2016 8:16:27 PM
To: Menaka Mohan
Cc: gluster-devel@gluster.org
Subject: Re: [Gluster-devel] Need help in understanding IOZone config file

Hi Menaka,

We test Gluster with something like this, using iozone with 4GB file sizes.

iozone -t 6 -s 4g -r 4m -+m clients.ioz -c -e -+z -+n -i 0 && \
iozone -t 6 -s 4g -r 4m -+m clients.ioz -c -e -+z -+n -i 1


Where the clients.ioz file might look like the following for 6 clients/workers:

c0 /path/to/gluster/client/mount /path/to/binary/iozone
c1 /path/to/gluster/client/mount /path/to/binary/iozone
c2 /path/to/gluster/client/mount /path/to/binary/iozone
c3 /path/to/gluster/client/mount /path/to/binary/iozone
c4 /path/to/gluster/client/mount /path/to/binary/iozone
c5 /path/to/gluster/client/mount /path/to/binary/iozone


Dustin Black, RHCA
Senior Architect, Software-Defined Storage
Red Hat, Inc.


On Mon, Oct 10, 2016 at 3:48 PM, Menaka Mohan wrote:

Hi,


I am Menaka M. I am new to this open source world. Kindly help me with the 
following query.


I have set up the Gluster development environment with two servers and one
client. I am trying to run the basic bench test on the Gluster cluster from
this GitHub repo. I also have IOZone installed. In that, how do I generate
the clients.ioz file (prerequisite 3)? Does that refer to the file containing
(client_name  work_dir  path_to_IOZone_on_client)?


I have read multiple blogs on how to analyze IOZone results, and also the
performance testing section in the docs. Kindly help me resolve this
confusion. Apologies if I have asked a very basic thing; I will learn quickly.


Regards,

Menaka M


Re: [Gluster-devel] Possible race condition bug with tiered volume

2016-10-20 Thread Dan Lambright
Dustin,

Your Python code looks fine to me... I've been in the Ceph C++ weeds lately;
I kinda miss Python ;)

If I run back-to-back smallfile "create" operations, then on the second
smallfile run, I consistently see:

0.00% of requested files processed, minimum is  70.00
at least one thread encountered error, test may be incomplete

Is this what you get? We can follow up off the mailing list.

Dan

This is glusterfs 3.7.15 built on Oct 20 2016, with two clients running
smallfile against a tiered volume (using a ram disk as the hot tier and
JBOD cold disks; volume info copied below) on Fedora 23.

./smallfile_cli.py --top /mnt/p66p67 --host-set gprfc066,gprfc067 --threads 8 --files 5000 --file-size 64 --record-size 64 --fsync N --operation read

volume - 

Status: Started
Number of Bricks: 28
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: gprfs020:/home/ram 
Brick2: gprfs019:/home/ram 
Brick3: gprfs018:/home/ram 
Brick4: gprfs017:/home/ram 
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (8 + 4) = 24
Brick5: gprfs017:/t0
Brick6: gprfs018:/t0
Brick7: gprfs019:/t0
Brick8: gprfs020:/t0
Brick9: gprfs017:/t1
Brick10: gprfs018:/t1
Brick11: gprfs019:/t1
Brick12: gprfs020:/t1
Brick13: gprfs017:/t2
Brick14: gprfs018:/t2
Brick15: gprfs019:/t2
Brick16: gprfs020:/t2
Brick17: gprfs017:/t3
Brick18: gprfs018:/t3
Brick19: gprfs019:/t3
Brick20: gprfs020:/t3
Brick21: gprfs017:/t4
Brick22: gprfs018:/t4
Brick23: gprfs019:/t4
Brick24: gprfs020:/t4
Brick25: gprfs017:/t5
Brick26: gprfs018:/t5
Brick27: gprfs019:/t5
Brick28: gprfs020:/t5
Options Reconfigured:
cluster.tier-mode: cache   
features.ctr-enabled: on   
performance.readdir-ahead: on


- Original Message -
> From: "Dustin Black" 
> To: "Dan Lambright" 
> Cc: "Milind Changire" , "Annette Clewett" 
> , gluster-devel@gluster.org
> Sent: Wednesday, October 19, 2016 3:23:04 PM
> Subject: Re: [Gluster-devel] Possible race condition bug with tiered volume
> 
> # gluster --version
> glusterfs 3.7.9 built on Jun 10 2016 06:32:42
> 
> 
> Try not to make fun of my Python, but I was able to make a small
> modification to the sync_files.py script from smallfile and at least
> enable my team to move on with testing. It's terribly hacky and ugly, but
> it works around the problem, which I am pretty convinced is a Gluster bug
> at this point.
> 
> 
> # diff bin/sync_files.py.orig bin/sync_files.py
> 6a7,8
> > import errno
> > import binascii
> 27c29,40
> < shutil.rmtree(master_invoke.network_dir)
> ---
> > try:
> > shutil.rmtree(master_invoke.network_dir)
> > except OSError as e:
> > err = e.errno
> > if err != errno.EEXIST:
> > # workaround for possible bug in Gluster
> > if err != errno.ENOTEMPTY:
> > raise e
> > else:
> > print('saw ENOTEMPTY on stonewall, moving shared directory')
> > ext = str(binascii.b2a_hex(os.urandom(15)))
> > shutil.move(master_invoke.network_dir, master_invoke.network_dir + ext)
> 
> 
> Dustin Black, RHCA
> Senior Architect, Software-Defined Storage
> Red Hat, Inc.
> (o) +1.212.510.4138  (m) +1.215.821.7423
> dus...@redhat.com
> 
> 
> On Tue, Oct 18, 2016 at 7:09 PM, Dustin Black  wrote:
> 
> > Dang. I always think I get all the detail and inevitably leave out
> > something important. :-/
> >
> > I'm mobile and don't have the exact version in front of me, but this is
> > recent if not latest RHGS on RHEL 7.2.
> >
> >
> > On Oct 18, 2016 7:04 PM, "Dan Lambright"  wrote:
> >
> >> Dustin,
> >>
> >> What code level? I often run smallfile on upstream code with tiered
> >> volumes and have not seen this.
> >>
> >> Sure, one of us will get back to you.
> >>
> >> Unfortunately, Gluster has a lot of protocol overhead (LOOKUPs), which
> >> overwhelms the boost in transfer speeds you get for small files. A
> >> presentation at the Berlin Gluster summit evaluated this. The expectation
> >> is that md-cache will go a long way towards helping with that before too
> >> long.
> >>
> >> Dan
> >>
> >>
> >>
> >> - Original Message -
> >> > From: "Dustin Black" 
> >> > To: gluster-devel@gluster.org
> >> > Cc: "Annette Clewett" 
> >> > Sent: Tuesday, October 18, 2016 4:30:04 PM
> >> > Subject: [Gluster-devel] Possible race condition bug with tiered volume
> >> >
> >> > I have a 3x2 hot tier on NVMe drives with a 3x2 cold tier on RAID6 drives.
> >> >
> >> > # gluster vol info 1nvme-distrep3x2
> >> > Volume Name: 1nvme-distrep3x2
> >> > Type: Tier
> >> > Volume ID: 21e3fc14-c35c-40c5-8e46-c258c1302607
> >> > Status: Started
> >> > Number of Bricks: 12
> >> > Transport-type: tcp
> >> > Hot Tier :
> >> > Hot Tier Type : Distributed-Replicate
> >> > Number of Bricks: 

Re: [Gluster-devel] New style community meetings - No more status updates

2016-10-20 Thread Amye Scavarda
On Thu, Oct 20, 2016 at 7:06 AM, Kaushal M  wrote:

> Hi All,
>
> Our weekly community meetings have become mainly one hour of status
> updates. This just drains the life out of the meeting, and doesn't
> encourage new attendees to speak up.
>
> Let's try and change this. For the next meeting, let's try skipping
> updates altogether and instead just dive into the 'Open floor' part
> of the meeting.
>
> Let's have the updates to the regular topics be provided by the
> regular owners before the meeting. This could either be through
> sending out emails to the mailing lists, or updates entered into the
> meeting etherpad[1]. As the host, I'll make sure to link to these
> updates when the meeting begins, and in the meeting minutes. People
> can view these updates later in their own time. People who need to
> provide updates on action items (AIs) can just update the etherpad[1];
> they will be visible from there.
>
> Now let's move on to why I addressed this mail to this large and specific
> set of people. The people who have been directly addressed are the
> owners of the regular topics. You all are expected, before the next
> meeting, to either,
>  - Send out an update on the status of the topic you are responsible
> for to the mailing lists, and then link to it on the etherpad,
>  - or, provide your updates directly in the etherpad.
> Please make sure you do this without fail.
> If you do have anything to discuss, add it to the "Open floor" section.
> Also, if I've missed out anyone in the addressed list, please make
> sure they get this message too.
>
> Anyone else who wants to share their updates, add it to the 'Other
> updates' section.
>
> Everyone else, go ahead and add anything you want to ask to the "Open
> floor" section. Ensure to have your name with the topic you add
> (etherpad colours are not reliable), and attend the meeting next week.
> When your topic comes up, you'll have the floor.
>
> I hope that this new format helps make our meetings more colourful and
> lively.
>
> As always, our community meetings will be held every Wednesday at
> 1200UTC in #gluster-meeting on Freenode.
> See you all there.
>
> ~kaushal
>
> [1]: https://public.pad.fsfe.org/p/gluster-community-meetings
>

I really like this idea and am all in favor of color + liveliness.
Let's give this new format three weeks or so, and we'll review around
November 9th to see if we like this experiment.
Fair?
-- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead

[Gluster-devel] New style community meetings - No more status updates

2016-10-20 Thread Kaushal M
Hi All,

Our weekly community meetings have become mainly one hour of status
updates. This just drains the life out of the meeting, and doesn't
encourage new attendees to speak up.

Let's try and change this. For the next meeting, let's try skipping
updates altogether and instead just dive into the 'Open floor' part
of the meeting.

Let's have the updates to the regular topics be provided by the
regular owners before the meeting. This could either be through
sending out emails to the mailing lists, or updates entered into the
meeting etherpad[1]. As the host, I'll make sure to link to these
updates when the meeting begins, and in the meeting minutes. People
can view these updates later in their own time. People who need to
provide updates on action items (AIs) can just update the etherpad[1];
they will be visible from there.

Now let's move on to why I addressed this mail to this large and specific
set of people. The people who have been directly addressed are the
owners of the regular topics. You all are expected, before the next
meeting, to either,
 - Send out an update on the status of the topic you are responsible
for to the mailing lists, and then link to it on the etherpad,
 - or, provide your updates directly in the etherpad.
Please make sure you do this without fail.
If you do have anything to discuss, add it to the "Open floor" section.
Also, if I've missed out anyone in the addressed list, please make
sure they get this message too.

Anyone else who wants to share their updates, add it to the 'Other
updates' section.

Everyone else, go ahead and add anything you want to ask to the "Open
floor" section. Ensure to have your name with the topic you add
(etherpad colours are not reliable), and attend the meeting next week.
When your topic comes up, you'll have the floor.

I hope that this new format helps make our meetings more colourful and lively.

As always, our community meetings will be held every Wednesday at
1200UTC in #gluster-meeting on Freenode.
See you all there.

~kaushal

[1]: https://public.pad.fsfe.org/p/gluster-community-meetings


[Gluster-devel] GoGFAPI - Go bindings to libgfapi - Now under a Gluster project

2016-10-20 Thread Kaushal M
Hi All,

The GoGFAPI Go package is now a Gluster project [1]!

I created the github.com/kshlm/gogfapi/gfapi package over 3 years
ago as a project for learning Go.
Since then, the project has been moving slowly, and it has found some
users and contributors. There are still TODOs to be done [2], issues to
be fixed [3], and the package also needs to be updated to support the
newer APIs introduced into libgfapi since then.

With this move to being a Gluster project, I hope to revive activity
around it and actually get it to completion and a production-ready
state. So please go ahead and use the package, test it, file issues,
pick up TODOs/issues to fix, write tests, etc. Any sort of contribution
is welcome.

Let's try to make this package better together.

Cheers!
~kaushal

[1]: https://github.com/gluster/gogfapi
[2]: https://github.com/gluster/gogfapi/blob/master/TODO.md
[3]: https://github.com/gluster/gogfapi/issues


[Gluster-devel] Container Repo Change + > 50K downloads of Gluster Container images

2016-10-20 Thread Humble Devassy Chirammal
Hi All,

We have kept our official Gluster container images on Docker Hub for the
CentOS and Fedora distros for some time now:

https://hub.docker.com/r/gluster/gluster-centos/
https://hub.docker.com/r/gluster/gluster-fedora/


I have seen a massive increase in downloads of these container images over
the past few months. This is indeed a good sign. :) It seems that we will
be crossing 100k+ downloads soon. :)

As a side note, to address some copyright issues, I have renamed our source
container repo from https://github.com/gluster/docker to
https://github.com/gluster/gluster-containers. Whoever has forked this repo
or is contributing to it should please take note of this change.

Please let us know if you have any queries about this.

--Humble