[Gluster-devel] Gluster Solution for Non Shared Persistent Storage in Docker Container

2016-06-23 Thread Pranith Kumar Karampuri
In case you missed the post on Gluster twitter/facebook,

https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/

We would love to hear your feedback on this.

-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] glusterfs-rpms and glusterfs-rpms-el6 jobs disabled

2016-06-23 Thread Nigel Babu
Hello,

The following jobs have been disabled in Jenkins:

glusterfs-rpms
glusterfs-rpms-el6

I've created the following jobs with Jenkins Job Builder to replace them:

rpms-fedora
rpms-el6

As I continue to work on Jenkins Job Builder, I'll be deactivating more jobs
and creating jobs to replace them. This is the easiest way to fall back in case
I make a mistake in configuring the new jobs.

--
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Reduce memcpy in glfs read and write

2016-06-23 Thread Sachin Pandit
I have submitted the code upstream. Could you please review it?
http://review.gluster.org/#/c/14784/

Thank you,
Sachin Pandit.

-Original Message-
From: Sachin Pandit 
Sent: Thursday, June 23, 2016 9:00 AM
To: 'Raghavendra Gowdappa' ; Mohammed Rafi K C 
; Pranith Kumar Karampuri 
Cc: gluster-devel@gluster.org; Ankireddypalle Reddy 
Subject: RE: [Gluster-devel] Reduce memcpy in glfs read and write

Thank you Raghavendra, Pranith and Rafi.

Raghavendra, I have implemented a similar mechanism to the one you suggested (I
hope I did). The only difference is that I have not changed the existing
pub_glfs_writev; instead, I have introduced a new code path that does a similar
thing using the iobuf pool. As you said, the changes work fine. I was stuck at
the part where I tried to reuse the buffer. After looking at Rafi's comment and
your comment, it is now clear to me that I cannot reuse the buffer. I will send
out the patch to the master branch very shortly.

Thanks,
Sachin Pandit.

-Original Message-
From: Raghavendra Gowdappa [mailto:rgowd...@redhat.com]
Sent: Wednesday, June 22, 2016 1:48 AM
To: Mohammed Rafi K C 
Cc: Sachin Pandit ; Pranith Kumar Karampuri 
; gluster-devel@gluster.org; Ankireddypalle Reddy 

Subject: Re: [Gluster-devel] Reduce memcpy in glfs read and write

Nice to see you :). Welcome again :).

- Original Message -
> From: "Mohammed Rafi K C" 
> To: "Sachin Pandit" , "Pranith Kumar Karampuri" 
> 
> Cc: gluster-devel@gluster.org, "Ankireddypalle Reddy" 
> 
> Sent: Wednesday, June 22, 2016 12:26:03 PM
> Subject: Re: [Gluster-devel] Reduce memcpy in glfs read and write
> 
> Hi Sachin,
> 
> Good to see you again in Gluster-devel.
> 
> I had implemented those APIs I mentioned as a POC. In fact, I
> couldn't push them into master.
> 
> Regarding your questions, my comments are inline.
> 
> On 06/22/2016 05:50 AM, Sachin Pandit wrote:
> 
> 
> 
> 
> 
> Hey Pranith, I am good, I hope you are doing good too.
> 
> Please find the comments inline.
> 
> 
> 
> From: Pranith Kumar Karampuri [ mailto:pkara...@redhat.com ]
> Sent: Tuesday, June 21, 2016 5:58 AM
> To: Sachin Pandit 
> Cc: gluster-devel@gluster.org
> Subject: Re: [Gluster-devel] Reduce memcpy in glfs read and write
> 
> 
> 
> 
> Hey!!
> 
> 
> Hope you are doing good. I took a look at the bt. So when the flush comes,
> write-behind has to flush all the writes down. I see the following
> frame hung in iobref_unref:
> Thread 7 (Thread 0x7fa601a30700 (LWP 16218)):
> #0 0x7fa60cc55225 in pthread_spin_lock () from
> /lib64/libpthread.so.0
> << Does it always hang there?
> 
> -
> 
> >>It does always hang here.
> 
> -
> #1 0x7fa60e1f373e in iobref_unref (iobref=0x19dc7e0) at
> iobuf.c:907
> #2 0x7fa60e246fb2 in args_wipe (args=0x19e70ec) at
> default-args.c:1593
> #3 0x7fa60e1ea534 in call_stub_wipe_args (stub=0x19e709c) at
> call-stub.c:2466
> #4 0x7fa60e1ea5de in call_stub_destroy (stub=0x19e709c) at
> call-stub.c:2482
> 
> 
> Is this on top of the master branch? It seems like we missed an unlock of
> the spin-lock, or the iobref has a junk value which gives the feeling
> that it is in a locked state (maybe a double free?). Do you have any
> extra patches in your repo that make changes in iobuf?
> 
> --
> 
> >>I have implemented a method to reduce memcpy in libgfapi (my patch
> >>is on top of the master branch) by making use of a buffer from the iobuf
> >>pool and passing that buffer to the application. However, I have not made
> >>any changes to the iobuf core. I don't think a double free is
> >>happening anywhere in the code (I did check this using logs).
> 
> 
> 
> Method that I have implemented:
> 
> 1) The application asks for a buffer of a specific size, and the buffer is
> allocated from the iobuf pool.
> 
> 2) The buffer is passed on to the application, and the application writes
> the data into that buffer.
> 
> 3) The buffer with the data in it is passed from the application to libgfapi
> and the underlying translators (no memcpy in glfs_write).
> 
> 
> 
> I have a couple of questions and observations:
> 
> 
> 
> Observations:
> 
> --
> 
> 1) If I get a fresh buffer for every write, then I don't see any
> problem. All the writes go through.
> 
> 2) If I try to reuse the buffer for consecutive writes, then I am
> seeing a hang in flush.
> 
> 
> 
> Question 1: Is it fine if I reuse the buffer for consecutive writes?
> 
> The answer is no, if io-cache and write-behind are enabled. In the case of
> io-cache, it just takes a ref on the buffer to store it in the cache,
> which means io-cache is still using the buffer.
> 
> In the case of write-behind, if it decides to aggregate multiple write
> requests, it will likewise take a ref on the iobuf and will
> lie to the application that the write is complete.

That's correct. You cannot use the same iobuf for more than one write call. 
What you can do is:

1. Application requests an iobuf from gfapi. gfapi gets an iobuf (probably does 
an iobuf_ref).

Re: [Gluster-devel] Documentation @ readthedocs.org - search broken

2016-06-23 Thread Amye Scavarda
On Thu, Jun 23, 2016 at 1:43 AM, André Bauer  wrote:
> Imho this is still a problem and should finally be fixed.
>
> I just wanted to search for "libgfapi" and all search results are
> leading to "SORRY - This page does not exist yet" again.
>
> How is somebody supposed to use GlusterFS when getting information is
> this hard?
>
> Maybe readthedocs.io is not the right service if it just doesn't work.
>
> Regards
> André

André,
We definitely appreciate your feedback on this; unfortunately, it is
a bug in ReadTheDocs that we're working on with them.

You may try using Google with 'site:gluster.readthedocs.io' to get the
results you're looking for.
- amye

>
> Am 19.01.2016 um 11:18 schrieb André Bauer:
>> Hi List,
>>
>> it seems to me the documentation is somewhat broken at the moment.
>>
>> If you go to https://gluster.readthedocs.org/en/latest/ and search for
>> "sharding" you get 3 links. All of them are broken:
>>
>> "SORRY - This page does not exist yet"
>>
>> If you search for something in Google (e.g. "glusters sharding"), the
>> first link points you to the old Gluster documentation, which should
>> carry a note that the docs are now at readthedocs.org.
>>
>>
>
>
> --
> Mit freundlichen Grüßen
> André Bauer
>
> MAGIX Software GmbH
> André Bauer
> Administrator
> August-Bebel-Straße 48
> 01219 Dresden
> GERMANY
>
> tel.: 0351 41884875
> e-mail: aba...@magix.net
> www.magix.com 
>
> Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
> Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205
>
> --
> The information in this email is intended only for the addressee named
> above. Access to this email by anyone else is unauthorized. If you are
> not the intended recipient of this message any disclosure, copying,
> distribution or any action taken in reliance on it is prohibited and
> may be unlawful. MAGIX does not warrant that any attachments are free
> from viruses or other defects and accepts no liability for any losses
> resulting from infected email transmissions. Please note that any
> views expressed in this email may be those of the originator and do
> not necessarily represent the agenda of the company.
> --
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel



-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Reduce memcpy in glfs read and write

2016-06-23 Thread Sachin Pandit
Thank you Raghavendra, Pranith and Rafi.

Raghavendra, I have implemented a similar mechanism to the one you suggested (I
hope I did). The only difference is that I have not changed the existing
pub_glfs_writev; instead, I have introduced a new code path that does a similar
thing using the iobuf pool. As you said, the changes work fine. I was stuck at
the part where I tried to reuse the buffer. After looking at Rafi's comment and
your comment, it is now clear to me that I cannot reuse the buffer. I will send
out the patch to the master branch very shortly.

Thanks,
Sachin Pandit.

-Original Message-
From: Raghavendra Gowdappa [mailto:rgowd...@redhat.com] 
Sent: Wednesday, June 22, 2016 1:48 AM
To: Mohammed Rafi K C 
Cc: Sachin Pandit ; Pranith Kumar Karampuri 
; gluster-devel@gluster.org; Ankireddypalle Reddy 

Subject: Re: [Gluster-devel] Reduce memcpy in glfs read and write

Nice to see you :). Welcome again :).

- Original Message -
> From: "Mohammed Rafi K C" 
> To: "Sachin Pandit" , "Pranith Kumar Karampuri" 
> 
> Cc: gluster-devel@gluster.org, "Ankireddypalle Reddy" 
> 
> Sent: Wednesday, June 22, 2016 12:26:03 PM
> Subject: Re: [Gluster-devel] Reduce memcpy in glfs read and write
> 
> Hi Sachin,
> 
> Good to see you again in Gluster-devel.
> 
> I had implemented those APIs I mentioned as a POC. In fact, I
> couldn't push them into master.
> 
> Regarding your questions, my comments are inline.
> 
> On 06/22/2016 05:50 AM, Sachin Pandit wrote:
> 
> 
> 
> 
> 
> Hey Pranith, I am good, I hope you are doing good too.
> 
> Please find the comments inline.
> 
> 
> 
> From: Pranith Kumar Karampuri [ mailto:pkara...@redhat.com ]
> Sent: Tuesday, June 21, 2016 5:58 AM
> To: Sachin Pandit 
> Cc: gluster-devel@gluster.org
> Subject: Re: [Gluster-devel] Reduce memcpy in glfs read and write
> 
> 
> 
> 
> Hey!!
> 
> 
> Hope you are doing good. I took a look at the bt. So when the flush comes,
> write-behind has to flush all the writes down. I see the following
> frame hung in iobref_unref:
> Thread 7 (Thread 0x7fa601a30700 (LWP 16218)):
> #0 0x7fa60cc55225 in pthread_spin_lock () from 
> /lib64/libpthread.so.0
> << Does it always hang there?
> 
> -
> 
> >>It does always hang here.
> 
> -
> #1 0x7fa60e1f373e in iobref_unref (iobref=0x19dc7e0) at 
> iobuf.c:907
> #2 0x7fa60e246fb2 in args_wipe (args=0x19e70ec) at 
> default-args.c:1593
> #3 0x7fa60e1ea534 in call_stub_wipe_args (stub=0x19e709c) at
> call-stub.c:2466
> #4 0x7fa60e1ea5de in call_stub_destroy (stub=0x19e709c) at
> call-stub.c:2482
> 
> 
> Is this on top of the master branch? It seems like we missed an unlock of
> the spin-lock, or the iobref has a junk value which gives the feeling
> that it is in a locked state (maybe a double free?). Do you have any
> extra patches in your repo that make changes in iobuf?
> 
> --
> 
> >>I have implemented a method to reduce memcpy in libgfapi (my patch
> >>is on top of the master branch) by making use of a buffer from the iobuf
> >>pool and passing that buffer to the application. However, I have not made
> >>any changes to the iobuf core. I don't think a double free is
> >>happening anywhere in the code (I did check this using logs).
> 
> 
> 
> Method that I have implemented:
> 
> 1) The application asks for a buffer of a specific size, and the buffer is
> allocated from the iobuf pool.
> 
> 2) The buffer is passed on to the application, and the application writes
> the data into that buffer.
> 
> 3) The buffer with the data in it is passed from the application to libgfapi
> and the underlying translators (no memcpy in glfs_write).
> 
> 
> 
> I have a couple of questions and observations:
> 
> 
> 
> Observations:
> 
> --
> 
> 1) If I get a fresh buffer for every write, then I don't see any
> problem. All the writes go through.
> 
> 2) If I try to reuse the buffer for consecutive writes, then I am
> seeing a hang in flush.
> 
> 
> 
> Question 1: Is it fine if I reuse the buffer for consecutive writes?
> 
> The answer is no, if io-cache and write-behind are enabled. In the case of
> io-cache, it just takes a ref on the buffer to store it in the cache,
> which means io-cache is still using the buffer.
> 
> In the case of write-behind, if it decides to aggregate multiple write
> requests, it will likewise take a ref on the iobuf and will
> lie to the application that the write is complete.

That's correct. You cannot use the same iobuf for more than one write call. 
What you can do is:

1. Application requests an iobuf from gfapi. gfapi gets an iobuf (probably does 
an iobuf_ref).
2. Application does the write using pub_glfs_writev, with the iov populated
with memory belonging to the iobuf. But the current pub_glfs_writev doesn't
take an iobuf as an argument, so you need to change it to take an iobuf/iobref
as an argument so that the application doesn't free the write payload while it
is still being cached in write-behind. In other words a
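
To make steps 1 and 2 concrete, here is a minimal application-side sketch.
glfs_request_buffer() and glfs_writev_with_buffer() are made-up names standing
in for the proposed calls and are not existing libgfapi functions; the
iobuf_get2() mention is only an assumption about how gfapi would obtain the
buffer internally. glfs_open() and glfs_close() are the only real gfapi APIs
used here:

/*
 * Rough sketch only. glfs_request_buffer() and glfs_writev_with_buffer() are
 * hypothetical names for the proposed zero-copy calls; they do not exist in
 * libgfapi today. glfs_open()/glfs_close() are real gfapi APIs.
 */
#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <string.h>
#include <sys/types.h>

/* Hypothetical prototypes for the proposed calls. */
void *glfs_request_buffer(glfs_t *fs, size_t size);
ssize_t glfs_writev_with_buffer(glfs_fd_t *fd, void *buf, size_t len, int flags);

int zero_copy_write(glfs_t *fs, const char *path, const char *data, size_t len)
{
    glfs_fd_t *fd = glfs_open(fs, path, O_WRONLY);
    if (!fd)
        return -1;

    /* 1. Ask gfapi for a buffer of the required size. Internally it would
     *    come from the iobuf pool (iobuf_get2() or similar) with a ref held. */
    void *buf = glfs_request_buffer(fs, len);
    if (!buf) {
        glfs_close(fd);
        return -1;
    }

    /* 2. The application fills the buffer directly; this is the only copy. */
    memcpy(buf, data, len);

    /* 3. Submit the write. gfapi hands the same iobuf/iobref down the
     *    translator stack, so there is no extra memcpy in glfs_write.
     *    Per the discussion above, the buffer must NOT be reused for a
     *    second write: io-cache and write-behind may still hold refs on it.
     *    A fresh buffer has to be requested for every write. */
    ssize_t ret = glfs_writev_with_buffer(fd, buf, len, 0);

    glfs_close(fd);
    return (ret == (ssize_t)len) ? 0 : -1;
}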

[Gluster-devel] performance issues Manoj found in EC testing

2016-06-23 Thread Pranith Kumar Karampuri
hi Xavi,
  Meet Manoj from the performance team at Red Hat. He has been testing EC
performance in his stretch clusters. He found some interesting things we
would like to share with you.

1) When we perform multiple streams of big-file writes (12 parallel dds, I
think), he found one thread to be always hot (99% CPU, always). He was asking
me whether the fuse_reader thread does any extra processing in EC compared to
replicate. Initially I thought it would just take the lock and the epoll
threads would perform the encoding, but I later realized that once we have the
lock and version details, subsequent writes on the file are encoded in the
same thread that delivers them to EC. write-behind could play a role and make
the writes come to EC in an epoll thread, but we consistently saw just one hot
thread, not multiple threads. We will be able to confirm this in tomorrow's
testing.

2) One more thing, which Raghavendra G found: our current epoll implementation
doesn't let other epoll threads pick up messages from a socket while one
thread is processing a message from that socket. In EC's case that processing
can be encoding a write or decoding a read. This prevents replies to
operations on different files from being processed in parallel. He thinks this
can be fixed for 3.9.

Manoj will be raising a bug to gather all his findings. I just wanted to
introduce him and let you know about the interesting things he is finding
before you see the bug :-).
-- 
Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Stale file handle when accessing shared drive,

2016-06-23 Thread Chaloulos, Klearchos (Nokia - GR/Athens)
Hello,

We use glusterfs version 3.6.9 as a shared storage solution. The Linux kernel 
version is 4.1.20. Our setup consists of replica 2 volumes, no distribute. We 
have seen that occasionally readdir operations return the "Stale file handle" 
error. Below are the client logs:

[2016-06-15 09:29:59.717521] W [fuse-bridge.c:1001:fuse_fd_cbk] 
0-glusterfs-fuse: 598: OPENDIR() /folder1/folder2/folder3/folder4/folder5 => -1 
(Stale file handle)
[2016-06-15 09:29:59.717851] W [defaults.c:2177:default_releasedir] (--> 
/usr/lib64/glusterfs/libglusterfs.so.0(_gf_log_callingfn+0x218)[0x7f28346e9bdd] 
(--> 
/usr/lib64/glusterfs/libglusterfs.so.0(default_releasedir+0x44)[0x7f28347035d4] 
(--> /usr/lib64/glusterfs/libglusterfs.so.0(+0x5c6a1)[0x7f28347236a1] (--> 
/usr/lib64/glusterfs/libglusterfs.so.0(fd_unref+0x9d)[0x7f28347238c3] (--> 
/usr/lib64/glusterfs/glusterfs/3.6.9/xlator/protocol/client.so(client_local_wipe+0x56)[0x7f282bdd2549]
 ) 0-fuse: xlator does not implement releasedir_cbk

The issue is temporary. We have created a script that continuously does an ls 
on the directory, and the error appeared for 30 seconds in one case. For these 
30 seconds, the ls command showed the following output:
ls: cannot open directory '/folder1/folder2/folder3/folder4/folder5 ': Stale 
file handle
After 30 seconds, no error appeared and the directory contents were listed 
normally.
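
For illustration, a minimal C sketch of retrying the listing while the handle
is stale, assuming the error really is transient as observed above. The path,
the one-second back-off and the retry count are illustrative only:

#include <dirent.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Retry the directory listing while opendir() fails with ESTALE.
 * Illustrative only: real code would bound the total wait differently. */
static int list_dir_with_retry(const char *path, int max_retries)
{
    for (int attempt = 0; attempt <= max_retries; attempt++) {
        DIR *dir = opendir(path);
        if (!dir) {
            if (errno == ESTALE) {
                fprintf(stderr, "attempt %d: %s: %s\n",
                        attempt, path, strerror(errno));
                sleep(1);          /* back off; the handle may recover */
                continue;
            }
            return -1;             /* some other, non-transient error */
        }

        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL)
            printf("%s\n", entry->d_name);

        closedir(dir);
        return 0;
    }
    return -1;                     /* still stale after all retries */
}

int main(void)
{
    /* The path mirrors the one from the log above; adjust as needed. */
    return list_dir_with_retry("/folder1/folder2/folder3/folder4/folder5", 40);
}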

Do you think this is related to the bugs below:
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1041109
[2] 
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/3.0_Update_4_Release_Notes/chap-Known_Issues.html

Are the bugs above still applicable in the 3.6.9 release? In [2] there is a 
suggested workaround, to use "gluster volume set VOLNAME quick-read off". Do 
you think it will fix the stale file handle issue? Won't this cause a decrease 
in performance?

On a more general level, what can cause the "Stale file handle"? In this link
[3] http://www.cyberciti.biz/tips/nfs-stale-file-handle-error-and-solution.html
it says that it occurs when one client holds an active handle (open file 
descriptor?) to a file/directory that is deleted by another client or directly 
on the server. But in our case the issue is temporary, so it doesn't look like 
the problem is deletion by another client.

Best regards,

Klearchos






___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Documentation @ readthedocs.org - search broken

2016-06-23 Thread André Bauer
Imho this is still a problem and should finally be fixed.

I just wanted to search for "libgfapi" and all search results are
leading to "SORRY - This page does not exist yet" again.

How is somebody supposed to use GlusterFS when getting information is
this hard?

Maybe readthedocs.io is not the right service if it just doesn't work.

Regards
André

Am 19.01.2016 um 11:18 schrieb André Bauer:
> Hi List,
> 
> it seems to me the documentation is somewhat broken at the moment.
> 
> If you go to https://gluster.readthedocs.org/en/latest/ and search for
> "sharding" you get 3 links. All of them are broken:
> 
> "SORRY - This page does not exist yet"
> 
> If you search for something in Google (e.g. "glusters sharding"), the
> first link points you to the old Gluster documentation, which should
> carry a note that the docs are now at readthedocs.org.
> 
> 


-- 
Mit freundlichen Grüßen
André Bauer

MAGIX Software GmbH
André Bauer
Administrator
August-Bebel-Straße 48
01219 Dresden
GERMANY

tel.: 0351 41884875
e-mail: aba...@magix.net
www.magix.com 

Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

--
The information in this email is intended only for the addressee named
above. Access to this email by anyone else is unauthorized. If you are
not the intended recipient of this message any disclosure, copying,
distribution or any action taken in reliance on it is prohibited and
may be unlawful. MAGIX does not warrant that any attachments are free
from viruses or other defects and accepts no liability for any losses
resulting from infected email transmissions. Please note that any
views expressed in this email may be those of the originator and do
not necessarily represent the agenda of the company.
--
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel